corpusid: int64 (110 to 268M)
title: string (lengths 0 to 8.56k)
abstract: string (lengths 0 to 18.4k)
citations: sequence (lengths 0 to 142)
full_paper: string (lengths 0 to 635k)
14,284,100
Discriminative vs. Generative Approaches in Semantic Role Labeling
This paper describes the two algorithms we developed for the CoNLL 2008 Shared Task "Joint learning of syntactic and semantic dependencies". Both algorithms start by parsing the sentence with the same syntactic parser. The first algorithm uses machine learning methods to identify the semantic dependencies in four stages: identification and labeling of predicates, and identification and labeling of arguments. The second algorithm uses a generative probabilistic model, choosing the semantic dependencies that maximize the probability with respect to the model. A hybrid algorithm combining the best stages of the two algorithms attains 86.62% labeled syntactic attachment accuracy, 73.24% labeled semantic dependency F1 and 79.93% labeled macro F1 score for the combined WSJ and Brown test sets.
[ 62182406, 6534839 ]
Discriminative vs. Generative Approaches in Semantic Role Labeling

Deniz Yuret (dyuret@ku.edu.tr), Mehmet Ali Yatbaz (myatbaz@ku.edu.tr), Ahmet Engin Ural (aural@ku.edu.tr)
Koç University

CoNLL 2008: Proceedings of the 12th Conference on Computational Natural Language Learning, Manchester, August 2008.

Abstract

This paper describes the two algorithms we developed for the CoNLL 2008 Shared Task "Joint learning of syntactic and semantic dependencies". Both algorithms start by parsing the sentence with the same syntactic parser. The first algorithm uses machine learning methods to identify the semantic dependencies in four stages: identification and labeling of predicates, and identification and labeling of arguments. The second algorithm uses a generative probabilistic model, choosing the semantic dependencies that maximize the probability with respect to the model. A hybrid algorithm combining the best stages of the two algorithms attains 86.62% labeled syntactic attachment accuracy, 73.24% labeled semantic dependency F1 and 79.93% labeled macro F1 score for the combined WSJ and Brown test sets [1].

[1] These numbers are slightly higher than the official results due to a small bug in our submission.

(c) 2008. Licensed under the Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported license (http://creativecommons.org/licenses/by-nc-sa/3.0/). Some rights reserved.

1 Introduction

In this paper we describe the system we developed for the CoNLL 2008 Shared Task (Surdeanu et al., 2008). Section 2 describes our approach for identifying syntactic dependencies. For semantic role labeling (SRL), we pursued two independent approaches. Section 3 describes our first approach, where we treated predicate identification and labeling, and argument identification and labeling, as four separate machine learning problems. The final program consists of four stages, each stage taking the answers from the previous stage as given and performing its own identification or labeling task based on a model generated from the training set. Section 4 describes our second approach, where we used a generative model based on the joint distribution of the predicate, the arguments, their labels, and the syntactic dependencies connecting them. Section 5 summarizes our results and suggests possible improvements.

2 Syntactic dependencies

We used a non-projective dependency parser based on spanning tree algorithms. The parameters were determined based on the experimental results of the English task in (McDonald et al., 2005), i.e. we used projective parsing and a first order feature set during training. Due to the new representation of hyphenated words in both training and testing data and the absence of the gold part of speech (GPOS) column in the test data, the format of the CoNLL08 shared task is slightly different from the format of the CoNLL05 shared task, which is supported by McDonald's parser. We reformatted the data accordingly. The resulting labeled attachment score on the test set is 87.39% for WSJ and 80.46% for Brown.

3 The 4-stage discriminative approach

Our first approach to SRL consists of four distinct stages: (1) predicate identification, (2) predicate labeling, (3) argument identification, and (4) argument labeling.
A discriminative machine learning algorithm is trained for each stage using the gold input and output values from the training set. The following sections describe the machine learning algorithm, the nature of its input/output, and the feature selection process for each stage. The performance of each stage is compared to a most frequent class baseline and analyzed separately for the two test sets and for nouns and verbs. In addition, we look at the performance given the input from the gold data vs. the input from the previous stage.

3.1 Predicate identification

The task of this stage is to determine whether a given word is a nominal or a verbal predicate using the dependency-parsed input. As potential predicates we only consider words that appear as a predicate in the training data or have a corresponding PropBank or NomBank XML file. The method constructs feature vectors for each occurrence of a target word in the training and test data. It assigns class labels to the target words in the training data depending on whether a target word is a predicate or not, and finally classifies the test data. We experimented with combinations of the following features for each word in a 2k+1 word window around the target: (1) POS(W): the part of speech of the word, (2) DEP(W, HEAD(W)): the syntactic dependency of the word, (3) LEMMA(W): the lemma of the word, (4) POS(HEAD(W)): the part of speech of the syntactic head. We empirically selected the combination that gives the highest precision and recall scores on the development data. The method achieved its highest score when we used features 1-3 for the target word and features 1-2 for the neighbors in a [-3, +3] word window. TiMBL (Daelemans et al., 2004) was used as the learning algorithm.

Table 1 (4-stage, All1) shows the results of our learning method on the WSJ and Brown test data. The noun and verb results are given separately (Verb1, Noun1). To distinguish the mistakes coming from parsing, we also give the results of our method given the gold parse (4-stage-gold). Our results are significantly above the most frequent class baseline, which gives 72.3% on WSJ and 65.3% on Brown.

3.2 Predicate labeling

The task of the second stage is deciding the correct frame for a word given that the word is a predicate. The input of the stage is 11-column data, where the columns contain the part of speech, lemma and syntactic dependency for each word. The first stage's decision is indicated by a string in the predicate column; the output of this stage is simply the replacement of that string with the chosen frame of the word, which may be word.X, where X is a valid sense number in PropBank or NomBank. The statistics of the training data show that by picking the most frequent frame, the system can pick the correct frame in a large percentage of cases, so we decided to use the most frequent frame baseline for this stage. If the word was never seen in training, the first frame of the word is picked as the default. In the test phase, the results are as follows: on the Brown data, assuming the output of stage 1 is gold, the score is 80.8%, noting that 11% of the predicates are not seen in the training phase. On WSJ, the score based on gold input is 88.3%, and only 5% of the predicates are not seen in the training phase. Table 1 gives the full results for stage 2 (4-stage, Verb2, Noun2, All2).
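This most-frequent-frame baseline is simple enough to sketch directly. A minimal Python illustration follows; the paper publishes no code, and the fallback to sense 01 for unseen lemmas is our approximation of its "first frame of the word" default:

```python
from collections import Counter, defaultdict

def train_frame_baseline(training_predicates):
    """training_predicates: iterable of (lemma, frame) pairs, e.g. ("buy", "buy.01")."""
    counts = defaultdict(Counter)
    for lemma, frame in training_predicates:
        counts[lemma][frame] += 1
    # Keep only the most frequent frame per lemma.
    return {lemma: c.most_common(1)[0][0] for lemma, c in counts.items()}

def label_predicate(model, lemma):
    # Unseen lemmas fall back to a default first sense (assumption: sense 01),
    # mirroring the paper's "first frame of the word" default.
    return model.get(lemma, lemma + ".01")
```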
3.3 Argument identification

The input data at this stage contains the syntactic dependencies, predicates and their frames. We look at the whole sentence for each predicate and decide whether each word should be an argument of that predicate or not. We mark the words we choose as arguments, indicating which predicate they belong to, and leave the labeling of the argument type to the next stage. Thus, for each predicate-word pair we have a yes/no decision to make.

As input to the learning algorithm we experimented with representations of the syntactic dependency chain between the predicate and the argument at various levels of granularity. We identified the syntactic dependency chain between the predicate and each potential argument using breadth-first search on the dependency tree. We tried to represent the chain using various subsets of the following elements: the argument lemma and part of speech, the predicate frame and part of speech, and the parts of speech and syntactic dependencies of the intermediate words linking the argument to the predicate. The syntactic dependencies leading from the argument to the predicate can be in the head-modifier or the modifier-head direction; we marked the direction associated with each dependency relation in the chain description. We also experimented with using fine-grained and coarse-grained parts of speech. The coarse-grained part of speech consists of the first two characters of the Penn Treebank part of speech given in the training set.

We used a simple learning algorithm: choose the answer that is correct for the majority of the instances with the same chain description in the training set. Not having enough detail in the chain description leaves out crucial information that would help with the decision, whereas having too much detail results in bad classifications due to sparse data. In the end, neither the argument lemma nor the predicate frame improved the performance. The best results were achieved with a chain description including the coarse parts of speech and syntactic dependencies of each word leading from the argument to the predicate. The results are summarized in Table 1 (4-stage, Verb3, Noun3, All3).
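This majority-vote classifier over chain descriptions can be sketched as follows. The chain encoding (a tuple of coarse POS tags and directed dependency labels) and the "no" default for unseen chains are our assumptions:

```python
from collections import Counter, defaultdict

def train_chain_classifier(instances):
    """instances: iterable of (chain_description, is_argument) pairs, where the
    chain description is a hashable encoding of the dependency chain, e.g. a
    tuple of coarse POS tags and directed dependency labels."""
    votes = defaultdict(Counter)
    for chain, is_argument in instances:
        votes[chain][is_argument] += 1
    # For each chain description, remember the majority yes/no answer.
    return {chain: c.most_common(1)[0][0] for chain, c in votes.items()}

def is_argument(model, chain, default=False):
    # Unseen chain descriptions default to "not an argument" (our assumption).
    return model.get(chain, default)
```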
3.4 Argument labeling

The task of this stage is choosing the correct argument tag for a modifier given that it is modifying a particular predicate. The input data format has additional columns indicating which words are arguments of which predicates. There are 54 possible values for a labeled argument. As a baseline we take the most frequent argument label in the training data (All1), which gives 37.8% on the WSJ test set and 33.8% on the Brown test set.

The features used to determine the correct label of an argument are either lexical or syntactic; in a few cases they are combined. The following list gives the set we used. Link is the type of the syntactic dependency. Direction is left or right, depending on the location of the head and the modifier in the sentence. LastLink is the type of the dependency at the end of the dependency chain, and firstLink is the type of the dependency at the beginning of the dependency chain.

Feature1: modifierStem + headStem
Feature2: modifierStem + coarsePosModifier + headStem + coarsePosHead + direction
Feature3: coarsePosModifier + headPos + firstLink + lastLink + direction
Feature4: modifierStem + coarsePosModifier

The training phase consists of building simple histograms based on the four features. Feature1 and Feature2 are sparser than the other two and are better features, as they include lexical information. The last two features are less sparse, covering most of the development data, i.e. their histograms give non-zero values in the development phase. In order to match all the instances in the development data and use the semantic information, a cascade of the features is implemented, similar to the one used by Gildea and Jurafsky (2002), although no weighting and a kind of back-off smoothing is used. First, a match is searched for in the histogram of the first feature; if none is found, the following histogram is searched. After a match, the most frequent argument label with that match is returned. Table 1 gives the performance (4-stage, Verb4, Noun4, All4).

Table 1: F1 scores for the two models. "Verb" in a column heading indicates verbal predicates, "Noun" indicates nominal predicates, "All" indicates all predicates. The numbers 1-4 in column headings indicate the 4 stages: (1) predicate identification, (2) predicate labeling, (3) argument identification, (4) argument labeling. The gold results assume perfect output from the previous stages. The highest number in each column is marked with boldface.

4 The generative approach

One problem with the four-stage approach is that the later stages provide no feedback to the earlier ones. Thus, a frame chosen because of its high prior probability will not get corrected when we fail to find appropriate arguments for it. A generative model, on the other hand, does not suffer from this problem: the probability of the whole assignment, including predicates, arguments, and their labels, is evaluated together, and the highest probability combination is chosen. Our generative model specifies the distribution of the following random variables: P is the lemma (stem+pos) of a candidate predicate. F is the frame chosen for the predicate (could be null). A_i is the argument label of word i with respect to a given predicate (could be null). W_i is the lemma (stem+pos) of word i. L_i is the syntactic dependency chain leading from word i to the given predicate (similar to Section 3.3).

4.1 The generative model

We consider each word in the sentence as a candidate predicate and use the joint distribution of the above variables to find the maximum probability F and A_i labels given P, W_i, and L_i. The graphical model in Figure 1 specifies the conditional independence assumptions we make. Equivalently, we take the following to be proportional to the joint probability of a particular assignment:

    Pr(F|P) * Π_i Pr(A_i|F) Pr(W_i|F, A_i) Pr(L_i|F, A_i)
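Scoring and search under this model can be sketched as follows. This is our own illustration: the accessor names (p_frame, p_label, p_word, p_chain) are hypothetical, and all estimates are assumed already smoothed to be non-zero:

```python
import math

def assignment_log_score(model, P, F, words, chains, labels):
    """Log of the quantity proportional to the joint probability:
    Pr(F|P) * prod_i Pr(A_i|F) Pr(W_i|F,A_i) Pr(L_i|F,A_i)."""
    score = math.log(model.p_frame(F, P))
    for w, l, a in zip(words, chains, labels):
        score += math.log(model.p_label(a, F))
        score += math.log(model.p_word(w, F, a))
        score += math.log(model.p_chain(l, F, a))
    return score

def best_assignment(model, P, words, chains, frames, arg_labels):
    """Because the A_i are conditionally independent given F, each word's label
    can be chosen independently once a frame is fixed; we then keep the frame
    whose best labeling scores highest."""
    best_score, best = float("-inf"), None
    for F in frames:
        labels = [max(arg_labels,
                      key=lambda a: model.p_label(a, F)
                                    * model.p_word(w, F, a)
                                    * model.p_chain(l, F, a))
                  for w, l in zip(words, chains)]
        s = assignment_log_score(model, P, F, words, chains, labels)
        if s > best_score:
            best_score, best = s, (F, labels)
    return best
```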
Figure 1: The graphical model depicting the conditional independence assumptions.

4.2 Parameter estimation

To estimate the parameters of the generative model we used the following methodology. For Pr(F|P) we use the maximum likelihood estimate from the training data. As a consequence, frames that were never observed in the training data have zero probability. One exception is lemmas that were never observed in the training data, for which each frame is considered equally likely.

For Pr(A_i|F) we also use the maximum likelihood estimate, normalized by sentence length: for a given argument label we find the expected number of words in a sentence carrying that label for frame F, and divide this expected number by the length of the given sentence to obtain Pr(A_i|F) for a single word. Any leftover probability is given to the null label. If the sentence length is shorter than the expected number of arguments, all probabilities are scaled down proportionally.

For the remaining two terms, Pr(L_i|F, A_i) and Pr(W_i|F, A_i), the maximum likelihood estimate is not effective because of data sparseness: the arguments in the million-word training data contain about 16,000 unique words and 25,000 unique dependency chains. To handle the sparseness problem we smoothed these two estimates using the part-of-speech argument distributions, i.e. Pr(L_i|POS, A_i) and Pr(W_i|POS, A_i), where POS represents the coarse part of speech of the predicate.
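One concrete reading of this back-off is linear interpolation between the frame-specific and POS-level estimates. The paper does not state how the two estimates are combined, so the interpolation weight below is an assumption:

```python
def smoothed_prob(key, frame, label, pos, frame_table, pos_table, lam=0.5):
    """Smooth a frame-specific estimate, e.g. Pr(L_i|F, A_i), with the coarser
    part-of-speech estimate Pr(L_i|POS, A_i), where POS is the coarse part of
    speech of the predicate. Linear interpolation with weight `lam` is our
    assumption; the paper only says the estimates were smoothed."""
    specific = frame_table.get((key, frame, label), 0.0)
    backoff = pos_table.get((key, pos, label), 0.0)
    return lam * specific + (1.0 - lam) * backoff
```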
5 Results and Analysis

Table 1 gives the F1 scores for the two models (4-stage and generative), presented separately for noun and verb predicates and the four stages of predicate identification/labeling and argument identification/labeling. In order to isolate the performance of each stage we also give their scores with gold input. The rest of this section analyzes these results and suggests possible improvements.

A hybrid algorithm: A comparison of the two algorithms shows that the 4-stage approach is superior in predicate and verbal-argument identification, and the generative algorithm is superior in the labeling of predicates and arguments and in nominal-argument identification. This suggests a hybrid algorithm in which we restrict the generative model to take the answers for the better stages from the 4-stage algorithm (Noun1, Verb1, Verb3) as given. Tables 1 and 2 present the results for the hybrid algorithm compared to the 4-stage and generative models.

Parsing performance: In order to see the effect of syntactic parsing performance, we ran the hybrid algorithm starting with the gold parse. The labeled semantic score went up to 78.84 for WSJ and 67.20 for Brown, showing that better parsing can add about 4-6% to the overall performance.

Syntactic vs. lexical features: Our algorithms use two broad classes of features: information from the dependency parse provides syntactic evidence, and the word pairs themselves provide semantic evidence for a possible relation. To identify their relative contributions, we experimented with two modifications of the generative algorithm: gen-l does not use the Pr(W_i|F, A_i) term and gen-w does not use the Pr(L_i|F, A_i) term. gen-l, using only syntactic information and the predicate, gets a labeled semantic score of 70.97 for WSJ and 58.83 for Brown, a relatively small decrease. In contrast, gen-w, using only lexical information, gets 43.06 for WSJ and 33.17 for Brown, almost a 40% decrease in performance. On the other hand, we find that the lexical features are essential for certain tasks. In labeling the arguments of nominal predicates, finding an exact match for the lexical pair guarantees 90% accuracy; if there is no exact match, the 4-stage algorithm falls back on a syntactic match, which gives only 75% accuracy.

Table 2: Semantic scores for the 4-stage, generative, and hybrid algorithms.

Data/algorithm     Unlabeled  Labeled
WSJ 4-stage          81.15     69.44
WSJ generative       81.01     73.66
WSJ hybrid           82.94     74.74
Brown 4-stage        76.91     58.76
Brown generative     73.76     59.05
Brown hybrid         77.22     60.80

Future work: The hybrid algorithm shows the strengths and weaknesses of our two approaches: the generative algorithm allows feedback from the later stages to the earlier stages, and the 4-stage machine learning approach allows the use of better features. One way to improve the system could be adding feedback to the 4-stage algorithm (later stages can veto input coming from previous ones), or adding more features to the generative model (e.g., information about neighbor words when predicting F). More importantly, there is no feedback between the syntactic parser and the semantic role labeler in our systems; treating both problems under the same framework may lead to better results. Another property of both models is the independence of the argument label assignments from each other. Even though we try to control the number of arguments of a particular type by adjusting the parameters, there are cases where we end up with no assignment for a mandatory argument, or with multiple assignments where only one is allowed. A stricter enforcement of valence constraints needs to be studied. The use of smoothing in the generative model was critical; it added about 20% to our final F1 score. This raises the question of finding more effective smoothing techniques. In particular, the jump from specific frames to coarse parts of speech is probably not optimal. There may be intermediate groups of noun and verb predicates which share similar semantic or syntactic argument distributions. Identifying and using such groups will be considered in future work.

References

Daelemans, W., J. Zavrel, K. van der Sloot, and A. van den Bosch. 2004. TiMBL: Tilburg Memory-Based Learner. Tilburg University.

Gildea, D. and D. Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245-288.

McDonald, R., K. Crammer, and F. Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of ACL 2005, Ann Arbor.

Surdeanu, Mihai, Richard Johansson, Adam Meyers, Lluís Màrquez, and Joakim Nivre. 2008. The CoNLL-2008 shared task on joint parsing of syntactic and semantic dependencies. In Proceedings of the 12th Conference on Computational Natural Language Learning (CoNLL-2008).
2,481,675
GRAFIX: Automated Rule-Based Post Editing System to Improve English-Persian SMT Output
This paper describes the latest developments in the PeEn-SMT system, specifically covering experiments with Grafix, an APE component developed for PeEn-SMT. The success of well-designed SMT systems has made this approach one of the most popular MT approaches. However, MT output is often seriously grammatically incorrect. This is more prevalent in SMT, since the approach is not language-specific. Our system works with Persian, a morphologically rich language, so post-editing the output is an important step in maintaining translation fluency. Grafix performs a range of corrections on sentences, from lexical transformation to complex syntactic rearrangement. It analyzes the target sentence (the SMT output in Persian) and attempts to correct it by applying a number of rules which enforce consistency with Persian grammar. We show that the proposed system is able to improve the quality of state-of-the-art English-Persian SMT systems, yielding promising results from both automatic and manual evaluation techniques.
[ 1272090 ]
GRAFIX: Automated Rule-Based Post Editing System to Improve English-Persian SMT Output

Mahsa Mohaghegh (m.mohaghegh@massey.ac.nz), School of Engineering and Advanced Technology, Massey University, Auckland, New Zealand
Abdolhossein Sarrafzadeh (hsarrafzadeh@unitec.ac.nz), Department of Computing, Unitec, Auckland, New Zealand
Mehdi Mohammadi (mehdi.mka@gmail.com), Department of Computer Engineering, Sheikh Bahaee University, Isfahan, Iran

Proceedings of COLING 2012: Posters, Mumbai, December 2012.
Keywords: Machine Translation, Post-editing of Machine Translation, Evaluation of Machine Translation.

Abstract

This paper describes the latest developments in the PeEn-SMT system, specifically covering experiments with Grafix, an APE component developed for PeEn-SMT. The success of well-designed SMT systems has made this approach one of the most popular MT approaches. However, MT output is often seriously grammatically incorrect. This is more prevalent in SMT, since the approach is not language-specific. Our system works with Persian, a morphologically rich language, so post-editing the output is an important step in maintaining translation fluency. Grafix performs a range of corrections on sentences, from lexical transformation to complex syntactic rearrangement. It analyzes the target sentence (the SMT output in Persian) and attempts to correct it by applying a number of rules which enforce consistency with Persian grammar. We show that the proposed system is able to improve the quality of state-of-the-art English-Persian SMT systems, yielding promising results from both automatic and manual evaluation techniques.

1 Introduction

Since most mistakes associated with machine translation are of a repetitive nature, the task of post-editing can be made automatic (Allen & Hogan, 2000). Furthermore, the process of automatic post-editing (APE) is very similar in nature to a machine translation process (Simard, Goutte, & Isabelle, 2007). Because of this, certain MT systems can be used to model the APE process.

The advantages and disadvantages of the RBMT and SMT approaches may be summarised as follows. RBMT is strong in syntax, morphology, structural semantics, and lexical reliability, but weak in lexical semantics and lexical adaptivity. SMT, while weak in syntax, morphology, and structural semantics, is superior to RBMT in lexical semantics and adaptability, although the advantage of adaptability to other language pairs is only valuable when the system is to be used with a wider range of languages.

The Grafix APE system's main algorithm follows a transfer-based approach. Transfer-based MT is among the most commonly used approaches for MT. This method involves capturing the meaning of a source sentence using intermediate representations, and from it generating a target output (Mohamed, 2000). The Grafix system developed by the authors attempts to correct some frequently occurring grammatical SMT errors in English-to-Persian translations.

2 Related Work

Simard et al. (2007) and Lagarda, Alabau, Casacuberta, Silva, and Diaz-de-Liano (2009) present APE systems that are added to commercial RBMT systems. Their APE components utilise a phrase-based SMT system using Moses as a decoder.
In his recent work, Pilevar (2011) demonstrates a statistical post-editing (SPE) module used to improve RBMT output for the English-Persian language pair, in order to improve the translation of movie subtitles. The results show that the SPE module can improve the RBMT system's output when used in a new domain. However, they found that the SMT system alone yields a better result than the combination of RBMT + SPE. To our knowledge this is the only post-editing system reported for the English-Persian language pair, and it did not succeed in improving the output of the main system.

Marecek, Rosa, and Bojar (2011) report on experimental work in correcting the output of an English-Czech MT system by performing several rule-based grammatical corrections on sentences parsed into dependency trees. Their baseline SMT system relies on Moses, a phrase-based translation system. In their post-processing system, DEPFIX, they used a two-step translation setup in which the English source is first translated into simplified Czech, and the simplified Czech is then monotonically translated into fully inflected Czech. Both steps are simple phrase-based models. Rosa, Marecek, and Dušek (2012) enriched the rule set of DEPFIX and used a modified version of MSTParser. Their results show that both modifications led to better performance of DEPFIX 2012; however, they note that since the effect of DEPFIX on the output in terms of BLEU score is not significant, the results are not as reliable as those obtained through manual evaluation.

3 Description of the System

Our approach to the system architecture differs from what is commonly used in most other systems, in that the APE does not use an SMT system to automatically post-edit the output of an MT system, as described, for example, in Simard et al. (2007) and Lagarda et al. (2009). In this study, we couple the PeEn-SMT system we previously developed (Mohaghegh, Sarrafzadeh, & Moir, 2011) with an RBMT-based APE. Since post-editing an MT system's output usually seeks to improve grammatical structure in order to render sentences and phrases with greater fluency, the linguistic knowledge of RBMT can be utilised well here.

3.1 The Underlying SMT System

Most recent research in statistical machine translation has been targeted at modelling translation based on phrases in the source language and matching them with their statistically determined equivalents in the target language ("phrase-based" translation) (Koehn, Och, & Marcu, 2003; Marcu & Wong, 2002; Och & Ney, 2004; Och, Tillmann, & Ney, 1999). After conducting numerous experiments with Moses, we decided to experiment with some modifications of the Joshua 4.0 toolkit, to compare the two and see if a better score could be achieved. To the best of our knowledge, this is the first time a hierarchical SMT system has been used for the Persian-English language pair. One motivation is that since Persian is a morphologically rich language, word disordering is a common issue. Hierarchical SMT takes syntax into account to some extent, with phrases being used to learn word reordering. The word order differences between Persian and English are better handled with a hierarchical phrase-based system than with a standard phrase-based approach.
Hierarchical phrase-based translation (Chiang, 2005) expands on phrase-based translation by allowing phrases with gaps, modelled as synchronous context-free grammars (SCFGs). Joshua is a well-known open source machine translation toolkit based on the hierarchical approach (Li, Callison-Burch, Khudanpur, & Thornton, 2009). In the latest version of Joshua (version 4.0), the main changes include the implementation of Thrax, which enables extended extraction of Hiero grammars, and a modified hypothesis exploration method (Ganitkevitch, Cao, Weese, Post, & Callison-Burch, 2012).

3.2 The Proposed APE Model

The proposed rule-based APE module consists of three levels of transformation. As shown in Figure 1, these three levels are lexical transformers, shallow transformers and deep transformers. First, the OOVRemover and Transliterator are run as lexical transformers using a bilingual dictionary; next, shallow transformers are run based on POS tag patterns; finally, deep transformation is applied, in which the rules exploit the dependency tree structure of sentences.

Lexical transformation: The first level benefits from the outcome of two components. The OOV (out-of-vocabulary) remover is a simple substitution rule that replaces an English word with its correct Persian translation. However, there are instances, such as named entities, where the OOV remover cannot find equivalent Persian translations for English words appearing as OOV in the output. In this case, a transliterator is used to replace English words by their equivalents in Persian script. The transliterator component uses a training data set containing over 4,600 of the most frequently used Persian words and named entities written in English letters, together with their equivalents in Persian script.

Shallow transformation: The second stage of the system involves a shallow transfer module. POS-tagging the input text is a prerequisite for both the shallow and deep transformation levels. The MLE POS-tagger is used in this stage, trained on the Persian Dependency Treebank (http://dadegan.ir/en) data. Shallow transformers are developed based on POS patterns identified as incorrect.

Deep transformation: In the third level, the input is parsed by a dependency parser. Once the text is tagged, some preparation is performed to parse the input, based on the parser's input format (McDonald, Pereira, Ribarov, & Hajic, 2005). The Persian Dependency Treebank is also used in the parser training process. We used MSTParser, an implementation of dependency parsing using the maximum spanning tree algorithm (Kübler, McDonald, & Nivre, 2009). The rules at this level examine the sentence's dependency tree in order to enforce syntactic and grammatical constraints.
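Putting the three levels together, the overall flow can be sketched as follows. Function names and signatures are illustrative; the paper describes the pipeline only at the level of Figure 1:

```python
def grafix_post_edit(tokens, oov_remover, transliterator, pos_tagger,
                     dep_parser, shallow_rules, deep_rules):
    """Three-level flow of the proposed APE module (cf. Figure 1).
    Each argument is a callable standing in for the corresponding component."""
    # Level 1: lexical transformation of residual English (OOV) words.
    tokens = oov_remover(tokens)       # bilingual dictionary lookup
    tokens = transliterator(tokens)    # fallback for names and unseen terms
    # Level 2: shallow transformation over POS-tag patterns.
    tagged = pos_tagger(tokens)
    for rule in shallow_rules:
        tagged = rule(tagged)
    # Level 3: deep transformation over the dependency parse.
    tree = dep_parser(tagged)
    for rule in deep_rules:
        tree = rule(tree)
    return tree
```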
3.3 Training Data Source

In a sentence dependency tree, words and relations are graphed, with each word either modifying or being modified by another word; the root of each tree is the only word which does not modify any other word. We used the Persian Dependency Treebank as our main source of training data for both tagging and data-driven parsing. The data format is based on the CoNLL Shared Task on Dependency Parsing (Buchholz & Marsi, 2006). The sentences are manually annotated; the corpus contains about 12,500 sentences and 189,000 tokens.

3.4 Pre-Processing and Tagging

The pre-processing of input Persian sentences consists of tokenizing the sentences using our implemented tokenizer. We chose the maximum likelihood estimation (MLE) approach as the POS-tagging component of our APE, due to its ease of implementation and its consistency in yielding promising results for tagging Persian (Raja et al., 2007).

3.5 Parsing

In dependency parsing, words are linked to their arguments by dependency representations (Hudson, 1984). These representations have been in use for many years. In Figure 2, the sentence, shown in tree form, is a dependency tree: each word depends on a "parent" word or on a root symbol.

3.6 Rule-based Transformers

The translation rules were gathered manually by investigating a broad range of incorrect translations. By considering the dependency parser output for these sentences, and determining frequent incorrect patterns among them, we defined the most common incorrect patterns under four rules in the shallow transformers and six in the deep transformers. The following sections cover some of them at each transfer level.

3.6.1 Shallow Transformers

IncompleteDependentTransformer: In Persian, as in English, dependent clauses are usually connected by relative pronouns such as «كه» (English "that"). The rule below identifies a missing verb in a dependent clause and corrects it by adding a verb; currently, in most instances the verb «است» (English "is") is suggested. In the notation below, * denotes any number of POS tags, and ^ denotes "except":

    If the POS sequence matches [* SUBR *^V PUNC], modify it to [* SUBR V(است) PUNC]

IncompleteEndedPREMTransformer: Pre-modifiers (denoted PREM) are a class of noun modifiers that precede nouns and are in complementary distribution with other members of the class. A POS sequence in which a pre-modifier precedes a punctuation mark (PUNC) is deemed incorrect. Since there is no logical translation for inputs with this pattern, these sequences are removed from the sentence altogether:

    If the POS sequence matches [*a N PREP PREM PUNC *b], modify it to [*a *b]
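A minimal Python sketch of how such a shallow POS-pattern rule can be applied, using the IncompleteDependentTransformer as the example; the token layout and the restriction to the last SUBR of the sentence are our simplifications:

```python
def incomplete_dependent_transformer(tokens):
    """Apply the IncompleteDependentTransformer pattern: if a subordinating
    conjunction (SUBR) opens a clause that reaches the sentence-final
    punctuation without any verb, insert the copula 'است' ("is") before the
    punctuation. tokens is a list of (word, POS) pairs; the sentence is
    assumed to end in a PUNC token, and only the last SUBR is checked."""
    pos = [p for _, p in tokens]
    if not pos or pos[-1] != "PUNC" or "SUBR" not in pos:
        return tokens
    start = len(pos) - 1 - pos[::-1].index("SUBR")   # index of the last SUBR
    if any(p == "V" for p in pos[start + 1:-1]):
        return tokens                                # clause already has a verb
    return tokens[:-1] + [("است", "V")] + tokens[-1:]
```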
3.6.2 Deep Transformers

NoSubjectSentenceTransformer: SMT output occasionally contains sentences with a third person verb, no definite subject, and an object labelled OBJ in the parse tree and tagged POSTP (postposition) in the POS sequence. Compared with known reference sentences, what was parsed as the object in such sentences was actually the subject. This transformer revises the sentence by removing the postposition «را», the indicator of a direct object; removing it changes the sentence into one with a subject.

VerbArrangementTransformer: Persian has a preferred word order, SOV (subject-object-verb), followed by SVO. One frequently violated case is sentences in which a main verb serving as the root does not occur immediately before the period punctuation. For a verb tagged as root, reordering is performed by moving the root verb and its NVE dependants (in the case of compound verbs) to the end of the sentence, immediately before the period.

MissingVerbTransformer: This transformer identifies any subject whose referring verb precedes it as a subject incorrectly linked to a verb, since such a sentence does not follow the standard SOV structure. In this case, the last word of the sentence is taken as a candidate for finding the non-verbal element in the verb valency lexicon (Rasooli, Moloodi, Kouhestani, & Minaei-Bidgoli, 2011). If such a verb is found, it is suggested to fill the place of the missing verb, and its tense is modified to match the subject of the sentence.

MozafOfAlefEndedTokenTransformer: In Persian, certain nouns or pronouns following a head noun signify relationships with the head noun, such as possession; these are known as Ezafe dependents. The relation is indicated by the vowel sound /e/ pronounced immediately after the head noun. If the head word ends in «ا» /a/, the character «ی» must be added to the end of that word; this character represents the /e/ vowel and is written to ease pronunciation. This transformer recognizes Ezafe dependents which require the «ی» character and adds it properly.

4 Experiments and Results

The SMT system evaluated in this paper is based on Joshua 4.0 with default settings. The parallel corpus used for the training set was based on the NPEC corpus tested by Mohaghegh and Sarrafzadeh (2012), but we built a modified version of almost 85,000 sentence pairs from which the subtitle portion was removed. The language model was extracted from the IRNA website (http://www.irna.ir/ENIndex.htm). The details of the components of the baseline system prior to alignment are shown in Table 1.

Table 1: Baseline system components.

                  English                              Persian
Training set      83,042 sentences / 1,322,470 words   82,496 sentences / 1,399,759 words
Tuning set        1,578 sentences / 40,044 words       1,578 sentences / 41,287 words
Language model    5,852,532 sentences / 66,331,086 words

4.1 Test Data Set

We used eight test sets based on text extracted from certain bilingual websites, as shown in Table 2. Test sentences were selected randomly, covering different domains, regardless of whether they had the potential to be covered by any post-editing rules. We performed translation in the English-Persian direction. The Persian side of the test sets was used as the translation reference when using scoring metrics to evaluate the output quality of both the baseline system and the final post-APE output.

Table 2: Statistics of the eight test sets used in automatic and manual evaluation.

Test set             1     2     3     4     5     6     7     8   Total
English words      163   218   371   362   101   354   555   259    2383
English characters 878  1381  1941  1922   589  1887  2902  1325   12825
Persian words      158   222   403   337   115   386   653   297    2571
Persian characters 551   955  1663  1230   430  1717  2551  1063   10160

4.2 Automatic Evaluation

The translation output before and after the APE was scored with BLEU; the results are shown in Table 3. The results generally show increases in the BLEU metric, as also shown in Figure 3. The greatest increase in BLEU score due to the APE was achieved on test set #3, with an increase of about 0.15 BLEU. However, on certain test sets the scoring metrics report a decrease in output quality, the worst BLEU score being at a difference of -0.0151. (Figure 3: Difference in BLEU score after applying the APE on the eight test sets.)

Table 3: Scores of the APE based on SMT Joshua 4.0 (BLEU difference per test set).

Test set       1        2        3        4        5       6       7       8
Difference   0.0247  -0.0045  0.1474  -0.0151   0.0791  0.0041  0.0006  0.0041

We propose that the weakened results are mainly due to the lack of training data for the Transliterator module, whereby some proper names and terms are scripted incorrectly in Persian. Since we use the output of the SMT system, the quality of the statistical translation (in terms of BLEU score) affects the APE module directly. Test set #4 yielded poor quality because the parallel corpus contained much less data in the religious genre. Furthermore, where there were English words in the SMT output that the OOVRemover was unable to correct, the Transliterator generated a Persian script which completely changed the meaning of the original sentence.
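The before/after scoring in this section can be reproduced with any BLEU implementation; below is a sketch using sacrebleu, which is our tool choice — the paper reports BLEU but does not name its scorer, and sacrebleu's 0-100 scale may differ from the scale of the differences quoted above:

```python
import sacrebleu

def bleu_before_after(smt_output, ape_output, references):
    """Score the SMT output before and after the APE step against the Persian
    reference side of a test set. Each argument is a list of sentence strings."""
    before = sacrebleu.corpus_bleu(smt_output, [references]).score
    after = sacrebleu.corpus_bleu(ape_output, [references]).score
    return before, after, after - before

# Example usage for one test set:
# before, after, delta = bleu_before_after(smt_lines, ape_lines, ref_lines)
```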
Marecek et al. (2011) show that grammatical correctness cannot simply be inferred from BLEU metrics alone. Because of this, we also evaluated the proposed model manually.

4.3 Manual Evaluation

We used the same test sets as in the automatic evaluation, containing 153 sentences; the sentences were translated by the SMT system and post-edited by the proposed APE system. We assigned the APE output to two separate annotators, who ranked each sentence according to the following criteria:

No Change: there is no difference between the APE output and the SMT output.
Improved: there are changes improving fluency.
Weakened: there are changes decreasing fluency.

The results of the manual evaluation are shown in Table 4. Both annotators completed the evaluation separately, but had very similar judgments of the APE system's output. The results show an improvement of the baseline SMT output quality in 29.4% of sentences, while the rules developed in the APE system were not applicable to more than half (63.4%) of the SMT output. On the other hand, the human evaluation also shows that in some cases the output is weakened after applying the APE. The two annotators' mutual scores (Table 5) show a sentence quality improvement of 25% due to the APE.

Table 4: Scores of the two human evaluators for 153 test sentences.

Annotator      Improved  No Change  Weakened
Annotator 1        47        95        11
Annotator 2        43        99        11

Table 5: Mutual scores for human evaluator I and evaluator II.

I / II       Improved  No Change  Weakened
Improved         39         5         5
No Change         3        90         2
Weakened          3         4         4

Conclusion

We present an uncommon APE model for English-Persian statistical machine translation, modeled on a rule-based approach at different levels of transformation. The automatic and manual evaluation results show encouraging improvement in translation quality after post-editing. While the improvement on some test sets is small, the APE improves the SMT output by up to 0.15 BLEU. Manual evaluation scores show that a rule-based APE system can yield even better results: from our results we see at least 25% improved output for a loss of at most 7%.

Figure 1: High-level diagram of the proposed rule-based APE system.

References

Allen, J., & Hogan, C. (2000). Toward the development of a post-editing module for raw machine translation output: A controlled language perspective.

Buchholz, S., & Marsi, E. (2006). CoNLL-X shared task on multilingual dependency parsing.

Chiang, D. (2005). A hierarchical phrase-based model for statistical machine translation.

Ganitkevitch, J., Cao, Y., Weese, J., Post, M., & Callison-Burch, C. (2012). Joshua 4.0: Packing, PRO, and paraphrases.
Hudson, R. A. (1984). Word Grammar. Blackwell, Oxford.

Koehn, P., Och, F., & Marcu, D. (2003). Statistical phrase-based translation.

Kübler, S., McDonald, R., & Nivre, J. (2009). Dependency parsing. Synthesis Lectures on Human Language Technologies, 1(1), 1-127.

Lagarda, A. L., Alabau, V., Casacuberta, F., Silva, R., & Diaz-de-Liano, E. (2009). Statistical post-editing of a rule-based machine translation system.

Li, Z., Callison-Burch, C., Khudanpur, S., & Thornton, W. (2009). Decoding in Joshua. The Prague Bulletin of Mathematical Linguistics, 91, 47-56.

Marcu, D., & Wong, W. (2002). A phrase-based, joint probability model for statistical machine translation.

Marecek, D., Rosa, R., & Bojar, O. (2011). Two-step translation with grammatical post-processing.

McDonald, R., Pereira, F., Ribarov, K., & Hajic, J. (2005). Non-projective dependency parsing using spanning tree algorithms.

Mohaghegh, M., & Sarrafzadeh, A. (2012). A hierarchical phrase-based model for English-Persian statistical machine translation.

Mohaghegh, M., Sarrafzadeh, A., & Moir, T. (2011). Improving Persian-English statistical machine translation: Experiments in domain adaptation.

Mohamed, A. A. E. M. (2000). Machine translation of noun phrases from English to Arabic. Faculty of Engineering, Cairo University, Giza.

Och, F., & Ney, H. (2004). The alignment template approach to statistical machine translation. Computational Linguistics, 30(4), 417-449.

Och, F., Tillmann, C., & Ney, H. (1999). Improved alignment models for statistical machine translation.

Pilevar, A. H. (2011). Using statistical post-editing to improve the output of rule-based machine translation systems.

Raja, F., Amiri, H., Tasharofi, S., Sarmadi, M., Hojjat, H., & Oroumchian, F. (2007). Evaluation of part of speech tagging on Persian text. University of Wollongong in Dubai - Papers, 8.
Rasooli, M. S., Moloodi, A., Kouhestani, M., & Minaei-Bidgoli, B. (2011). A syntactic valency lexicon for Persian verbs: The first steps towards Persian dependency treebank.

Rosa, R., Marecek, D., & Dušek, O. (2012). DEPFIX: A system for automatic correction of Czech MT outputs.

Simard, M., Goutte, C., & Isabelle, P. (2007). Statistical phrase-based post-editing.
802,701
REES: A Large-Scale Relation and Event Extraction System
This paper reports on a large-scale, end-to-end relation and event extraction system. At present, the system extracts a total of 100 types of relations and events, which represents much wider coverage than is typical of extraction systems. The system consists of three specialized pattern-based tagging modules, a high-precision coreference resolution module, and a configurable template generation module. We report quantitative evaluation results, analyze the results in detail, and discuss future directions.
[ 5803393, 10667460, 725590, 2922593 ]
REES: A Large-Scale Relation and Event Extraction System

Chinatsu Aone (aonec@verdi.sra.com) and Mila Ramos-Santacruz
SRA International, Inc., 4300 Fair Lakes Court, Fairfax, VA 22033

Abstract

This paper reports on a large-scale, end-to-end relation and event extraction system. At present, the system extracts a total of 100 types of relations and events, which represents much wider coverage than is typical of extraction systems. The system consists of three specialized pattern-based tagging modules, a high-precision coreference resolution module, and a configurable template generation module. We report quantitative evaluation results, analyze the results in detail, and discuss future directions.

Introduction

One major goal of information extraction (IE) technology is to help users quickly identify a variety of relations and events and their key players in a large volume of documents. In contrast with this goal, state-of-the-art information extraction systems, as shown in the various Message Understanding Conferences (MUCs), extract a small number of relations and events. For instance, the most recent MUC, MUC-7, called for the extraction of 3 relations (person-employer, maker-product, and organization-location) and 1 event (spacecraft launches). Our goal is to develop an IE system which scales up to extract as many types of relations and events as possible with a minimum amount of porting effort, combined with high accuracy. Currently, REES handles 100 types of relations and events, and it does so in a modular, configurable, and scalable manner. Below, Section 1 presents the ontologies of relations and events that we have developed. Section 2 describes REES' system architecture. Section 3 evaluates the system's performance and offers a qualitative analysis of system errors. Section 4 discusses future directions.

1 Relation and Event Ontologies

As the first step in building a large-scale relation and event extraction system, we developed ontologies of the relations and events to be extracted. These ontologies represent a wide variety of domains: political, financial, business, military, and life-related events and relations. "Relations" covers what in MUC-7 are called Template Elements (TEs) and Template Relations (TRs). There are 39 types of relations, e.g. Person-OtherRelative, Person-BirthPlace, Person-BirthDate (Table 1: Relation ontology; the TR relations are shown in italics in the table). While MUC TEs only dealt with singular entities, REES extracts both singular and plural entities (e.g., "five executives").

"Events" are extracted along with their event participants, e.g., "who did what to whom when and where?" For example, for a BUYING event, REES extracts the buyer, the artifact, the seller, and the time and location of the BUYING event. REES currently covers 61 types of events, listed in Table 2 (Event ontology).

Figures 1 and 2 show sample relation and event templates. Figure 1 shows a Person-Affiliation relation template for "Frank Ashley, a spokesman for Occidental Petroleum Corp.":

    <PERSON_AFFILIATION-AP8802230207-54> :=
      TYPE:   PERSON_AFFILIATION
      PERSON: [TE for "Frank Ashley"]
      ORG:    [TE for "Occidental Petroleum"]

    Figure 1: Example of a relation template.

Figure 2 shows an Attack_Target event template for the sentence "An Iraqi warplane attacked the frigate Stark with missiles May 17, 1987":

    <ATTACK_TARGET-AP8804160078-12> :=
      TYPE:     CONFLICT
      SUBTYPE:  ATTACK_TARGET
      ATTACKER: [TE for "an Iraqi warplane"]
      TARGET:   [TE for "the frigate Stark"]
      WEAPON:   [TE for "missiles"]
      TIME:     "May 17, 1987"
      PLACE:    [TE for "the gulf"]
      COMMENT:  "attacked"

    Figure 2: Example of an event template.

2 System Architecture

REES consists of three main components: a tagging component (cf. Section 2.1), a co-reference resolution module (cf. Section 2.2), and a template generation module (cf. Section 2.3). Figure 3 also illustrates that the user may run REES from a Graphical User Interface (GUI) called TemplateTool (cf. Section 2.4).

2.1 Tagging Modules

The tagging component consists of three modules, as shown in Figure 3: NameTagger, NPTagger and EventTagger. Each module relies on the same pattern-based extraction engine, but uses different sets of patterns.
The NameTagger recognizes names of people, organizations, places, and artifacts (currently only vehicles). Building upon the XML output of the NPTagger, the EventTagger recognizes events by applying its lexicon-driven, syntactically-based generic patterns. These patterns tag events in the presence of at least one of the arguments specified in the lexical entry for a predicate. Subsequent patterns try to find additional arguments as well as place and time adjunct information for the tagged event. As an example of the EventTagger's generic patterns, consider the simplified pattern below, which matches an event-denoting verb that requires a direct object of type weapon (e.g., "fire a gun"). In the pattern language, & denotes concatenation, AND is a Boolean operator, and $VP and $ARTIFACT are macro references for complex phrases:

    (& {AND $VP {ARG2_SYN=DO} {ARG2_SEM=WEAPON}}
       {AND $ARTIFACT {SUBTYPE=WEAPON}})

An important aspect of REES is its declarative, lexicon-driven approach. This approach requires a lexicon entry for each event-denoting word, which is generally a verb. The lexicon entry specifies the syntactic and semantic restrictions on the verb's arguments. For instance, the following lexicon entry is for the verb "attack". It indicates that the verb "attack" belongs to the CONFLICT ontology and to the ATTACK_TARGET type. The first argument of "attack" is semantically an organization, location, person, or artifact (ARG1_SEM), and syntactically a subject (ARG1_SYN). The second argument is semantically an organization, location, person or artifact, and syntactically a direct object. The third argument is semantically a weapon and syntactically a prepositional phrase introduced by the preposition "with":

    ATTACK {{
      {CATEGORY VERB}
      {ONTOLOGY CONFLICT}
      {TYPE ATTACK_TARGET}
      {ARG1_SEM {ORGANIZATION LOCATION PERSON ARTIFACT}}
      {ARG1_SYN {SUBJECT}}
      {ARG2_SEM {ORGANIZATION LOCATION PERSON ARTIFACT}}
      {ARG2_SYN {DO}}
      {ARG3_SEM {WEAPON}}
      {ARG3_SYN {WITH}}
    }}

About 50 generic event extraction patterns, supported by lexical information as shown above, allow extraction of events and their arguments in cases like: "An Iraqi warplane attacked the frigate Stark with missiles May 17, 1987." This generic, lexicon-driven event extraction approach makes REES easily portable, because new types of events can be extracted just by adding new verb entries to the lexicon; no new patterns are required. Moreover, this approach allows easy customization: a person with no knowledge of the pattern language can configure the system to extract new events. While the tagging component is similar to other pattern-based IE systems (e.g., Appelt et al. 1995; Aone et al. 1998; Yangarber and Grishman 1998), our EventTagger is more portable through its lexicon-driven approach.
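The core of this lexicon-driven check — tag an event when at least one argument satisfies the entry's restrictions — can be sketched as follows. The data layout is our own simplification of the entry format shown above:

```python
LEXICON = {
    # One entry per event-denoting verb, following the "attack" entry above.
    "attack": {
        "ontology": "CONFLICT",
        "type": "ATTACK_TARGET",
        "args": [
            ("SUBJECT", {"ORGANIZATION", "LOCATION", "PERSON", "ARTIFACT"}),
            ("DO",      {"ORGANIZATION", "LOCATION", "PERSON", "ARTIFACT"}),
            ("WITH",    {"WEAPON"}),
        ],
    },
}

def tag_event(verb_lemma, candidate_args):
    """candidate_args: {syntactic_slot: (phrase, semantic_type)}, e.g.
    {"SUBJECT": ("An Iraqi warplane", "ARTIFACT"),
     "DO": ("the frigate Stark", "ARTIFACT")}.
    An event is tagged as soon as at least one argument satisfies the
    lexicon's restrictions, as described above."""
    entry = LEXICON.get(verb_lemma)
    if entry is None:
        return None
    filled = {}
    for slot, allowed in entry["args"]:
        phrase, semtype = candidate_args.get(slot, ("", ""))
        if semtype in allowed:
            filled[slot] = phrase
    return {"type": entry["type"], "arguments": filled} if filled else None
```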
2.2 Co-reference Resolution

After the tagging phase, REES sends the XML output through a rule-based co-reference resolution module that resolves definite noun phrases of Organization, Person, and Location types, and singular person pronouns (he and she). Only "high-precision" rules are currently applied to selected types of anaphora; that is, we resolve only those cases of anaphora whose antecedents the module can identify with high confidence. For example, the pronoun rules look for antecedents only within 3 sentences, and the definite NP rules rely heavily on head noun matches. Our high-precision approach results from our observation that unless the module is very accurate (above 80% precision), the co-reference module can hurt the overall extraction results by over-merging templates.

2.3 Template Generation Module

A typical template generation module is a hard-coded post-processing module which has to be written for each type of template. By contrast, our template generation module is unique in that it uses declarative rules to generate and merge templates automatically, so as to achieve portability.

Declarative template generation: REES outputs the extracted information in the form of either MUC-style templates, as illustrated in Figures 1 and 2, or XML. A crucial part of a portable, scalable system is the ability to output different types of relations and events without changing the template generation code. REES maps the XML-tagged output of the co-reference module to templates using declarative template definitions, which specify the template label (e.g., ATTACK_TARGET), the XML attribute names (e.g., ARGUMENT1), the corresponding template slot names (e.g., ATTACKER), and the type restrictions on slot values (e.g., string).

Event merging: One of the challenges of event extraction is to recognize and merge event descriptions which refer to the same event. The template generation module uses a set of declarative, customizable rules to merge co-referring events into a single event. Often the rules reflect pragmatic knowledge of the world. For example, a rule for the DYING event type establishes that if two die events have the same subject, then they refer to the same event (i.e., a person cannot die more than once).
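A sketch of such declarative merging, using the DYING rule as the example; the rule table and template layout are our own simplification:

```python
def merge_events(events):
    """Merge co-referring event templates under declarative rules, e.g. two
    DYING events with the same subject are the same event. Each event is a
    dict with at least a "type" slot."""
    MERGE_KEYS = {"DYING": ("subject",)}   # event type -> slots that must match
    merged = []
    for ev in events:
        key_slots = MERGE_KEYS.get(ev["type"])
        target = next((m for m in merged
                       if m["type"] == ev["type"] and key_slots
                       and all(m.get(s) == ev.get(s) for s in key_slots)), None)
        if target:
            # Fill empty slots of the existing template from the new mention.
            for slot, value in ev.items():
                target.setdefault(slot, value)
        else:
            merged.append(dict(ev))
    return merged
```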
There was a significant drop between the training and blind sets in event extraction: 11 points. We believe that the main reason is that the total number of events in the training set is fairly low: 801 instances of 61 types of events (an average of 13 per event type), where 35 of the event types had fewer than 10 instances. In fact, 9 out of the 14 event types which scored lower than 40% F-Measure had fewer than 10 examples. In comparison, there were 34,000 instances of 39 types of relations in the training set.

The contribution of the co-reference module is illustrated in Table 5. Co-reference resolution consistently improves F-Measures on both the training and blind sets. Its impact is larger in relation than in event extraction. In the next two sections, we analyze both false positives and false negatives.

False Positives (or Precision Errors)

REES produced precision errors in the following cases:
• Most of the errors were due to overgeneration of templates. These are mostly cases of co-referring noun phrases that the system failed to resolve. For example: "Panama ... the nation ... this country ... his country". Rules for the co-reference module are still under development, and at present REES handles only limited types of plural noun phrase anaphora.
• Spurious events resulted from verbs in conditional constructions (e.g., "if ... then ...") or from ambiguous predicates. For instance, "appoint" as a POLITICAL event vs. a PERSONNEL CHANGE event.
• The subject of a verb was misidentified. This is particularly frequent in reduced relative clauses. "Kabul radio said the latest deaths brought to 38 the number of people killed in the three car bomb explosions." (Wrong subject: "the number of people" as the KILLER instead of the victim.)

False Negatives (or Recall Errors)

Below, we list the most frequent recall errors in the training set.
• Some event arguments are mentioned with event nouns instead of event verbs. The current system does not handle noun-based event extraction. "India's acquisition last month of the nuclear submarine from the Soviet Union..." (SELLER="Soviet Union" and TIME="last month" come with the noun-based event "acquisition.")
• The subject of the event is relatively far from the event-denoting verb. "Vladislav Listyev, 38, who brought television interview shows in the style of Phil Donahue or Larry King to Russian viewers and pioneered hard-hitting television journalism in the 1980s, was shot in the heart by unknown assailants and died immediately..." (The system missed the subject Vladislav Listyev for the attack event "shot".)
• Missed ORG_LOCATION relations for locations that are part of the organization's name. "Larnaca General Hospital" (Missed the ORG_LOCATION TR for this and Larnaca.)
• Pronouns "it" and "they," which carry little semantic information, are currently not resolved by the co-reference module. "It also has bought three late-1970s vintage Kilo class Soviet submarines and two West German HDW 209 subs" (Missed BUYER=India because of unresolved "it".)
• Verb arguments are a conjunction of noun phrases. The current system does not handle coordination of verb arguments. "Hezbollah killed 21 Israelis and 43 of Lahad's soldiers" (The system gets only the first object: 21 Israelis.)
• Ellipsis cases. The current system does not handle ellipsis. "The two were sentenced to five-year prison terms with hard labor by the state security court..." (Missed the PERSON_SENTENCED fill because of unresolved "the two".)

We asked a person who is not involved in the development of REES to review the event extraction output for the blind set. This person reported that:
• In 35% of the cases where the REES system completely missed an event, it was because the lexicon was missing the predicate. REES's event predicate lexicon is rather small at present (a total of 140 verbs for 61 event types) and is mostly based on the examples found in the training set.
• In 30% of the cases, the subject or object was elliptical. The system does not currently handle ellipsis.
• In 25% of the cases, syntactic/semantic argument structures were missing from existing lexical entries.

It is quite encouraging that simply adding additional predicates and predicate argument structures to the lexicon could significantly increase the blind set performance.

Future Directions

We believe that improving co-reference resolution and adding noun-based event extraction capability are critical to achieving our ultimate goal of at least 80% F-Measure for relations and 70% for events.

Co-reference Resolution

As discussed in Sections 3.1 and 3.2, accurate co-reference resolution is crucial to improving the accuracy of extraction, both in terms of recall and precision. In particular, we identified two types of high-payoff co-reference resolution:
• definite noun phrase resolution, especially plural noun phrases
• 3rd person neutral pronouns "it" and "they."

Noun-based Event Extraction

REES currently handles only verb-based events.
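For reference, the R, P, and F-Measure figures discussed here follow the usual slot-level definitions; a minimal sketch (with invented counts) is:

# Minimal sketch of precision/recall/F-measure over extracted slots,
# as implied by the R/P/F-M columns discussed above. Counts are invented.
def prf(correct: int, actual: int, possible: int):
    p = correct / actual if actual else 0.0       # precision
    r = correct / possible if possible else 0.0   # recall
    f = 2 * p * r / (p + r) if p + r else 0.0     # harmonic mean
    return p, r, f

p, r, f = prf(correct=74, actual=100, possible=100)
print(f"P={p:.2%} R={r:.2%} F={f:.2%}")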
Noun-based event extraction adds more complexity because:
• nouns are often used in a generic, non-referential manner (e.g., "We see a merger as being in the consumer's interest"), and
• when referential, nouns often refer to verb-based events, thus requiring noun-verb co-reference resolution ("An F-14 crashed shortly after takeoff ... The crash").

However, noun-based events are crucial because they often introduce additional key information, as the underlined phrases below indicate: "While Bush's meetings with prominent antiapartheid leaders such as Archbishop Desmond Tutu and Albertina Sisulu are important..." We plan to develop a generic set of patterns for noun-based event extraction to complement the set of generic verb-based extraction patterns.

Conclusions

In this paper, we reported on a fast, portable, large-scale event and relation extraction system, REES. To the best of our knowledge, this is the first attempt to develop an IE system which can extract such a wide range of relations and events with high accuracy. It performs particularly well on relation extraction, and it achieves 70% or higher F-Measure for 26 types of events already. In addition, the design of REES is highly portable for future addition of new relations and events.

Figures 1 and 2 show sample relation and event templates. Figure 1 shows a Person-Affiliation relation template for "Frank Ashley, a spokesman for Occidental Petroleum Corp.":

  <PERSON AFFILIATION-AP8802230207-54> :=
    TYPE:   PERSON AFFILIATION
    PERSON: [TE for "Frank Ashley"]
    ORG:    [TE for "Occidental Petroleum"]

Figure 1: Example of Relation Template

Figure 2 shows an Attack Target event template for the sentence "An Iraqi warplane attacked the frigate Stark with missiles May 17, 1987.":

  <ATTACK TARGET-AP8804160078-12> :=
    TYPE:     CONFLICT
    SUBTYPE:  ATTACK TARGET
    ATTACKER: [TE for "an Iraqi warplane"]
    TARGET:   [TE for "the frigate Stark"]
    WEAPON:   [TE for "missiles"]
    TIME:     "May 17, 1987"
    PLACE:    [TE for "the gulf"]
    COMMENT:  "attacked"

Figure 2: Example of Event Template

Table 2: Event Ontology

As we mentioned earlier, "relations" includes MUC-style TEs and TRs. The table below shows the system's recall, precision, and F-Measure scores for the training set (200 texts) and the blind set (208 texts) from about a dozen news sources. Each set contains at least 3 examples of each type of relations and events.

  Text Set  Task            Templates in keys   R    P    F-M
  Train     Rel.            9955                76   74   75.35
  Train     Events          2525                57   74   64.57
  Train     Rel. & Events   10707               74   74   73.95
  Blind     Rel.            8938                74   74   73.74
  Blind     Events          2020                42   75   53.75
  Blind     Rel. & Events   9526                69   74   71.39

Table 3: Evaluation Results

Table 4: Top-performing Event Types

Figure 4: TemplateTool

Table 5: Comparative results with and without co-reference rules
(SELLER="Soviet Union" and TIME="last month'" come with the noun- based event "acquisition.") • Pronouns "it" and "they," which carry little semantic information, are currently not resolved by the co-reference module. It also has bought three late-1970s vintage ICilo class Soviet submarines and two West German HDW 209 subs (Missed BUYER=India because of unresolved it.) • Verb arguments are a conjunction of noun phrases. The current system does not handle coordination of verb arguments. Hezbollah killed 21 lsraelis and 43 of Lahad's soldiers (The system gets only the first object: 21 Israelis. ) • Ellipsis cases. The current system does not handle ellipsis. The two were sentenced to five-year prison terms with hard labor by the state security court... (Missed PERSON_SENTENCED fill because of unresolved the two.) • AcknowledgementsThis project would have not been possible without the contributions of Arcel Castillo, Lauren Halverson, and Sandy Shinn. Our thanks also to Brandon Kennedy, who prepared the hand-tagged data. SRA: Description of the IE 2 System Used for MUC-7. Chinatsu Aone, Lauren Halverson, Tom Hampton, Mila Ramos-Santacruz, Proceedings of the 7thMessage Understanding Conference. the 7thMessage Understanding Conference7Aone, Chinatsu, Lauren Halverson, Tom Hampton, and Mila Ramos-Santacruz. 1998. "SRA: Description of the IE 2 System Used for MUC-7." In Proceedings of the 7thMessage Understanding Conference (MUC-7). SRI International FASTUS System: MUC-6 Test Results and Analysis. Douglas E Appelt, R Jerry, John Hobbs, David Bear, Megumi Israel, Andy Kameyama, David Kehler, Karen Martin, Mabry Myers, Tyson, Proceedings of the 6 th Message Understanding Conference. the 6 th Message Understanding Conference6Appelt, Douglas E., Jerry R Hobbs, John Bear, David Israel, Megumi Kameyama, Andy Kehler, David Martin, Karen Myers, and Mabry Tyson. 1995. "SRI International FASTUS System: MUC- 6 Test Results and Analysis." In Proceedings of the 6 th Message Understanding Conference (MUC-6). Text Chunking Using Transformation-Based Learning. Lance A Ramshaw, Mitchell P Marcus, Proceedings of the 3 rd ACL Workshop on Very Large Corpora (WVLC95). the 3 rd ACL Workshop on Very Large Corpora (WVLC95)Ramshaw, Lance A., and Mitchell P. Marcus. 1995. "Text Chunking Using Transformation-Based Learning". In Proceedings of the 3 rd ACL Workshop on Very Large Corpora (WVLC95). NYU: Description of the Proteus~PET System as Used for MUC-7 ST. Roman Yangarber, Ralph Grishman, Proceedings of the 6 th Message Understanding Conference. the 6 th Message Understanding Conference7Yangarber, Roman and Ralph Grishman. 1998. "NYU: Description of the Proteus~PET System as Used for MUC-7 ST." In Proceedings of the 6 th Message Understanding Conference (MUC-7).
226,262,310
On the weak link between importance and prunability of attention heads
Given the success of Transformer-based models, two directions of study have emerged: interpreting role of individual attention heads and down-sizing the models for efficiency. Our work straddles these two streams: We analyse the importance of basing pruning strategies on the interpreted role of the attention heads. We evaluate this on Transformer and BERT models on multiple NLP tasks. Firstly, we find that a large fraction of the attention heads can be randomly pruned with limited effect on accuracy. Secondly, for Transformers, we find no advantage in pruning attention heads identified to be important based on existing studies that relate importance to the location of a head. On the BERT model too we find no preference for top or bottom layers, though the latter are reported to have higher importance. However, strategies that avoid pruning middle layers and consecutive layers perform better. Finally, during fine-tuning the compensation for pruned attention heads is roughly equally distributed across the un-pruned heads. Our results thus suggest that interpretation of attention heads does not strongly inform pruning.
[]
On the weak link between importance and prunability of attention heads

Aakriti Budhraja, Madhura Pande (mpande@cse.iitm.ac.in), Preksha Nema (preksha@cse.iitm.ac.in), Pratyush Kumar (pratyush@cse.iitm.ac.in), Mitesh M. Khapra (miteshk@cse.iitm.ac.in)
Robert Bosch Centre for Data Science and Artificial Intelligence (RBC-DSAI), IIT Madras, India

Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, November 16-20, 2020

Introduction

The acclaimed success of Transformer-based models across NLP tasks has been followed by two important directions of research. In the first direction, interpretability studies aim to understand how these models work. Given that multi-headed attention is an important feature of these models, researchers have focused on attention heads as the units of interpretation. These studies comment on the role of each attention head and the relation between a head's position and its significance (Clark et al., 2019; Michel et al., 2019; Voita et al., 2019b,a; Liu et al., 2019; Belinkov et al., 2017). These studies show that certain heads are more important based on (i) their position in the network (top, middle, bottom), or (ii) the component to which they belong (encoder self-attention, decoder self-attention, encoder-decoder cross attention), or (iii) the functional role they play (e.g., syntactic/semantic). In the other major direction, these large Transformer-based models have been down-sized to be more time and space efficient.
Different methods for down-sizing have been studied, such as pruning (McCarley, 2019; Gordon et al., 2020; Sajjad et al., 2020), distillation (Sanh et al., 2019; Liu et al., 2019; Jiao et al., 2019), weight quantization (Zafrir et al., 2019; Shen et al., 2019), and weight factorization and parameter sharing (Lan et al., 2019). Pruning techniques have been particularly successful in reinforcing the folk-lore that these models are highly over-parameterized. These pruning methods prune parameters based on magnitude (Gordon et al., 2020) or importance (McCarley, 2019), or prune layer-wise (Sajjad et al., 2020).

In this paper, we straddle these two directions of work by asking the following question: can we randomly prune heads, thus completely ignoring any notion of importance of heads? To answer this, we systematically study the effect of randomly pruning specific subsets of attention heads on the accuracy on different tasks. Across experiments, we modify the random sampling to vary the percentage of heads pruned and their location in the network (components and layers). We evaluate these experiments both on the Transformer and BERT models.

Our results show that a large fraction of attention heads can be pruned randomly: 75% of the attention heads of the Transformer can be randomly pruned with a drop of less than 1 BLEU point on NMT tasks. Similarly, half of the attention heads of BERT can be randomly pruned with an average drop in accuracy of less than 1% across a chosen set of GLUE tasks [1]. Significantly, for Transformers, we find no evidence for pruning methods preferring specific attention heads based on their location, even when the locations are chosen to match attention heads identified to be more important in existing studies. Similarly, on the BERT model, pruning top and bottom layers does not show a significant difference, even though existing studies attribute higher importance to the latter (Sajjad et al., 2020). However, we identify a preference to avoid pruning the middle layers and consecutive layers. Lastly, we check if, during fine-tuning, certain heads compensate more for the pruned heads. If so, such heads would perhaps be more important. However, we find no such evidence. In particular, during fine-tuning, the un-pruned heads change similarly across most pruning configurations. Overall, our experiments suggest that interpretation of attention heads does not strongly inform pruning.

The rest of the paper is organized as follows: Section 2 describes the models and the datasets used for this work, followed by Section 3, which provides details of the experimental process and reports results on both Transformer and BERT models. We summarize our work in Section 4.

Models and Datasets

Multi-headed Self Attention

In each multi-headed attention layer we have multiple attention heads which transform the representation of inputs of a given sequence of tokens. Given the $d_v$-dimensional representation of $T$ tokens as $X \in \mathbb{R}^{T \times d_v}$, the output of multi-headed self attention with $N$ attention heads is given by

$$\mathrm{Concat}_{i=1}^{N}\left[\mathrm{softmax}\!\left(\frac{(XW_i^q)(XW_i^k)^{T}}{\sqrt{d_k}}\right) XW_i^v\right] \quad (1)$$

where $W_i^k, W_i^q, W_i^v \in \mathbb{R}^{d_v \times d_k}$ are the parameters of the $i$-th attention head.

Transformers

We use the Transformer-Base model (Vaswani et al., 2017), which has 6 layers each in the three components: encoder self-attention (ES), encoder-decoder cross-attention (ED), and decoder self-attention (DS). In each layer of each of the three components, we have 8 attention heads, totalling 3 × 6 × 8 = 144 attention heads.
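The following NumPy sketch implements Equation (1) directly; the shapes follow the text, while the random parameter initialisation is purely illustrative and not the trained model's weights.

# A minimal NumPy sketch of Equation (1): multi-headed self-attention.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, Wq, Wk, Wv):
    """X: (T, d_v); Wq/Wk/Wv: lists of N matrices of shape (d_v, d_k)."""
    d_k = Wq[0].shape[1]
    heads = []
    for Wq_i, Wk_i, Wv_i in zip(Wq, Wk, Wv):
        scores = (X @ Wq_i) @ (X @ Wk_i).T / np.sqrt(d_k)  # (T, T)
        heads.append(softmax(scores) @ (X @ Wv_i))          # (T, d_k)
    return np.concatenate(heads, axis=-1)                   # (T, N*d_k)

rng = np.random.default_rng(0)
T, d_v, d_k, N = 5, 16, 4, 4
X = rng.normal(size=(T, d_v))
params = [[rng.normal(size=(d_v, d_k)) for _ in range(N)] for _ in range(3)]
print(multi_head_self_attention(X, *params).shape)  # (5, 16)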
We train the models with 2.5 million sentence pairs each from the WMT'14 English-Russian (EN-RU) and English-German (EN-DE) datasets. We report BLEU scores on WMT's newstest2014. We use the Adam optimizer (Kingma and Ba, 2014) with parameters $\beta_1 = 0.9$, $\beta_2 = 0.997$, and $\epsilon = 10^{-9}$. We vary the learning rate according to the formula described in Vaswani et al. (2017) with warmup steps = 16k. We use large batch sizes of 32k and 25k for EN-RU and EN-DE, respectively, as it has been established that large batch sizes are inherent to the performance of Transformers (Popel and Bojar, 2018; Voita et al., 2019b). We achieve effectively large batch sizes using the technique of gradient accumulation on single NVIDIA V100 and 1080Ti GPUs.

BERT

In all experiments involving BERT, we use the BERT Base-uncased model (Devlin et al., 2018). It has 12 layers, and each layer contains 12 attention heads, summing to 144 attention heads. We fine-tune and evaluate the pre-trained model [2] on the sentence entailment task MNLI-M, the question similarity task QQP, the question-answering task QNLI, and the movie review task SST-2 from the GLUE Benchmark (Wang et al., 2018). We report accuracies on the official development sets of the considered GLUE tasks. For each of the four GLUE tasks, namely MNLI-M, QQP, QNLI and SST-2, we tried combinations of batch size and learning rate from {8, 16, 32, 64, 128} and {2, 3, 4, 5} × 10^-5 respectively, and selected the best performing configuration. The exact hyperparameters used for each of the tasks have been made available with the released code [3]. Each BERT experiment was run on a single Cloud TPU (v2-8).

Experiments

Experimental Process

In all the experiments, we perform random pruning, where a subset of attention heads chosen by random sampling are zeroed out. Formally, each attention head is assigned a weight $\xi$ which is 0 if the head is pruned and 1 otherwise. Then, the output of an attention layer is given by

$$\mathrm{Concat}_{i=1}^{N}\left[\xi_i\, \mathrm{softmax}\!\left(\frac{(XW_i^q)(XW_i^k)^{T}}{\sqrt{d_k}}\right) XW_i^v\right] \quad (2)$$

After pruning, we fine-tune the Transformer model for 30 epochs and the BERT model for 10 epochs. Since the values $\xi$ are randomly sampled, in each experiment we report the average of three different samplings of $\xi$. The standard deviations are 0.668% and 0.778% of the reported average values for Transformer and BERT, respectively.

Experimental Results on Transformers

Varying Pruning Percentage. We randomly prune attention heads across all components and layers, varying the percentage of pruning from 25% to 87% (Table 1). We observed that in the case of extreme pruning, i.e., keeping just one head in each layer of each of the three components (which corresponds to a pruning percentage of 87%), the drop in BLEU was 1.62 (EN-RU) and 1.03 (EN-DE), as can be seen from Table 1. Across both EN-RU and EN-DE tasks, 60% of the attention heads can be pruned with a maximum drop in BLEU score of only 0.15. As can be observed from Figure 1, the drop is sharper as we increase the pruning percentage beyond 60%.

Pruning based on Layer Numbers. Voita et al. (2019b) identify that attention heads in specific layers of the Transformer (lower layers of the self-attention components, i.e., Encoder-Self (ES) and Decoder-Self (DS), and higher layers of the encoder-decoder cross attention (ED)) are more important. We evaluate the correspondence of this importance to pruning. We choose 5 pruning percentages from 25% to 75% and, in each case, two pruning configurations: one where the heads considered important are retained, and the other where the important heads are pruned. The configurations and the corresponding BLEU scores on the EN-RU dataset are shown in Table 2, where each configuration is specified as a string.
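A small sketch of the sampling protocol behind Equation (2): draw a binary gate ξ per head at a target pruning percentage, and repeat for three samplings. This is illustrative code, not the authors' released implementation.

# Sketch of the random pruning protocol described above.
import numpy as np

def sample_head_gates(n_heads: int, prune_pct: float, rng) -> np.ndarray:
    """Return xi in {0,1}^n_heads with round(prune_pct * n_heads) zeros."""
    n_pruned = round(n_heads * prune_pct)
    gates = np.ones(n_heads, dtype=int)
    gates[rng.choice(n_heads, size=n_pruned, replace=False)] = 0
    return gates

rng = np.random.default_rng(0)
for seed in range(3):  # three samplings of xi per experiment
    xi = sample_head_gates(n_heads=144, prune_pct=0.60, rng=rng)
    print(seed, "heads kept:", int(xi.sum()))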
For example, the string 777322 indicates that 7 heads each were retained in the first three layers, 3 in the fourth layer, and 2 each in the last two layers. For each pruning percentage, the first row corresponds to the configuration in which the heads considered important (Voita et al., 2019b) were retained, and the second row corresponds to the adversarial configuration in which the heads considered important were pruned. We identify no preference in pruning, as for each pruning percentage the performance of both configurations is very similar.

Pruning Based on Component. Some studies show that heads in the ED component are most important while those in the ES component are least important (Voita et al., 2019b). We choose 4 different pruning percentages and, in each case, consider three configurations where the number of attention heads is least in one chosen component (ES, ED, DS). The configurations and corresponding BLEU scores on the EN-RU dataset are shown in Table 3.

  Pruning %   Configuration   BLEU Score
  Baseline    (48, 48, 48)    29.09
  48%         (14, 31, 30)    28.96 (-0.13)
              (31, 14, 30)    29.00 (-0.09)
              (30, 31, 14)    29.13 (+0.04)
  60%         (12, 21, 25)    28.48 (-0.61)
              (21, 12, 25)    28.78 (-0.31)
              (25, 21, 12)    28.48 (-0.61)
  75%         (8, 13, 15)     27.95 (-1.14)
              (13, 8, 15)     27.96 (-1.13)
              (15, 13, 8)     28.04 (-1.05)
  82%         (5, 9, 12)      27.24 (-1.85)
              (9, 5, 12)      26.95 (-2.14)
              (12, 9, 5)      27.83 (-1.26)

Table 3: BLEU scores for different pruning configurations of the Transformer, specified by the triple denoting the number of heads retained in the Encoder-Self, Encoder-Decoder, and Decoder-Self attention components.

We identify no consistent preference in the pruning strategy: in the 4 cases considered, each of the 3 configurations has the highest BLEU score in at least one case. Note that we chose the number of heads in each layer (14, 31, etc.) to be consistent with those used in (Voita et al., 2019b).

Experimental Results on BERT

Varying Pruning Percentage. We vary the pruning percentage from 10 to 90% and report the accuracy on the 4 GLUE tasks: MNLI-M, QQP, QNLI, and SST-2 (Table 4). We observe that half of the attention heads can be pruned with an average accuracy drop of under 1%. As shown in Figure 1, beyond 50% pruning the accuracy drop is sharper.

Pruning based on Layer Numbers. To identify any preference for pruning heads in specific layers, we consider several configurations, as shown in Table 5, where we prune a subset of layers entirely, i.e., we prune all the attention heads of particular layers. When all the self-attention heads of a layer l are pruned, only the feed-forward network of that layer will be active, whose input will just be the output from the previous layer l-1. Bottom layers of BERT have been identified to model word morphology (Liu et al., 2019; Belinkov et al., 2017) and are considered to be important (Sajjad et al., 2020). Further, recent work has identified high cosine-similarity between output vectors of the top layers, indicating reduced importance of top layers (Goyal et al., 2020). We relate these studies to pruning by comparing the pruning of the same number of top and bottom layers (rows 2-9 in Table 5). Amongst the four cases, two cases each favor pruning top layers and bottom layers, revealing no preference in pruning. The middle layers in BERT have been shown to have specific characteristics of higher attention entropy and greater attention to specific tokens (Clark et al., 2019). We thus considered configurations where we compare pruning top and bottom layers against pruning middle layers (last eight rows of Table 5).
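To make the configuration strings concrete, here is a hypothetical helper that expands a string such as "777322" into per-layer head gates; which specific heads are kept within a layer is assumed to be random here, in keeping with the random-pruning setup, and this is not the paper's released code.

# Illustrative helper for the per-layer configuration strings above.
import numpy as np

def config_to_gates(config: str, heads_per_layer: int = 8, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    gates = []
    for ch in config:  # one digit per layer = number of heads retained
        keep = int(ch)
        layer = np.zeros(heads_per_layer, dtype=int)
        layer[rng.choice(heads_per_layer, size=keep, replace=False)] = 1
        gates.append(layer)
    return np.stack(gates)  # (layers, heads): 1 = kept, 0 = pruned

g = config_to_gates("777322")
print(g.sum(axis=1))  # [7 7 7 3 2 2] heads kept per layer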
The results indicate a clear preference: in 14 out of 16 cases, pruning the middle layers performs worse than pruning an equal number of layers distributed among the top/bottom layers. Indeed, we incur an additional average drop in accuracy of over 2% for the QNLI and SST-2 tasks, indicating a task-specific sensitivity to pruning middle layers.

Recent work has identified that consecutive layers of BERT have similar functionality (Lan et al., 2019). To study this, we considered configurations where six even and six odd alternate layers are pruned, and compared them with other strategies of pruning 50% of the layers of BERT (Table 6). We observe that the odd configuration performs better than the Top 6 and Bottom 6 configurations, indicating a preference to avoid pruning consecutive layers.

  Layers Pruned      MNLI-M   QQP     QNLI    SST-2
  Top 6              80.98    90.52   87.44   90.02
  Bottom 6           79.29    90.17   87.40   91.05
  Even 6             81.54    90.74   86.39   90.36
  Odd 6              81.95    90.58   90.18   92.20
  Top 3, Bottom 3    81.72    90.67   88.30   92.31
  Middle 6           80.08    90.49   87.07   87.84

Table 6: Accuracy on GLUE tasks when half of the layers of BERT are pruned. Pruning odd-numbered layers retains the maximal accuracy across most of the tasks.

Effect of Fine-Tuning. Recent studies (Kovaleva et al., 2019; Houlsby et al., 2019) have reported that when fine-tuning BERT for specific tasks, the top layers change much more than the lower layers. We now evaluate this for fine-tuning after pruning. In Figure 2, we plot the average change in magnitude of parameters for different attention heads ($W^q$, $W^k$, $W^v$ in Equation 1) for the MNLI-M task. We observe no spatial patterns in the parameter changes, or with respect to relative distance from pruned heads. In particular, for all experiments in Tables 5 and 6, the average change in attention parameters for any two layers differs by less than 10%. This shows that the compensation for pruned attention heads is roughly equally distributed across the un-pruned heads.

Figure 2: Head-wise average magnitude change of weights during fine-tuning for the following pruning configurations of BERT for the MNLI-M task: (a) 10% pruned, (b) 50% pruned, (c) 90% pruned, (d) top three layers pruned, (e) bottom three layers pruned, (f) alternate layers pruned.

Conclusion

We systematically studied the effect of pruning attention heads in Transformer and BERT models. We confirmed the general expectation that a large number of attention heads can be pruned with limited impact on performance. For Transformers, we observed no preference for pruning attention heads which have been identified as important in interpretability studies. Similarly, for BERT we found no preference between pruning top and bottom layers. However, pruning middle layers and consecutive layers led to a larger drop in accuracy. We also observe that the recovery during fine-tuning was uniformly distributed across attention heads. We conclude that there is often no direct entailment between the importance of an attention head as characterised in several recent studies and the low prunability of the respective head under random pruning.

  % Pruning      EN-RU           EN-DE
  0 (Baseline)   29.09           27.95
  25             29.59 (+0.50)   28.19 (+0.24)
  35             29.29 (+0.20)   27.94 (-0.01)
  50             29.38 (+0.29)   28.02 (+0.07)
  55             29.00 (-0.09)   28.24 (+0.29)
  60             28.94 (-0.15)   27.88 (-0.07)
  75             28.22 (-0.87)   27.49 (-0.46)
  81             27.97 (-1.12)   26.80 (-1.15)
  87             27.47 (-1.62)   26.92 (-1.03)

Table 1: BLEU scores for Transformer on EN-RU and EN-DE datasets when subject to varying pruning percentages. Difference from the baseline score is indicated in brackets.

Table 2: BLEU scores for different pruning configurations of the Transformer. Every row has 2 configurations: first, where the important heads are retained, and second, where the important heads are pruned.

Table 4: Performance of random pruning on BERT for different pruning percentages. The accuracies are reported on the official GLUE development datasets.

Table 5: Accuracy on GLUE tasks for multiple layerwise pruned configurations of BERT.

Footnotes:
[1] We avoid WNLI, RTE, MRPC, STS-B, and CoLA, as the results on these datasets tend to be noisy and unstable, as reported in (Gordon et al., 2020; Sajjad et al., 2020).
[2] https://github.com/google-research/bert
[3] https://github.com/iitmnlp/head_importance_and_pruning

Acknowledgements

We thank Amazon Web Services for their support with the NVIDIA GPUs. We also thank Google for the free TPU credits under their TFRC program, and for supporting Preksha Nema through their Google Ph.D. India Fellowship program. We also thank the Department of Computer Science and Engineering as well as the Robert Bosch Centre for Data Science and Artificial Intelligence (RBC-DSAI), IIT Madras for providing us with all the resources that made this work possible.
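The head-wise weight-change analysis behind Figure 2 can be sketched as below; the checkpoint tensors here are random stand-ins, and the even column-split per head assumes the standard packed multi-head projection layout (an assumption, not a detail given in the paper).

# Sketch of the fine-tuning analysis above: average absolute change in
# each head's slice of W_q/W_k/W_v between two checkpoints.
import numpy as np

def per_head_change(before, after, n_heads: int):
    """before/after: (d_model, d_model) projection matrices; columns are
    split evenly across heads, as in standard multi-head implementations."""
    deltas = np.abs(after - before)
    return [float(h.mean()) for h in np.split(deltas, n_heads, axis=1)]

rng = np.random.default_rng(0)
W_before = rng.normal(size=(768, 768))
W_after = W_before + 0.01 * rng.normal(size=(768, 768))
print(per_head_change(W_before, W_after, n_heads=12)[:3])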
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, Sylvain Gelly, arXiv:1902.00751Parameter-efficient transfer learning for nlp. arXiv preprintNeil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. arXiv preprint arXiv:1902.00751. Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, arXiv:1909.10351Fang Wang, and Qun Liu. 2019. Tinybert: Distilling bert for natural language understanding. arXiv preprintXiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019. Tinybert: Distilling bert for natural language understanding. arXiv preprint arXiv:1909.10351. Adam: A method for stochastic optimization. P Diederik, Jimmy Kingma, Ba, arXiv:1412.6980arXiv preprintDiederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Revealing the dark secrets of bert. Olga Kovaleva, Alexey Romanov, Anna Rogers, Anna Rumshisky, arXiv:1908.08593arXiv preprintOlga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the dark secrets of bert. arXiv preprint arXiv:1908.08593. Albert: A lite bert for self-supervised learning of language representations. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut, arXiv:1909.11942arXiv preprintZhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learn- ing of language representations. arXiv preprint arXiv:1909.11942. Attentive student meets multi-task teacher: Improved knowledge distillation for pretrained models. Linqing Liu, Huan Wang, Jimmy Lin, Richard Socher, Caiming Xiong, arXiv:1911.03588arXiv preprintLinqing Liu, Huan Wang, Jimmy Lin, Richard Socher, and Caiming Xiong. 2019. Attentive student meets multi-task teacher: Improved knowledge dis- tillation for pretrained models. arXiv preprint arXiv:1911.03588. Pruning a bert-based question answering model. Scott Mccarley, arXiv:1910.06360arXiv preprintJ Scott McCarley. 2019. Pruning a bert-based question answering model. arXiv preprint arXiv:1910.06360. Are sixteen heads really better than one?. Paul Michel, Omer Levy, Graham Neubig, Advances in Neural Information Processing Systems. Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? In Ad- vances in Neural Information Processing Systems, pages 14014-14024. Training tips for the transformer model. Martin Popel, Ondřej Bojar, The Prague Bulletin of Mathematical Linguistics. 1101Martin Popel and Ondřej Bojar. 2018. Training tips for the transformer model. The Prague Bulletin of Mathematical Linguistics, 110(1):43-70. Poor man's bert: Smaller and faster transformer models. Hassan Sajjad, Fahim Dalvi, Nadir Durrani, Preslav Nakov, arXiv:2004.03844arXiv preprintHassan Sajjad, Fahim Dalvi, Nadir Durrani, and Preslav Nakov. 2020. Poor man's bert: Smaller and faster transformer models. arXiv preprint arXiv:2004.03844. Victor Sanh, Lysandre Debut, Julien Chaumond, Thomas Wolf, arXiv:1910.01108Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprintVictor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. 
Q-bert: Hessian based ultra low precision quantization of bert. Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, W Michael, Kurt Mahoney, Keutzer, arXiv:1909.05840arXiv preprintSheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W Mahoney, and Kurt Keutzer. 2019. Q-bert: Hessian based ultra low precision quantization of bert. arXiv preprint arXiv:1909.05840. Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, Illia Polosukhin, Advances in neural information processing systems. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008. The bottom-up evolution of representations in the transformer: A study with machine translation and language modeling objectives. Elena Voita, Rico Sennrich, Ivan Titov, arXiv:1909.01380arXiv preprintElena Voita, Rico Sennrich, and Ivan Titov. 2019a. The bottom-up evolution of representations in the transformer: A study with machine translation and language modeling objectives. arXiv preprint arXiv:1909.01380. Analyzing multihead self-attention: Specialized heads do the heavy lifting, the rest can be pruned. Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, Ivan Titov, arXiv:1905.09418arXiv preprintElena Voita, David Talbot, Fedor Moiseev, Rico Sen- nrich, and Ivan Titov. 2019b. Analyzing multi- head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. arXiv preprint arXiv:1905.09418. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R Bowman, arXiv:1804.07461Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprintAlex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. Ofir Zafrir, Guy Boudoukh, Peter Izsak, Moshe Wasserblat, arXiv:1910.06188Q8bert: Quantized 8bit bert. arXiv preprintOfir Zafrir, Guy Boudoukh, Peter Izsak, and Moshe Wasserblat. 2019. Q8bert: Quantized 8bit bert. arXiv preprint arXiv:1910.06188.
3,426,165
Comparison of Grapheme-to-Phoneme Conversion Methods on a Myanmar Pronunciation Dictionary
Grapheme-to-Phoneme (G2P) conversion is the task of predicting the pronunciation of a word given its graphemic or written form. It is a highly important part of both automatic speech recognition (ASR) and text-to-speech (TTS) systems. In this paper, we evaluate seven G2P conversion approaches: Adaptive Regularization of Weight Vectors (AROW) based structured learning (S-AROW), Conditional Random Field (CRF), Joint-sequence models (JSM), phrase-based statistical machine translation (PBSMT), Recurrent Neural Network (RNN), Support Vector Machine (SVM) based point-wise classification, Weighted Finite-state Transducers (WFST) on a manually tagged Myanmar phoneme dictionary. The G2P bootstrapping experimental results were measured with both automatic phoneme error rate (PER) calculation and also manual checking in terms of voiced/unvoiced, tones, consonant and vowel errors. The result shows that CRF, PBSMT and WFST approaches are the best performing methods for G2P conversion on Myanmar language.
[ 11689669, 8884845, 3219410, 5284722, 11985819 ]
Comparison of Grapheme-to-Phoneme Conversion Methods on a Myanmar Pronunciation Dictionary

Ye Kyaw Thu (Artificial Intelligence Lab, Okayama Prefectural University, Japan; Language and Speech Science Research Lab, Waseda University, Japan), Win Pa Pa (winpapa@ucsy.edu.mm, Natural Language Processing Lab, University of Computer Studies, Yangon, Myanmar), Yoshinori Sagisaka (ysagisaka@gmail.com, Language and Speech Science Research Lab, Waseda University, Japan), Naoto Iwahashi (iwahashi@c.oka-pu.ac.jp, Artificial Intelligence Lab, Okayama Prefectural University, Japan)

Proceedings of the 6th Workshop on South and Southeast Asian Natural Language Processing, Osaka, Japan, December 11-17, 2016

Introduction

Grapheme-to-Phoneme (G2P) conversion models are important for natural language processing (NLP), automatic speech recognition (ASR) and text-to-speech (TTS) development. Although many machine learning approaches are applicable for G2P conversion, most of them are supervised learning approaches that require clean annotated training data as a prerequisite, and preparing such data is costly. As a consequence, G2P models are rarely available for under-resourced languages such as South and Southeast Asian languages. In practice, we need to perform bootstrapping or active learning with a small manually annotated G2P dictionary for efficient development of G2P converters. In this paper, we examine seven G2P conversion methodologies for incremental training with a small Myanmar language G2P lexicon. We used automatic evaluation in the form of phoneme error rate (PER), and also manually evaluated Myanmar-language-specific errors, such as inappropriate voiced/unvoiced conversion and tones, on syllable units.

G2P Conversion for Myanmar Language

Myanmar language (Burmese) is one of the under-resourced Southeast Asian languages for NLP. It has SOV (Subject-Object-Verb) typology, and syntactically it is quite similar to Japanese and Korean in that functional morphemes succeed content morphemes, and verb phrases succeed noun phrases. In Myanmar text, words composed of single or multiple syllables are usually not separated by white space. Although spaces are used for separating phrases for easier reading, they are not strictly necessary, and these spaces are rarely used in short sentences.
In this paper, we only consider phonetic conversion of syllables within words for G2P bootstrapping with a dictionary. Myanmar syllables are generally composed of sequences of consonants and (zero or more) vowel combinations starting with a consonant. Here, vowel combinations can be single vowels, sequences of vowels and sequences of vowels starting with a consonant that modifies the pronunciation of the first vowel. Some examples of Myanmar vowel combinations are အင (in:), အ န (ein:), အ င (ain:), အန (an:) and အ င (aun:). The relationship between words and the pronunciation of Myanmar language is not completely consistent, ambiguous, and context dependent, depending on adjacent syllables. Moreover, there are many exceptional cases and rules that present difficulties for G2P conversion (Ye Kyaw Thu et al., 2015a). Some Myanmar syllables can be pronounced in more than 4 ways depending on the context and Part-of-Speech (POS) of the syllable. As an example, consider the pronunciation of the two-syllable word ရ င ဝယ (meaning trade) with corresponding standard pronunciation of its syllables " ရ င " (pronunciation: jaun:) and "ဝယ " (pronunciation: we). This is a simple pronunciation pattern of a Myanmar word and it has no pronunciation change (i.e. jaun: + we => jaun:). However, many pronunciations of syllables are changed depending on their combination such as in the Myanmar word မတ ( မတ syllable + တ syllable), love in English; the pronunciation changes from "mi' + ta" to "mji' + ta", န ရ က (န syllable + ရ က syllable) , ear in English; the pronunciation changes from "na: + jwe'" to "na-+ jwe'" . POS is also a factor for pronunciation. The Myanmar word ထမင ခ က can be pronounced in two ways; "htamin: che'" when used as a verb "cook rice" and "htamin: gye'" when used as a noun "a cook". In another example, the three syllable Myanmar word စ ရင စစ can be pronounced "sa jin: si'" when used to mean verb "audit" or "sajin: zi'" when used to mean a noun "auditor"; the single-syllable Myanmar word ခ င can be pronounced "chein". for usage as an adjective "dented" or can be pronounced "gyein.". when used as a noun meaning "food carrier"; one syllable Myanmar word ခ can be pronounced "gyi" when used as a noun meaning "barking deer" or can be pronounced "chei" when used as a verb. The most common pronunciation change of Myanmar syllables is unvoiced to voiced and it is contextually dependent, for example the change from: "pi. tau'" to "badau'" for the word ပ တ က (Pterocarpus macrocarpus flower) , "pja. tin: pau'" to "badin: bau'" for ပတင ပ က (window) word. Some same syllables within a word can be pronounced differently, for example, the Myanmar consonant က pronounced "ka." and "ga-" for three syllables Myanmar word ကကတစ "ka. ga-di'" (giant sea perch in English). In some Myanmar words, the pronunciation of a syllable is totally different from its grapheme or spelling such as one old Myanmar name လ လင က "lu. lin kyo" pronounced as "nalin gyo". (Davel and Martirosian, 2009) designed a process for the development of pronunciation dictionaries in resource-scarce environments, and applied it to the development of pronunciation dictionaries for ten of the official languages of South Africa. The authors mentioned that it is a means of developing practically usable pronunciation dictionaries with minimal resources. 
(Schlippe, 2014) proposed efficient methods which contribute to rapid and economic semi-automatic pronunciation dictionary development and evaluated them on English, German, Spanish, Vietnamese, Swahili, and Haitian Creole. A novel modified Expectation-Maximization (EM)-driven G2P sequence alignment algorithm that supports joint-sequence language models, and several decoding solutions using weighted finite-state transducers (WFSTs) was presented in (Novak et al., 2012). G2P conversion using statistical machine translation (SMT) was proposed in (Laurent et al., 2009), (Karanasou and Lamel, 2011). In (Laurent et al., 2009), it is shown that applying SMT gives better results than a joint sequence model-based G2P converter for French. The automatic generation of a pronunciation dictionary is proposed in (Karanasou and Lamel, 2011), and their technique used Moses phrase-based SMT toolkit (Koehn et al., 2007) G2P conversion. (Damper et al., 1999) compared different G2P methods and found that data-driven methods outperform rule-based methods. Related Work As far as the authors are aware, there have been only three published methodologies for Myanmar language G2P conversion. (Ei Phyu Phyu Soe, 2013) proposed a dictionary based approach and analyzed it only on pure Myanmar syllables without considering subscript consonants or Pali words. It is a simple approach with a dictionary that is not able to handle out-of-vocabulary (OOV) words. (Ye Kyaw Thu et al., 2015a) proposed four simple Myanmar syllable pronunciation patterns as features that can be used to augment the models in a CRF approach to G2P conversion. The results show that the new features can substantially improve the accuracy of G2P conversion especially on conversion of syllables specifically targeted by the new feature sets. (Ye Kyaw Thu et al., 2015b) applied a phrase-based SMT (PBSMT) approach to Myanmar G2P conversion and found that G2P conversion using SMT outperformed a CRF approach, with a considerably faster training time. Their comparison between the CRF and PBSMT models shows that the PBSMT approach can handle pronunciation prediction on new compound words (a common form of OOV) well, and can also handle the influence of neighbouring words on the pronunciation of a word. G2P Conversion Methodologies In this section, we describe the G2P conversion methodologies used in the experiments in this paper. (Kubo et al., 2014) proposed Structured AROW extending AROW (Crammer et al., 2013) to structured learning for G2P conversion. AROW is an online learning algorithm for binary classification that that has several useful properties: large margin training, confidence weighting, and the capacity to handle non-separable data. To overcome the overfitting problems encountered by competitive methods such as Margin Infused Relaxed Algorithm (MIRA) (Crammer and Singer, 2003) and the Confidence Weighted Algorithm (CW) (Dredze et al., 2008) AROW recasts the terms for the constraint of CW as regularizers. S-AROW is applicable for G2P conversion tasks and has a shorter learning time than MIRA. It also has been shown to have a lower phoneme and word error rate compared to MIRA (Kubo et al., 2014). 
Structured Adaptive Regularization of Weight Vectors (S-AROW)

Conditional Random Fields

Linear-chain conditional random fields (CRFs) (Lafferty et al., 2001) are models that consider dependencies among the predicted segmentation labels that are inherent in the state transitions of finite state sequence models, and they can incorporate domain knowledge effectively into segmentation. Unlike heuristic methods, they are principled probabilistic finite state models on which exact inference over sequences can be efficiently performed. The model computes the following probability of a label sequence $Y = \{y_1, \ldots, y_T\}$ of a particular character string $W = \{w_1, \ldots, w_T\}$:

$$P_\lambda(Y|W) = \frac{1}{Z(W)} \exp\left( \sum_{t=1}^{T} \sum_{k=1}^{|\lambda|} \lambda_k f_k(y_{t-1}, y_t, W, t) \right) \quad (1)$$

where $Z(W)$ is a normalization term, $f_k$ is a feature function, and $\lambda$ is a feature weight vector.

Joint-sequence models (JSM)

The joint-sequence models (JSM) approach for G2P was proposed by (Bisani and Ney, 2008), and it is also one of the most popular approaches for G2P conversion. The fundamental idea of JSM is that both the grapheme and phoneme sequences can be generated jointly by means of a sequence of joint units (graphones) which carry both grapheme and phoneme symbols. The goal of the JSM is to find a sequence of $Y$ phonemes, $Q = Q_1^Y = \{q_1, q_2, \ldots, q_Y\}$, given a sequence of $X$ graphemes, $G = G_1^X = \{g_1, g_2, \ldots, g_X\}$. This problem can be described as the determination of the optimal sequence of phonemes, $\hat{Q}$, that maximizes the conditional probability given the sequence of graphemes, $G$:

$$\hat{Q} = \arg\max_Q P(Q|G) \quad (2)$$

The calculation for all possible sequences of $Q$ directly from $P(Q|G)$ is difficult, and we can express it using Bayes' rule as follows:

$$\hat{Q} = \arg\max_Q P(Q|G) = \arg\max_Q \frac{P(G|Q) \cdot P(Q)}{P(G)} \quad (3)$$

Here, $P(G)$ is common to all sequences $Q$, so the above equation can be simplified as follows:

$$\hat{Q} = \arg\max_Q P(G|Q) \cdot P(Q) \quad (4)$$

Phrase-based Statistical Machine Translation (PBSMT)

A PBSMT translation model is based on joint phrasal units analogous to graphones (Koehn et al., 2003b), (Och and Marcu, 2003). A phrase-based translation system also includes length models, a language model on the target side, and a re-ordering model (which is typically not used for monotonic transduction such as G2P conversion). The models are integrated within a log-linear framework.

Recurrent Neural Network (RNN) Encoder-Decoder

The RNN Encoder-Decoder technique for machine translation (Bahdanau et al., 2014) is a neural network model that links blocks of Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) in an RNN that encodes the source language, with decoder units that generate the target language. The basic architecture of the Encoder-Decoder model includes two networks: one encodes the source sentence into a real-valued vector, and the other decodes the vector into a target sentence. In the case of G2P, the input is a sequence of graphemes of a Myanmar word, and the output is a phoneme sequence. For example, in G2P conversion for the Myanmar word ရ က ပ န သ (hidden talent in English), the model takes the graphemes of the source word as input: ရ က , ပ န , သ , and outputs the target phoneme sequence jwe', poun: and dhi:, which is terminated by an end-of-sequence token (see Figure 1).

Support Vector Machine (SVM) based Point-wise classification

Generally, sequence-based pronunciation prediction methods such as (Nagano et al., 2005) require a fully annotated training corpus.
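As a toy illustration of the noisy-channel decomposition in Equations (2)-(4), the following enumerates candidate phoneme sequences and scores them by P(G|Q) · P(Q). All probability tables here are invented for the example; a real JSM estimates graphone models from aligned training data.

# Toy decoder for Eq. (4): argmax_Q P(G|Q) * P(Q) by enumeration.
from itertools import product

P_g_given_q = {  # P(grapheme | phoneme), the "channel" model (invented)
    ("ka.", "က"): 0.7, ("ga-", "က"): 0.3,
    ("ti'", "တစ"): 0.6, ("di'", "တစ"): 0.4,
}
P_q = {("ka.", "ti'"): 0.1, ("ka.", "di'"): 0.2,
       ("ga-", "ti'"): 0.1, ("ga-", "di'"): 0.6}  # phoneme "LM" (invented)

def decode(graphemes):
    candidates = product(["ka.", "ga-"], ["ti'", "di'"])
    def score(q):
        channel = 1.0
        for q_i, g_i in zip(q, graphemes):
            channel *= P_g_given_q.get((q_i, g_i), 1e-9)
        return channel * P_q.get(q, 1e-9)
    return max(candidates, key=score)

print(decode(("က", "တစ")))  # ('ga-', "di'")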
To reduce the cost of preparing a fully annotated corpus, and also considering possible future work on domain adaptation from the general to the target domain, techniques involving only partial annotation have been developed (Ringger et al., 2007), (Tsuboi et al., 2008). (Neubig and Mori, 2010) proposed the combination of two separate techniques to achieve more efficient corpus annotation: point-wise estimation and word-based annotation. Point-wise estimation assumes that every decision about a segmentation point or word pronunciation is independent from the other decisions (Neubig and Mori, 2010). From this concept, a single annotation model can be trained on singly annotated words, even if the surrounding words are not annotated, such as in င /{ } က :ဇ :/{kyei: zu:} တင ပ တယ /{tin ba de} (Thank you in English). In this paper, we applied this approach to phonemes of syllables within a word, and thus the previous example changes to င /{ } က :/{kyei:} ဇ :/{zu:} တင /{tin} ပ /{ba} တယ /{de}.

Weighted Finite-state Transducers (WFST)

(Novak et al.) introduced a modified WFST-based many-to-many Expectation Maximization (EM) driven alignment algorithm for G2P conversion, and presented preliminary experimental results applying an RNN language model (RNNLM) as an N-best rescoring mechanism for G2P conversion. Their many-to-many approach contained three main modifications to G2P alignment: (1) only many-to-one and one-to-many arcs are trained, (2) a joint WFSA alignment lattice is built from each sequence pair using a log semiring, and (3) all remaining arcs (including deletion and substitution) are initialized to and constrained to maintain a non-zero weight. This approach provides EM training that produces better estimation for all possible transitions. The authors applied an RNNLM-based N-best rescoring method to G2P conversion.

Experimental Setup

Data Preparation

In the experiments, we used 25,300 words of Myanmar Language Commission (MLC) Dictionary data (Lwin, 1993). We randomized the original MLC dictionary and prepared 25,000 words for training and 300 words for three open test sets (100 words for each test set) for evaluation. In order to study how the seven G2P approaches behave with varying amounts of training data, we ran a sequence of experiments that trained G2P models from 2,500 words up to 25,000 words (2,393 unique graphemes, 1,864 unique pronunciations and 113 unique phonemes) in increments of 2,500 words. 100 words from the training data were also used for closed testing. The G2P mapping uses the same mapping proposed by (Ye Kyaw Thu et al., 2015b), and some examples are given in Table 1.

  Consonant    Vowel           Independent Vowel   Foreign Pronunciation
  က => k       ွ : => wa:      ဥ => au.            (က ) => K
  ခ => kh      ွ => wa.        ဦ => u              (ခ ) => KH
  ဂ => g       ေွ: => wei:     ဦ => u:             (လ ) => L
  ဃ => gh      ေွ့ => wei.     ၏ => i.             (စ ) => S
  င => ng      ွန => un        ဤ => i              (ထ ) => HT

Table 1: Examples of the grapheme-to-phoneme mapping.

• CRFSuite: We used the CRFsuite tool (version 0.12) (Okazaki, 2007) (https://github.com/chokkan/crfsuite) for training and testing CRF models. The main reason was its speed relative to other CRF toolkits.

• KyTea: A general toolkit (version 0.47) (Neubig and Mori, 2010) (https://github.com/neubig/kytea) that is able to handle word segmentation and tagging. It uses a point-wise classifier-based (SVM or logistic regression) approach, and the classifiers are trained with LIBLINEAR (http://www.csie.ntu.edu.tw/~cjlin/liblinear/). We used the KyTea toolkit for studying G2P bootstrapping with SVM based point-wise classification for Myanmar language.

• Moses: We used the PBSMT system provided by the Moses toolkit (http://www.statmt.org/moses/) for training the PBSMT model for G2P conversion. The word segmented source language was aligned with the word segmented target language using GIZA++ (Och and Ney, 2000). The alignment was symmetrized by the grow-diag-final-and heuristic (Koehn et al., 2003a).
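The point-wise idea above can be sketched as an independent per-syllable classifier over local context features. This sketch uses scikit-learn's LinearSVC rather than KyTea's own trainer, and the features and toy training data are illustrative assumptions only.

# Hedged sketch of point-wise pronunciation classification: each
# syllable's phoneme is predicted independently from its local context.
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

def features(syllables, i):
    return {"cur": syllables[i],
            "prev": syllables[i - 1] if i > 0 else "<s>",
            "next": syllables[i + 1] if i + 1 < len(syllables) else "</s>"}

# toy training data: (syllable sequence, per-syllable phonemes)
train = [(["မတ", "တ"], ["mji'", "ta"]),
         (["တ", "ရ"], ["ta", "ja:"])]
X = [features(ws, i) for ws, _ in train for i in range(len(ws))]
y = [p for _, ps in train for p in ps]

model = make_pipeline(DictVectorizer(), LinearSVC())
model.fit(X, y)
print(model.predict([features(["မတ", "တ"], 1)]))  # ['ta']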
The lexicalized reordering model was trained with the msd-bidirectional-fe option (Tillmann, 2004). We used SRILM for training the 5-gram language model with interpolated modified Kneser-Ney discounting (Stolcke, 2002), (Chen and Goodman, 1996). Minimum error rate training (MERT) (Och, 2003) was used to tune the decoder parameters, and the decoding was done using the Moses decoder (version 2.1.1). We used the default settings of Moses for all experiments.

• Phonetisaurus: A WFST-driven G2P converter (Novak et al., 2012) (https://github.com/AdolfVonKleist/Phonetisaurus). Version 0.8a was used. An EM-based many-to-many aligner was applied to grapheme and phoneme sequences (training data) prior to building a G2P model. In the updated version of Phonetisaurus, dictionary alignment is performed with OpenFst (http://www.openfst.org/twiki/bin/view/FST/WebHome). In order to estimate an n-gram language model, any language model toolkit such as MITLM (https://github.com/mitlm/mitlm) or SRILM (http://www.speech.sri.com/projects/srilm/) can be used. We used the MITLM toolkit, and conversion from the ARPA format to a binary FST representation was done with OpenFst.

• Sequitur: A data-driven G2P converter developed at RWTH Aachen University, Department of Computer Science, by Maximilian Bisani (Bisani and Ney, 2008). The 2016-04-25 release version (https://www-i6.informatik.rwth-aachen.de/web/Software/g2p.html) was used for the JSM G2P conversion experiment.

• Slearp: Structured LEarning And Prediction (Kubo et al., 2014). We used Slearp (version 0.96) (https://osdn.jp/projects/slearp/) for S-AROW G2P model building.

We ran all of the above software with default parameters for building the G2P models. Although feature engineering is usually an important component of machine-learning approaches, the G2P models were built with features from only the grapheme and phoneme parallel data, to allow for a fair comparison between the seven approaches.

Evaluation

To evaluate the quality of the G2P approaches, we used two evaluation criteria. One is automatic evaluation of phoneme error rate (PER) with the SCLITE (score speech recognition system output) program from the NIST scoring toolkit SCTK version 2.4.10 (http://www1.icsi.berkeley.edu/Speech/docs/sctk-1.2/sclite.htm). The other evaluation was done manually, by counting voiced/unvoiced, tone, consonant and vowel errors in the G2P outputs.

The SCLITE scoring method for calculating the erroneous words in Word Error Rate (WER) is as follows: first make an alignment of the G2P hypothesis (the output from the trained model) and the reference (human transcribed) word strings, and then perform a global minimization of the Levenshtein distance function, which weights the cost of correct words, insertions (I), deletions (D) and substitutions (S). The formula for WER is as follows:

$$WER = \frac{(I + D + S) \times 100}{N} \quad (5)$$

In our case, we trained G2P models with syllable-segmented words; thus alignment was done on syllable units, and the PER was derived from the Levenshtein distance at the phoneme level rather than the word level. As an example, consider phoneme-level syllable alignment, counting I, D and S, for the Myanmar words "ခ င ခ က " (exception in English) and "စ တ ပ က လက ပ က " (disappointed in English).

Results

Automatic Evaluation with Phoneme Error Rate (PER)

We used PER to evaluate the performance of G2P conversion. We computed the PER scores using sclite (http://www1.icsi.berkeley.edu/Speech/docs/sctk-1.2/sclite.htm) on the hypotheses of the G2P models and the references.
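Equation (5) and its phoneme-level variant reduce to a Levenshtein alignment over phoneme sequences; a minimal sketch, with an invented example:

# Minimal sketch of the PER computation: Levenshtein alignment over
# phoneme sequences, error rate = (I + D + S) * 100 / N as in Eq. (5).
def phoneme_error_rate(ref, hyp):
    n, m = len(ref), len(hyp)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i          # deletions
    for j in range(m + 1):
        d[0][j] = j          # insertions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[n][m] * 100.0 / n

ref = "tha' ba. ra. nan baun:".split()
hyp = "tha' ba. ra- baun:".split()
print(round(phoneme_error_rate(ref, hyp), 1))  # 1 S + 1 D over 5 => 40.0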
The results are presented in Figure 2; as noted in Section 5.3, a lower PER means better performance. The experimental results also show how the learning curves of the seven G2P conversion approaches vary with the amount of training data. We can clearly see that there is no significant learning improvement for the SVM-based point-wise classification in the evaluation results on either the closed or the three open test sets (see Figure 2, (g)). Also, the PER results of S-AROW, JSM, PBSMT and RNNA on the closed test data are unstable. Each of the graphs shows the performance of G2P conversion; the best PER score (i.e. 0) on the closed test data was achieved by the RNN, S-AROW and WFST. The best PER scores of the CRF and PBSMT on the closed test data were 6.4 and 7.5, respectively. On the other hand, the final models of the CRF and WFST achieved the lowest PER scores for all three open test data sets (open1, open2 and open3): a PER of 14.7 on open1 was achieved by WFST, and PERs of 11.4 on open2 and 15.7 on open3 were achieved by both CRF and WFST. An interesting point is that the PBSMT approach achieved close to the lowest PERs for the three open test sets (16.1 for open1, 13.1 for open2 and 22.0 for open3). Figure 2, (e) shows that the RNN approach is able to reach a zero PER score on the closed test data from epoch two (i.e. with 5,000 words). The PER of the RNN is lower than that of the RNNA approach for both the closed and the open test data (see Figure 2, (e) and (f)).

Manual Evaluation

Manual evaluation was mainly done on the results of the models trained with 25,000 words, in terms of errors in voiced/unvoiced pronunciation changes, vowels, consonants and tone. The results show that voiced/unvoiced errors are the most frequent among them. (Ye Kyaw Thu et al., 2015a) discussed the importance of the pronunciation change patterns, and our experimental results also show how these patterns affect G2P performance. The pronunciation error rates for PBSMT and WFST are comparable, and the PBSMT approach gives the best performance overall. The SVM-based point-wise classification approach produced the most phoneme errors on unknown words (i.e. UNK tagging for the OOV case by KyTea) among the seven G2P approaches. Generally, all methods handle tone well, and we assume that almost all the tonal information of Myanmar graphemes is covered in the training dictionary. The lowest error rate on tone was achieved by PBSMT. From the overall manual evaluation results from train1 (training number 1: trained with 2,500 words) to train10 (training number 10: trained with 25,000 words), we can see clearly that the RNN, PBSMT and WFST approaches gradually improve with increasing training data set size. Some difficult pronunciation changes at the consonant level (such as the pronunciation change from ljin to jin for the Myanmar word "kau'jin", " က က လ င ") can be predicted correctly by the PBSMT approach and the RNN but not by the other approaches. Although the training accuracy of the RNN is higher than that of the other techniques, in the automatic evaluation some of its OOV predictions are the worst (see Table 2).

Discussion

As presented in the previous section, some evaluation results of the G2P conversion approaches on the closed test data are inconsistent, especially for S-AROW and JSM (see Figure 2, (a) and (c)). However, all models generally improve on the three open test evaluation sets. Here we investigate the OOV rates over the test data.
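As a minimal sketch of how such an OOV-rate curve can be computed (the segment function and the word lists are hypothetical placeholders, not part of the original toolchain):

```python
def oov_rate(train_words, test_words, segment):
    """Percentage of test words containing at least one grapheme unit unseen in training."""
    seen = {g for w in train_words for g in segment(w)}
    oov = sum(1 for w in test_words if any(g not in seen for g in segment(w)))
    return oov * 100.0 / len(test_words)

# Sweep over the incremental training sets train1 (2,500 words) .. train10 (25,000 words).
# train_sets and open_tests are assumed lists of word lists; segment is a syllable segmenter.
# for k, train in enumerate(train_sets, start=1):
#     print(k, [oov_rate(train, t, segment) for t in open_tests])
```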
Figure 3 shows the OOV rate for graphemes of the three open test data sets over the incremental training process from train1 to train10. As expected, the OOV rate gradually decreases as the training data size increases.

We performed a detailed analysis of each error type by manual evaluation, and the results are shown in Figure 4. From the results, we can clearly see that SVM-based point-wise classification produced the highest number of voiced/unvoiced errors; the UNK tags and KyTea pronunciation estimation errors have already been discussed in Section 6.2.

We now turn to RNN-specific errors. RNNs are capable sequence models with high potential for building G2P conversion models, and thus we present a detailed error analysis. The RNN produced some reordering errors, and the automatic evaluation counts one reordering error as one deletion and one insertion. For example, the RNN model output for the Myanmar word "ထ ပ ပ", htoun pei BEI (recalcitrantly in English), is aligned and scored by SCLITE as follows:

Scores: (#C #S #D #I) 2 0 1 1
REF: *** htoun pei BEI
HYP: PEI htoun pei ***
Eval: I D

Some RNN pronunciation prediction errors were semantic in nature, and we were surprised to discover them. For example, the RNN model output for the Myanmar word " မ သမ ခင ", mwei: tha-MI. gin (mother in English), is the similar word " မ သဖခင ", mwei: tha-HPA. gin (father in English):

Scores: (#C #S #D #I) 3 1 0 0
REF: mwei: tha-MI. gin
HYP: mwei: tha-HPA. gin
Eval: S

Similar semantic errors were also produced by the PBSMT approach. Table 2 compares the phoneme prediction errors of all seven methods on one word:

Method | Hypothesis | Note on error
S-AROW | tha' ba. ja. nan baun: | tone error in "ba." and consonant error in "ja."
CRF | tha' ba-ja. nan baun: | consonant error in "ja."
JSM | tha' ba. ra-baun: | tone error in "ba." and "ra-"; one phoneme deletion
PBSMT | tha' ba. ja-nan baun: | tone error in "ba."
RNN | tha' ba-WA. SA MI: | the 3 predicted syllables "WA. SA MI:" are far from the correct pronunciation
SVM-based point-wise | UNK ba-ja-nan baun: | OOV error
WFST | tha' ba-ra. nan baun: | no error

Table 2: An example of the phoneme prediction errors of the G2P conversion methods.

Another interesting point is that the RNN and WFST approaches can predict correctly for some rare patterns (i.e. where the pronunciations of all syllables of a word change), even when all the other models make errors. For example, for the Myanmar word "စ ပ ခင ", za-bwe: gin: (tablecloth in English), the predictions were S-AROW: za-bwe: khin:, JSM: za-bwe: khin: and SVM-based point-wise classification: za-bwe: khin:, while RNN: za-bwe: gin: and WFST: za-bwe: gin: were correct.
7 Conclusion and Future Work

The aim of this work is to show the relative performance of different machine learning techniques on Myanmar G2P conversion. Both the automatic evaluation and the manual evaluation showed that CRF, Phonetisaurus, SMT and RNN have their own unique advantages when applied to Myanmar pronunciation prediction. Although the manual evaluation was expensive, we believe it was necessary in order to analyse these approaches in depth. In summary, our main findings are that the CRF, Phonetisaurus and SMT approaches gave rise to the lowest error rates on the most important features of Myanmar G2P conversion: voiced/unvoiced, vowel patterns and tone. We plan to investigate the performance of these approaches at the sentence level, since Myanmar pronunciation depends highly on the context.

Figure 1: An architecture of encoder-decoder machine translation for G2P conversion of the Myanmar word ရ က ပ န သ (hidden talent in English).
Figure 2: Phoneme Error Rate (PER) of the G2P conversion methodologies, one panel per method (e.g. panel (g): SVM-based point-wise classification).
Figure 3: OOV graphemes over the incremental training process.
Figure 4: Average error scores of the manual checking of the G2P conversion methods.

Acknowledgements

The authors would like to thank Dr. Andrew Finch, Multilingual Translation Lab., Advanced Speech Translation Research and Development Promotion Center, National Institute of Information and Communications Technology (NICT), Japan, for valuable comments.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.

Maximilian Bisani and Hermann Ney. 2008. Joint-sequence models for grapheme-to-phoneme conversion. Speech Communication, 50(5):434-451.

Stanley F. Chen and Joshua Goodman. 1996. An empirical study of smoothing techniques for language modeling. In Proceedings of the 34th Annual Meeting of the ACL, pages 310-318, Santa Cruz, California.
Kyunghyun Cho, Bart van Merrienboer, Çaglar Gülçehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. CoRR, abs/1406.1078.

Koby Crammer and Yoram Singer. 2003. Ultraconservative online algorithms for multiclass problems. Journal of Machine Learning Research, 3:951-991.

Koby Crammer, Alex Kulesza, and Mark Dredze. 2013. Adaptive regularization of weight vectors. Machine Learning, 91(2):155-187.

R. I. Damper, Y. Marchand, M. J. Adamson, and K. Gustafson. 1999. A comparison of letter-to-sound conversion techniques for English text-to-speech synthesis.

Marelie Davel and Olga Martirosian. 2009. Pronunciation dictionary development in resource-scarce environments. In Proceedings of Interspeech, pages 2851-2854.

Mark Dredze, Koby Crammer, and Fernando Pereira. 2008. Confidence-weighted linear classification. In Proceedings of the 25th International Conference on Machine Learning (ICML '08), pages 264-271, New York, NY, USA. ACM.

Ei Phyu Phyu Soe. 2013. Grapheme-to-phoneme conversion for Myanmar language. In The 11th International Conference on Computer Applications (ICCA2013), pages 195-200, Yangon, Myanmar.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.

Panagiota Karanasou and Lori Lamel. 2011. Automatic generation of a pronunciation dictionary with rich variation coverage using SMT methods. In Proceedings of the 12th International Conference on Computational Linguistics and Intelligent Text Processing (CICLing'11), Part II, pages 506-517, Berlin, Heidelberg. Springer-Verlag.
Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003a. Statistical phrase-based translation. In Proceedings of the Human Language Technology Conference, Edmonton, Canada.

Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003b. Statistical phrase-based translation. In HLT-NAACL.

Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL, Interactive Poster and Demonstration Sessions (ACL '07), pages 177-180, Stroudsburg, PA, USA. Association for Computational Linguistics.

Keigo Kubo, Sakriani Sakti, Graham Neubig, Tomoki Toda, and Satoshi Nakamura. 2014. Structured adaptive regularization of weight vectors for a robust grapheme-to-phoneme conversion model. IEICE Transactions on Information and Systems, E97-D(6):1468-1476.

John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML '01), pages 282-289, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.

Antoine Laurent, Paul Deléglise, and Sylvain Meignier. 2009. Grapheme to phoneme conversion using an SMT system. In INTERSPEECH, pages 708-711. ISCA.

San Lwin. 1993. Myanmar-English Dictionary. Department of the Myanmar Language Commission, Ministry of Education, Union of Myanmar.

Tohru Nagano, Shinsuke Mori, and Masafumi Nishimura. 2005. A stochastic approach to phoneme and accent estimation. In INTERSPEECH 2005 - Eurospeech, 9th European Conference on Speech Communication and Technology, pages 3293-3296, Lisbon, Portugal. ISCA.
Graham Neubig and Shinsuke Mori. 2010. Word-based partial annotation for efficient corpus construction. In The Seventh International Conference on Language Resources and Evaluation (LREC 2010), pages 2723-2727, Malta.

Josef R. Novak, Paul R. Dixon, Nobuaki Minematsu, Keikichi Hirose, Chiori Hori, and Hideki Kashioka. Improving WFST-based G2P conversion with alignment constraints and RNNLM N-best rescoring.

Josef R. Novak, Nobuaki Minematsu, and Keikichi Hirose. 2012. WFST-based grapheme-to-phoneme conversion: Open source tools for alignment, model-building and decoding. In Proceedings of the 10th International Workshop on Finite State Methods and Natural Language Processing (FSMNLP 2012), pages 45-49, Donostia-San Sebastián, Spain.

Franz Josef Och and Daniel Marcu. 2003. Statistical phrase-based translation. pages 127-133.

F. J. Och and H. Ney. 2000. Improved statistical alignment models. In ACL00, pages 440-447, Hong Kong, China.

Franz J. Och. 2003. Minimum error rate training for statistical machine translation. In Proceedings of the 41st Meeting of the Association for Computational Linguistics (ACL 2003), Sapporo, Japan.

Naoaki Okazaki. 2007. CRFsuite: a fast implementation of conditional random fields (CRFs).

Eric Ringger, Peter McClanahan, Robbie Haertel, George Busby, Marc Carmen, James Carroll, Kevin Seppi, and Deryle Lonsdale. 2007. Active learning for part-of-speech tagging: Accelerating corpus annotation. In Proceedings of the Linguistic Annotation Workshop (LAW '07), pages 101-108, Stroudsburg, PA, USA. Association for Computational Linguistics.

Tim Schlippe. 2014. Rapid Generation of Pronunciation Dictionaries for new Domains and Languages. Ph.D. thesis, Uni Karlsruhe.
Lucia Specia. 2011. Tutorial: fundamental and new approaches to statistical machine translation. In International Conference on Recent Advances in Natural Language Processing.

Andreas Stolcke. 2002. SRILM - an extensible language modeling toolkit. In Proceedings of the International Conference on Spoken Language Processing, volume 2, pages 901-904, Denver.

Christoph Tillmann. 2004. A unigram orientation model for statistical machine translation. In Proceedings of HLT-NAACL 2004: Short Papers, pages 101-104, Stroudsburg, PA, USA. Association for Computational Linguistics.

Seiya Tokui, Kenta Oono, Shohei Hido, and Justin Clayton. 2015. Chainer: a next-generation open source framework for deep learning. In Proceedings of the Workshop on Machine Learning Systems (LearningSys) at the Twenty-ninth Annual Conference on Neural Information Processing Systems (NIPS).

Yuta Tsuboi, Hisashi Kashima, Hiroki Oda, Shinsuke Mori, and Yuji Matsumoto. 2008. Training conditional random fields using incomplete annotations. In Proceedings of the 22nd International Conference on Computational Linguistics (COLING '08), Volume 1, pages 897-904, Stroudsburg, PA, USA. Association for Computational Linguistics.

Ye Kyaw Thu, Win Pa Pa, Andrew Finch, Aye Mya Hlaing, Hay Mar Soe Naing, Eiichiro Sumita, and Chiori Hori. 2015a. Syllable pronunciation features for Myanmar grapheme to phoneme conversion. In The 13th International Conference on Computer Applications (ICCA2015), pages 161-167, Yangon, Myanmar.
Ye Kyaw Thu, Win Pa Pa, Andrew Finch, Jinfu Ni, Eiichiro Sumita, and Chiori Hori. 2015b. The application of phrase based statistical machine translation techniques to Myanmar grapheme to phoneme conversion. In The Pacific Association for Computational Linguistics Conference (PACLING), pages 170-176, Legian, Bali, Indonesia.
253,628,203
Extracting relevant user behaviors from customers' transaction descriptions is one way to collect customer information. In the current text mining field, most research studies text classification; few studies address text clustering. We look for relationships between characters and words in unstructured transaction descriptions, and use word embeddings and text mining techniques to move beyond the limitation of classification, which requires categories to be defined in advance, establishing an automatic identification and analysis method and improving clustering accuracy. In this study, we perform Chinese word segmentation with Jieba on the content of credit card transaction descriptions and extract Word2Vec features, combined with Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and hierarchical agglomerative clustering in cross-combination experiments. The best prediction results, an average F1 of 67.58% over the MUC, B3 and CEAF metrics, are the most significant. Keywords: TF-IDF, Word2Vec, BERT, cosine similarity, clustering algorithms, DBSCAN, K-means, hierarchical clustering.
[ 857321, 11394500, 4891749, 11239061 ]
Jheng-Long Wu
Dept. of Data Science, Soochow University

Abstract

Extracting relevant user behaviors from customers' transaction descriptions is one way to collect customer information. In the current text mining field, most research studies text classification; few studies address text clustering. We look for relationships between characters and words in unstructured transaction descriptions, and use word embeddings and text mining techniques to move beyond the limitation of classification, which requires categories to be defined in advance, establishing an automatic identification and analysis method and improving clustering accuracy. In this study, we perform Chinese word segmentation with Jieba on the content of credit card transaction descriptions and extract Word2Vec features, combined with Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and hierarchical agglomerative clustering in cross-combination experiments. The best prediction results, an average F1 of 67.58% over the MUC, B3 and CEAF metrics, are the most significant.

Keywords: TF-IDF, Word2Vec, BERT, cosine similarity, clustering algorithms, DBSCAN, K-means, hierarchical clustering.

BERT is a popular deep learning method, and many studies report that BERT achieves good results when applied to natural language tasks. Gong et al. (2019) added a CRF layer on top of the hidden layer of a bidirectional GRU model to constrain each output and improve recognition performance (the BGRU-CRF model), and concatenated BERT embeddings with radical embeddings as the input embedding fed into the BGRU-CRF model. 王子牛, 姜猛, 高建瓴 and 陳婭先 (2019) proposed a BERT-based neural network method for named entity recognition that combines BERT with a BiLSTM-CRF model; their experiments show that, without adding any features, it clearly improves the precision, recall and F1 of Chinese entity recognition.

2.2 Word Similarity

Vector space models: 李琳 and 李輝 (2018) point out that existing methods for computing the similarity of unstructured text mainly include bag-of-words (BOW) models, topic models, ontologies and word vectors, but these methods still leave some key problems unsolved. They therefore combine dependency parsing with word embeddings and propose a semantic similarity method based on a concept vector space. 曹錫 (2021) used three pre-trained language models, BERT, RoBERTa and ALBERT, in experiments on retrieving legal judgments describing similar case situations, with three algorithms: cosine similarity, Euclidean distance and the inner product. Retrieval quality was evaluated with the Average Entropy of the Offence-charged Clustering (AEOC); a smaller AEOC is better, meaning the categories collected within each cluster are more convergent.

2.3 Bag-of-Words Model

TF-IDF (Term Frequency-Inverse Document Frequency) is a traditional statistical machine learning method for information retrieval and text mining, used to estimate how important a word is to a document. 王美淋 (2020) proposed a two-stage model combining extractive and abstractive summarization for NLP tasks, with experimental results better than a Transformer. 劉賢鈞 (2019) studied fake news prediction on the Kaggle Fake News data set, using TF-IDF to obtain lexical features of the text and linear discriminant analysis (LDA) for dimensionality reduction, comparing four classifiers: Random Forests, XGBoost, Naïve Bayes and logistic regression. The experiments showed logistic regression to be the best classification method, with accuracy as high as 96.32%. Although TF-IDF is simple and quick to understand, it evaluates the importance of a word in a document using term frequency alone and lacks a holistic view; sometimes key words occur only rarely, and TF-IDF cannot express the importance of word position and surrounding context.

2.4 Clustering Model Evaluation

黃宇翔, 王品鈞 and 方志強 (2017), considering that today's data comes with diverse attribute types, proposed improving the processing performance of the K-means algorithm by running K-means separately on numeric, categorical and ordinal attributes to obtain better initial centers, and then combining them to find the centroids. 黃郁豪 and 張芳仁 (2017) investigated how, in an environment with abundant online information, readers can more easily obtain related articles of interest, increasing their willingness to click. Their study used Word2Vec and Doc2Vec models for word vectors, took the terms in the top 3%, 5% and 7% of TF-IDF weights of each news item as feature keywords, multiplied them with the Word2Vec vectors to produce news term vectors, clustered the articles with hierarchical agglomerative clustering, and evaluated the results with purity and entropy.

3 Methodology

In this section, we describe data collection and cleaning, data annotation, and the models and learning methods used.

3.1 Data Collection and Cleaning

This study uses a bank's credit card transactions from 2020 as its data source. The bank classifies the data into 15 major categories according to the Merchant Category Codes (MCC) defined by the VISA and MasterCard international organizations. Excluding records that contain customers' personal data and foreign transactions, which are outside the scope of this study, we planned to collect a sample of 100,000 unique domestic card transaction records across the top ten consumption categories. To avoid some consumption categories having too few records to be sampled, any single category whose population accounts for less than 2% of the total was collected in full, and the others were sampled in proportion to their population counts.

Besides the merchant name, most card transactions carry information such as the branch, the payment instrument used, and the number or amount of installments. The following examples illustrate each case.

• Ordinary merchants: the same merchant may appear under different marketing channels or as a seller name on an online shopping platform, and the Chinese descriptions transmitted by merchants have no unified format, e.g. 「富邦 momo-EC」, 「愛貝金流-momore25」.

• Merchants with branches or installments: these records are often truncated during transmission because the transaction description is too long, leaving the branch information incomplete, e.g. 「三澧-MoMo Paradise 復興牧」.

• Merchants accepting mobile payment instruments: non-cash QR-code mobile payments, or mobile payment apps developed by banks and merchants themselves, e.g. 「全聯門市-PX Pay」, 「街口電支-2派克脆皮雞排」.

• Merchants or branches offering automatic top-up: currently the automatic top-up functions of three companies, EasyCard (悠遊卡), iPASS (一卡通) and i-cash (愛金卡), e.g. 「悠遊卡自動加值-比漾廣場摩斯漢堡」.

3.2 Data Processing for Clustering Model Evaluation

To bring the data to a consistent format, we cleaned the noise in records containing installment information and removed unnecessary information to improve data quality. Before feature extraction, we segmented the text with Jieba and CKIP Transformers. Transformers are mostly used for tasks over sequential data; unlike RNNs, a Transformer does not need to process the data in order, which reduces training time. In many recent NER tasks, Transformers have replaced the older recurrent neural network models and quickly become the model of choice for NLP problems.
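As a minimal illustration of the segmentation step, using Jieba's standard API (the transaction string is one of the examples above; this is a sketch, not the actual preprocessing script):

```python
import jieba

description = "悠遊卡自動加值-比漾廣場摩斯漢堡"
tokens = jieba.lcut(description)   # returns a token list, e.g. ['悠遊卡', '自動', '加值', ...]
print(tokens)
```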
3.3 Feature Extraction

On the data set produced by using the word segmentation tools to identify names with specific meanings in the content, this study adopts three methods, TF (Term Frequency), TF-IDF and Word2Vec, to convert text into features, extracting the components of interest and turning the words into word vectors, together with the BERT Chinese pre-trained model, and computes similarities for building the clustering models.

• TF, TF-IDF: over the segmented words, TF counts how many times a word occurs in the transaction description, while TF-IDF produces keywords by estimating how important a word is to the description. We build vocabularies of distinct words for TF and TF-IDF separately and produce a feature matrix for each, used later to compute the similarity between every pair of records.

• Word2Vec: takes context into account and projects words into a vector space. Its training models come in two forms, CBOW and Skip-gram; this study adopts the CBOW architecture, which, given the transaction description words adjacent to a merchant name, predicts the probability that the merchant name occurs.

• BERT: this study adopts the Chinese-MacBERT-Base pre-trained model proposed by Cui et al. (2021). Every merchant transaction record in the data set is fed in to produce a 768-dimensional vector per record, and the similarity between every two records is computed to produce a similarity matrix.

3.4 Training and Model Building

This study computes the pairwise similarities of the transaction descriptions with the four feature methods, converts the similarity matrix into distance features, and uses them as the clustering basis of three algorithms: density-based clustering (DBSCAN), DBSCAN + K-means, and DBSCAN + hierarchical clustering. The data is divided into several clusters, with the goal of finding clusters with small within-cluster differences and large between-cluster differences, and the models are trained and built together with the feature extraction techniques.

3.5 Evaluation Metrics

This study evaluates with coreference resolution metrics. Coreference resolution is the process of assigning the expressions in a text that refer to the same entity to one equivalence set; the assigned expressions are called mentions, and the resulting equivalence set is called a coreference chain. In coreference resolution, mentions include common nouns, proper nouns and pronouns, so resolution of explicit pronouns can be seen as the sub-problem of coreference resolution restricted to pronouns. We use the most common evaluation metrics for the task: MUC, B3 and CEAF.

• MUC: the MUC score counts the minimum number of links that must be inserted or deleted to map the predicted coreference chains onto the gold chains. Its drawback is that it cannot measure how well a system predicts singleton entities.

• B3: the B3 algorithm overcomes the drawback of MUC. It computes precision and recall for each mention separately and takes the average over all mentions as the final score.

• CEAF: CEAF is an evaluation algorithm based on entity similarity. Compared with the previous two metrics, it expresses the quality of the predicted coreference partition more intuitively, by comparing the partitions cluster by cluster.

3.6 Parameter Settings

The radius around a data point affects the number of clusters, and the number of clusters directly affects the results. K-means and AGC require the number of clusters to be set in advance; we take it from the DBSCAN clustering algorithm. Given a radius ε, with at least one data point within ε (MinPts = 1), DBSCAN yields n clusters. We tuned the parameters on the validation set and observed how the number of clusters decreases for the different feature methods. The tables below list the parameter settings.

Feature method | Parameter setting
TF | The number of times a word occurs in the transaction description.
TF-IDF | Recompute the IDF weights; when a document does not contain the keyword, no smoothing is applied to the IDF (the case of a zero denominator is not considered).
Word2Vec | CBOW, predicting from the 5 words to the left and right of the target word; word vectors mapped into a 300-dimensional space; number of iterations set to 50.
BERT | Model and tokenizer use the Chinese-MacBERT-Base pre-trained model, producing a word vector in a 768-dimensional space for every transaction record.

Table 1. Feature extraction parameter settings.

Clustering algorithm | Parameter setting
DBSCAN | At least one data point within the radius ε. The range of ε is TF = 1.1-2.9 in steps of 0.2, TF-IDF = 1.1-2.9 in steps of 0.2, Word2Vec = 1.1-3.8 in steps of 0.3, and BERT = 1.1-2.9 in steps of 0.2. The distance is the default, Euclidean. With these settings, n clusters are obtained.
DBKM | Takes the n clusters obtained by DBSCAN as its parameter; the distance is the default, Euclidean; the random seed is 0.
DBAGC | Takes the n clusters obtained by DBSCAN as its parameter; the distance between data points is Euclidean, and the distance between clusters uses the Ward method.

Table 2. Clustering algorithm parameter settings.
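A minimal sketch of how this pipeline could be wired together with gensim and scikit-learn under the parameter settings of Tables 1 and 2 (the variable segmented_docs is a hypothetical list of token lists produced by the segmentation step; this is an illustration, not the original implementation):

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import DBSCAN, KMeans, AgglomerativeClustering

# CBOW Word2Vec: 300 dimensions, window of 5 words each side, 50 epochs
w2v = Word2Vec(sentences=segmented_docs, vector_size=300, window=5,
               sg=0, epochs=50, min_count=1)

# One vector per transaction record: average of its token vectors
X = np.array([np.mean([w2v.wv[t] for t in doc], axis=0)
              for doc in segmented_docs])

for eps in np.arange(1.1, 3.8 + 1e-9, 0.3):        # radius sweep for Word2Vec
    db = DBSCAN(eps=eps, min_samples=1, metric='euclidean').fit(X)
    n = len(set(db.labels_))                        # MinPts=1 -> every point clustered
    dbkm = KMeans(n_clusters=n, random_state=0).fit(X)              # DBKM
    dbagc = AgglomerativeClustering(n_clusters=n, linkage='ward').fit(X)  # DBAGC
    # evaluate MUC / B3 / CEAF against the gold merchant labels here
```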
4 Experimental Results

4.1 Analysis of Feature Methods and Algorithms

Based on the word segmentation tool plus the three feature methods, together with the BERT feature extraction results and the numbers of clusters obtained at different radii, we apply the parameter settings of Table 2 and feed the features into the three algorithms DBSCAN, DBKM and DBAGC. For each we compute the evaluation metrics and take the maximum of the average F1 as the best DBSCAN radius setting. The experimental results are analyzed in Figures 1 to 4 below.

Figure 1. Evaluation results of the TF feature method.
Figure 2. Evaluation results of the TF-IDF feature method.
Figure 3. Evaluation results of the Word2Vec feature method.
Figure 4. Evaluation results of the BERT feature method.

With TF, at radius 1.9, DBKM has the highest F1 at 59.2; DBSCAN and DBAGC are both highest at radius 1.7, with F1 of 59.8 and 59.3 respectively. The highest average F1 of the three algorithms is 59.3, so the best radius for the TF feature method is set to 1.7.

With TF-IDF, at radius 1.9, the F1 of DBKM, DBSCAN and DBAGC are all highest, at 60.3, 58.3 and 59.8 respectively. The average F1 of the three algorithms is 59.4. We find that the average F1 of DBSCAN is consistently lower and, the fewer the clusters, the more clearly it falls below DBKM and DBAGC; TF-IDF is less suitable for DBSCAN.

With Word2Vec, at radius 2.6, DBKM has the highest F1 at 61.7; DBSCAN is highest at radius 2.9, with an F1 of 73.0; DBAGC is highest at radius 2.6, with an F1 of 60.9. Although the average F1 of DBKM and DBAGC are similar, with no clear difference, the highest average F1 of the three algorithms is 65.1, so the best radius for Word2Vec is set to 2.9.

With the BERT model, at radius 2.9, DBKM has the highest F1 at 41.6; DBSCAN is highest at radius 1.5, with an F1 of 55.2; DBAGC is highest at radius 2.9, with an F1 of 43.3. DBSCAN reacts more strongly to the number of clusters, the results of DBKM and DBAGC are similar, and we find that the fewer the clusters, the higher the average F1. The highest average F1 of the three algorithms is 43.0, so the best radius for BERT is set to 1.7. BERT model training mainly learns from complete sentences rather than short merchant transaction descriptions; although on the training set DBSCAN reaches the highest average F1 of 55.2%, the average F1 of DBKM and DBAGC never exceeds 45%, so overall this feature method is less satisfactory.

4.2 Test Set Results

The test set has 264 gold merchant clusters, of which 157 are single-merchant clusters, 59.5% of the gold total. From the F1 results of Table 3 we observe the following.

• The average F1 of the three algorithms is close to the experimental results on the validation set; the Word2Vec feature method with the DBSCAN algorithm has the highest average F1, 67.58.

• The BERT feature extraction is affected by the size of the data set; tested with the radius from the training-set experiments, it yields only 5 clusters, and its average F1 of 36.93 is the lowest.

• The MUC differences across feature methods and algorithms are less significant, mainly because, when computing precision and recall, the subtraction of 1 per single-merchant cluster cancels out, so the metric cannot measure accuracy over single-merchant clusters.

• Word2Vec + DBSCAN produces the most clusters; its B3 precision is lower than DBKM and DBAGC because single-merchant clusters are excluded. CEAF, after adjustment, includes the single-merchant clusters, and its CEAF F1 is 30% and 20% higher than TF-IDF + DBKM and TF + DBAGC respectively; the CEAF recall of TF-IDF + DBKM is the lowest at 10.56%. The number of single-merchant clusters has the most obvious influence on CEAF and is the main factor affecting the experimental results.

• The average F1 of Word2Vec + DBSCAN is 6.82% higher than TF + DBAGC and 9.6% higher than TF-IDF + DBKM; the feature extraction method clearly influences the evaluation results.

Summarizing the overall test results, Chinese segmentation with Jieba plus Word2Vec feature extraction, combined with the DBSCAN clustering algorithm, performs best with an average F1 of 67.58%; the BERT feature method, limited by its different model training approach, performs worst.

Algorithm | DBKM | DBAGC | DBSCAN | DBSCAN
Feature | TF-IDF | TF | Word2Vec | BERT
Radius | 1.9 | 1.7 | 2.9 | 1.7
Total clusters | 60 | 112 | 145 | 5
Single-merchant | 0 | 4 | 93 | 2
Multi-merchant | 60 | 108 | 52 | 3
Precision (MUC/B3/CEAF/avg) | 85.66/71.89/46.49/68.01 | 85.52/80.34/45.53/70.47 | 89.77/48.37/66.32/68.15 | 86.24/8.65/61.69/52.19
Recall (MUC/B3/CEAF/avg) | 96.38/61.05/10.56/56.00 | 93.50/55.75/19.32/56.19 | 96.32/89.51/36.42/74.08 | 99.94/99.77/1.17/66.96
F1 (MUC/B3/CEAF/avg) | 90.70/66.03/17.22/57.98 | 89.33/65.83/27.13/60.76 | 92.93/62.80/47.02/67.58 | 92.58/15.93/2.29/36.93

Table 3. Evaluation of the recognition performance of each model on the test set.

5 Conclusion

The overall results show that the Word2Vec model trained on Jieba-segmented words, with the DBSCAN algorithm and its radius setting, most easily obtains single-merchant clusters, clusters merchant names best, and is helpful for automatic clustering applications. By the CEAF metric, the test set has 264 gold merchant clusters: DBSCAN predicts 145 merchant clusters, with a recall of 36.42; DBAGC predicts 112, with a recall of 19.32; DBKM predicts 60, with a recall of 10.56. The gap between the gold and predicted numbers of clusters affects CEAF recall most clearly. For the B3 metric, the gold clusters of the test set include 107 multi-merchant clusters; DBSCAN predicts 52 multi-merchant clusters, with a precision of 48.37; DBKM predicts 60, with a precision of 71.89; DBAGC predicts 108, with a precision of 80.34. The number of multi-merchant clusters influences the B3 metric; when there are many single (non-chain) merchants, the metric may lose reliability.
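To make the cluster-level scores above concrete, the following is a minimal sketch of per-mention B3 precision and recall (Bagga and Baldwin, 1998); it is an illustration, not the evaluation code used in these experiments:

```python
def b_cubed(gold, pred):
    """gold, pred: lists mapping record index -> cluster id."""
    def clusters(labels):
        out = {}
        for i, c in enumerate(labels):
            out.setdefault(c, set()).add(i)
        return out
    g, p = clusters(gold), clusters(pred)
    n = len(gold)
    # For each record, compare the overlap of its predicted and gold clusters
    prec = sum(len(g[gold[i]] & p[pred[i]]) / len(p[pred[i]]) for i in range(n)) / n
    rec = sum(len(g[gold[i]] & p[pred[i]]) / len(g[gold[i]]) for i in range(n)) / n
    return prec, rec, 2 * prec * rec / (prec + rec)

print(b_cubed([0, 0, 1, 1], [0, 0, 0, 1]))  # toy example: (0.667, 0.75, 0.706)
```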
A. Bagga and B. Baldwin. 1998. Algorithms for scoring coreference chains. In The First International Conference on Language Resources and Evaluation Workshop on Linguistics Coreference, volume 1, pages 563-566.

Y. Cui, W. Che, T. Liu, B. Qin, and Z. Yang. 2021. Pre-training with whole word masking for Chinese BERT. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:3504-3514. doi:10.1109/TASLP.2021.3124365

C. Gong, J. Tang, S. Zhou, Z. Hao, and J. Wang. 2019. Chinese named entity recognition with BERT. DEStech Transactions on Computer Science and Engineering. doi:10.12783/dtcse/cisnrc2019/33299

J. Lafferty, A. McCallum, and F. C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data.

K. Lee, L. He, and L. Zettlemoyer. 2018. Higher-order coreference resolution with coarse-to-fine inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 687-692. doi:10.18653/v1/N18-2108

Y. Liu. 2019. Fine-tune BERT for extractive summarization. arXiv:1903.10318.

X. Luo. 2005. On coreference resolution performance metrics. In Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 25-32.

T. Mikolov, K. Chen, G. Corrado, and J. Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.

Y. Sun, S. Wang, Y. Li, S. Feng, H. Tian, H. Wu, and H. Wang. 2020. ERNIE 2.0: A continual pre-training framework for language understanding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, number 05, pages 8968-8975. doi:10.1609/aaai.v34i05.6428
M. Vilain, J. Burger, J. Aberdeen, D. Connolly, and L. Hirschman. 1995. A model-theoretic coreference scoring scheme. In Sixth Message Understanding Conference (MUC-6): Proceedings of a Conference Held in Columbia, Maryland, November 6-8, 1995.

X. Xiao, S.-Z. Ye, L.-C. Yu, and K. R. Lai. 2017. 應用詞向量於語言樣式探勘之研究 (Mining Language Patterns Using Word Embeddings) [In Chinese]. In Proceedings of the 29th Conference on Computational Linguistics and Speech Processing (ROCLING 2017), pages 230-243.

Z. Zhang, X. Han, Z. Liu, X. Jiang, M. Sun, and Q. Liu. 2019. ERNIE: Enhanced language representation with informative entities. arXiv preprint arXiv:1905.07129.

王子牛, 姜猛, 高建瓴, and 陈娅先. 2019. A Chinese named entity recognition method based on BERT [In Chinese]. Computer Science, 46(S2):138-142.

王美淋. 2020. A study combining extractive and abstractive two-stage models to improve summarization performance [In Chinese].

车万翔, 刘挺, 秦兵, and 李生. 2004. Retrieval of similar Chinese sentences based on improved edit distance [In Chinese]. High Technology Letters, 14(7):15-19.

吳政育 and 陳冠宇. 2019. EBSUM: a robust extractive summarization method based on BERT [In Chinese]. International Journal of Computational Linguistics and Chinese Language Processing, 24(2):19-35.

张占英 and 王中立. 2003. Recognition of abbreviated company names in Chinese text [In Chinese]. Journal of Xuchang University, 22(2):99-101.

李琳 and 李辉. 2018. A text similarity computation method based on a concept vector space [In Chinese]. Data Analysis and Knowledge Discovery, 5. doi:10.11925/infotech.2096-3467.2018.0007

施瑞朗. 2018. Research on text classification algorithms based on social platform data [In Chinese]. Electronic Science and Technology, 31(10):69-70.

胡若云, 孙钢, 丁麒, 沈然, and 谷泓杰. 2021. A customer-service dialogue text mining algorithm based on a bidirectional propagation framework [In Chinese]. Journal of Shenyang University of Technology.

郭家清, 蔡東風, 王智超, and 劉浩公. 2007. A person-name recognition method based on conditional random fields [In Chinese]. Journal of Communication and Computer, 4(2):22-25.

黃宇翔, 王品鈞, and 方志強. 2017. A K-means clustering algorithm for mixed-type data sets [In Chinese]. Journal of e-Business, 19(1):1-28. doi:10.6188/JEB.2017.19(1).01

黃郁豪 and 張芳仁. 2017. A comparative study and application of news clustering methods [In Chinese]. Doctoral dissertation.

簡國峻 and 張嘉惠. 2019. Applying memory-augmented conditional random fields, deep learning and automated lexical features to Chinese named entity recognition [In Chinese]. International Journal of Computational Linguistics and Chinese Language Processing, 24(1):1-14.
2,913,402
Impact of Initiative on Collaborative Problem Solving *
Even though collaboration in peer learning has been shown to have a positive impact for students, there has been little research into collaborative peer learning dialogues. We analyze such dialogues in order to derive a model of knowledge co-construction that incorporates initiative and the balance of initiative. This model will be embedded in an artificial agent that will collaborate with students.
[ 1006472, 2659316, 8704366 ]
Impact of Initiative on Collaborative Problem Solving *

Cynthia Kersey ckerse2@uic.edu
Department of Computer Science, University of Illinois at Chicago, Chicago, Illinois 60613

Even though collaboration in peer learning has been shown to have a positive impact for students, there has been little research into collaborative peer learning dialogues. We analyze such dialogues in order to derive a model of knowledge co-construction that incorporates initiative and the balance of initiative. This model will be embedded in an artificial agent that will collaborate with students.

Introduction

While collaboration in dialogue has long been researched in computational linguistics (Chu-Carroll and Carberry, 1998; Constantino-González and Suthers, 2000; Jordan and Di Eugenio, 1997; Lochbaum and Sidner, 1990; Soller, 2004; Vizcaíno, 2005), there has been little research on collaboration in peer learning. However, this is an important area of study because collaboration has been shown to promote learning, potentially for all of the participants (Tin, 2003). Additionally, while there has been a focus on using natural language for intelligent tutoring systems (Evens et al., 1997; Graesser et al., 2004), peer-to-peer interactions are notably different from those of expert-novice pairings, especially with respect to the richness of the problem-solving deliberations and negotiations. Using natural language in collaborative learning could have a profound impact on the way in which educational applications engage students in learning.

* This work is funded by NSF grants 0536968 and 0536959.

There are various theories as to why collaboration in peer learning is effective, but one that is commonly referenced is co-construction (Hausmann et al., 2004). This theory is a derivative of constructivism, which proposes that students construct an understanding of a topic by interpreting new material in the context of prior knowledge (Chi et al., 2001). Essentially, students who are active in the learning process are more successful. In a collaborative situation this suggests that all collaborators should be active participants in order to have a successful learning experience. Given the lack of research in modeling peer learning dialogues, there has been little study of what features of dialogue characterize co-construction. I hypothesize that since instances of co-construction closely resemble the concepts of control and initiative, these dialogue features can be used as identifiers of co-construction. While there is some dispute as to the definitions of control and initiative (Jordan and Di Eugenio, 1997; Chu-Carroll and Brown, 1998), it is generally accepted that one or more threads of control pass between participants in a dialogue. Intuitively, this suggests that tracking the transfer of control can be useful in determining when co-construction is occurring. Frequent transfer of control between participants would indicate that they are working together to solve the problem and perhaps also to construct knowledge. The ultimate goal of this research is to develop a model of co-construction that incorporates initiative and the balance of initiative. This model will be embedded in KSC-PaL, a natural language based peer agent that will collaborate with students to solve problems in the domain of computer science data structures.
Section 4 describes the future development of the computational model and artificial agent. This is followed by the conclusion in section 5. Data Collection In a current research project on peer learning, we have collected computer-mediated dialogues between pairs of students solving program comprehension and error diagnosis problems in the domain of data structures. The data structures that we are focusing on are (1) linked lists, (2) stacks and (3) binary search trees. This domain was chosen because data structures and their related algorithms are one of the core components of computer science education and a deep understanding of these topics is essential to a strong computer science foundation. Interface A computer mediated environment was chosen to more closely mimic the situation a student will have to face when interacting with KSC-PaL, the artificial peer agent. After observing face-to-face interactions of students solving these problems, I developed an interface consisting of four distinct areas (see 1. Problem display: Displays the problem description that is retrieved from a database. 2. Code display: Displays the code from the problem statement. The students are able to make changes to the code, such as crossing-out lines and inserting lines, as well as undoing these corrections. 3. Chat Area: Allows for user input and an interleaved dialogue history of both students participating in the problem solving. The history is logged for analysis. 4. Drawing area: Here users can diagram data structures to aid in the explanation of parts of the problem being solved. The drawing area has objects representing nodes and links. These objects can then be placed in the drawing area to build lists, stacks or trees depending on the type of problem being solved. The changes made in the shared workspace (drawing and code areas) are logged and propagated to the partner's window. In order to prevent users from making changes at the same time, I implemented a system that allows only one user to draw or make changes to code at any point in time. In order to make a change in the shared workspace, a user must request the "pencil" (Constantino-González and Suthers, 2000). If the pencil is not currently allocated to her partner, the user receives the pencil and can make changes in the workspace. Otherwise, the partner is informed, through both text and an audible alert, that his peer is requesting the pencil. The chat area, however, allows users to type at the same time, although they are notified by a red circle at the top of the screen when their partner is typing. While, this potentially results in interleaved conversations, it allows for more natural communication between the peers. Using this interface, we collected dialogues for a total of 15 pairs where each pair was presented with five problems. Prior to the collaborative problem solving activities, the participants were individually given pre-tests and at the conclusion of the session, they were each given another test, the posttest. During problem solving the participants were seated in front of computers in separate rooms and all problem solving activity was conducted using the computer-mediated interface. The initial exercise let the users become acquainted with the interface. The participants were allowed to ask questions regarding the interface and were limited to 30 minutes to solve the problem. The remaining exercises had no time limits, however the total session, including pre-test and post-test could not exceed three hours. 
Therefore not all pairs completed all five problems. Initial Analysis After the completion of data collection, I established that the interface and task were conducive to learning by conducting a paired t-test on the pre-test and post-test scores. This analysis showed that the posttest score was moderately higher than the pre-test score (t(30)=2.83; p=0.007; effect size = 0.3). I then performed an initial analysis of the collected dialogues using linear regression analysis to identify correlations between actions of the dyads and their success at solving the problems presented to them. Besides the post-test, students solutions to the problems were scored, as well; this is what we refer to as problem solving success. The participant actions were also correlated with post-test scores and learning gains (the difference between post-test score and pre-test score). The data that was analyzed came from three of the five problems for all 15 dyads, although not all dyads attempted all three problems. Thus, I analyzed a total of 40 subdialogues. The problems that were analyzed are all error diagnosis problems, but each problem involves a different data structure -linked list, array-based stack and binary search tree. Additionally, I analyzed the relationship between initiative and post-test score, learning gain and successful problem solving. Before embarking on an exhaustive manual annotation of initiative, I chose to get a sense of whether initiative may indeed affect learning in this context by automatically tagging for initiative using an approximation of Walker and Whittaker's utterance based allocation of control rules (Walker and Whittaker, 1990). In this scheme, first each turn in the dialogue must be tagged as either: (1) an assertion, (2) a command, (3) a question or (4) a prompt (turns not expressing propositional content). This was done automatically, by marking turns that end in a question mark as questions, those that start with a verb as commands, prompts from a list of commonly used prompts (e.g. ok, yeah) and the remaining turns as assertions. Control is then allocated by using the following rules based on the turn type: 1. Assertion: Control is allocated to the speaker unless it is a response to a question. 2. Command: Control is allocated to the speaker. 3. Question: Control is allocated to the speaker, unless it is a response to a question or a command. 4. Prompt: Control is allocated to the hearer. Since the dialogues also have a graphics component, all drawing and code change moves had control assigned to the peer drawing or making the code change. The results of the regression analysis are summarized in tables 1 and 2, with blank cells representing non-significant correlations. Pre-test score, which represents the student's initial knowledge and/or aptitude in the area, was selected as a feature because it is important to understand the strength of the correlation between previous knowledge and post test score when identifying additional correlating features (Yap, 1979). The same holds for the time related features (pencil time and total time). The remaining correlations and trends to correlation suggest that participation is an important factor in successful collaboration. Since a student is more likely to take initiative when actively participating in prob- lem solving, potentially there there is a relation between these participation correlations and initiative. An analysis of initiative shows that there is a correlation of initiative and successful collaboration. 
In problem 3, learning gain positively correlates with the number of turns where a student has initiative (R 2 = 0.156, p = 0.037). And in problem 4, taking initiative through drawing has a positive impact on post-test score (R 2 = 0.155, p = 0.047). Annotation Since the preliminary analysis showed a correlation of initiative with learning gain, I chose to begin a thorough data analysis by annotating the dialogues with initiative shifts. Walker and Whittaker claim that initiative encompasses both dialogue control and task control (Walker and Whittaker, 1990), however, several others disagree. Jordan and Di Eugenio propose that control and initiative are two separate features in collaborative problem solving dialogues (Jordan and Di Eugenio, 1997). While control and initiative might be synonymous for the dialogues analyzed by Walker and Whittaker where a masterslave assumption holds, it is not the case in collaborative dialogues where no such assumption exists. Jordan and Di Eugenio argue that the notion of control should apply to the dialogue level, while initiative should pertain to the problem-solving goals. In a similar vein, Chu-Carroll and Brown also argue for a distinction between control and initiative, which they term task initiative and dialogue initiative (Chu-Carroll and Brown, 1998). Since there is no universally agreed upon definition for initiative, I have decided to annotate for both dialogue initiative and task initiative. For dialogue initiative annotation, I am using Walker and Whittaker's utterance based allocation of control rules (Walker and Whittaker, 1990), which are widely used to identify dialogue initiative. For task initiative, I have derived an annotation scheme based on other research in the area. According to Jordan and Di Eugenio, in problem solving (task) initiative the agent takes it upon himself to address domain goals by either (1)proposing a solution or (2)reformulating goals. In a similar vein, Guinn (Guinn, 1998) defines task initiative as belonging to the participant who dictates which decomposition of the goal will be used by both participants during problem-solving. A third definition is from Chu-Carroll and Brown. They suggest that task initiative tracks the lead in development of the agent's plan. Since the primary goal of the dialogues studied by Chu-Carroll and Brown is to develop a plan, this could be re-worded to state that task initiative tracks the lead in development of the agent's goal. Combining these definitions, task initiative can be defined as any action by a participant to either achieve a goal directly, decompose a goal or reformulate a goal. Since the goals of our problems are understanding and potentially correcting a program, actions in our domain that show task initiative include actions such as explaining what a section of code does or identifying a section of code that is incorrect. Two coders, the author and an outside annotator, have coded 24 dialogues (1449 utterances) for both dialogue and task initiative. This is approximately 45% of the corpus. The resulting intercoder reliability, measured with the Kappa statistic, is 0.77 for dialogue initiative annotation and 0.68 for task initiative, both of which are high enough to support tentative conclusions. 
Using multiple linear regression analysis on these annotated dialogues, I found that, in a subset of the problems, there was a significant correlation between post-test score (after removing the effects of pre-test scores) and the number of switches in dialogue initiative (R² = 0.157, p = 0.014). Also, in the same subset, there was a correlation between post-test score and the number of turns in which a student had initiative (R² = 0.077, p = 0.065). This suggests that both taking the initiative and taking turns in leading problem solving result in learning.

Given my hypothesis that initiative can be used to identify co-construction, the next step is to annotate the dialogues using a subset of the DAMSL scheme (Core and Allen, 1997) to identify episodes of co-construction. Once annotated, I will use machine learning techniques to identify co-construction using initiative as a feature. Since this is a classification problem, algorithms such as Classification Based on Associations (Liu, 2007) will be used. Additionally, I will explore algorithms that take into account the sequence of actions, such as hidden Markov models or neural networks.

Computational Model

The model will be implemented as an artificial agent, KSC-PaL, that interacts with a peer in collaborative problem solving using an interface similar to the one that was used in data collection (see Figure 1). This agent will be an extension of the TuTalk system, which is designed to support natural language dialogues for educational applications (Jordan et al., 2006). TuTalk contains a core set of dialogue system modules that can be replaced or enhanced as required by the application. The core modules are understanding and generation, a dialogue manager (loosely characterized as a finite state machine with a stack), and a student model. To implement the peer agent, I will replace TuTalk's student model and add a planner module.

Managing the information state of the dialogue (Larsson and Traum, 2000), which includes the beliefs and intentions of the participants, is important in the implementation of any dialogue agent. KSC-PaL will use a student model to assist in management of the information state. This student model tracks the current state of problem solving as well as estimates the student's knowledge of concepts involved in solving the problem by incorporating problem solution graphs (Conati et al., 2002). Solution graphs are Bayesian networks where each node represents either an action required to solve the problem or a concept required as part of problem solving. After analyzing our dialogues, I realized that the solutions to the problems in our domain are different from standard problem-solving tasks. Given that our tasks are program comprehension tasks and that the dialogues are peer led, there can be no assumption as to the order in which a student will analyze code statements. Therefore, a graph comprised of connected subgraphs that each represent a section of the code more closely matches what I observed in our dialogues. So, we are using a modified version of solution graphs that has clusters of nodes representing facts that are relevant to the problem. Each cluster contains facts that are dependent on one another. For example, one cluster represents facts related to the push method for a stack. As the code is written, it would be impossible to comprehend the method without understanding the prefix notation for incrementing. A user's utterances and actions can then be matched to the nodes within the clusters.
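As a loose illustration of this clustered structure (the actual student model is a Bayesian network; the keyword-based matching, cluster contents, and data layout here are hypothetical simplifications), matching utterances to fact nodes might look like:

```python
from dataclasses import dataclass, field

@dataclass
class FactNode:
    name: str
    cues: set              # surface cues for matching utterances (simplified)
    known: bool = False    # belief that the student grasps this fact

@dataclass
class Cluster:
    topic: str
    nodes: list = field(default_factory=list)

# Hypothetical cluster for the stack push method discussed above.
push_cluster = Cluster("stack push method", [
    FactNode("push stores the item at the top", {"push", "top", "stores"}),
    FactNode("prefix ++ increments the index first", {"++", "prefix", "increment"}),
])

def match_utterance(utterance, clusters):
    """Mark a fact as known when the utterance mentions one of its cues."""
    tokens = set(utterance.lower().split())
    for cluster in clusters:
        for node in cluster.nodes:
            if tokens & node.cues:
                node.known = True

match_utterance("the prefix ++ bumps the index before push stores the item",
                [push_cluster])
print([n.name for n in push_cluster.nodes if n.known])
```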
This provides the agent with information related to the student's knowledge as well as the current topic under discussion.

A planner module will be added to TuTalk to provide KSC-PaL with a more sophisticated method of selecting scripts. Unlike TuTalk's dialogue manager, which uses a simple matching of utterances to concepts to determine the script to be followed, KSC-PaL's planner will incorporate the results of the data analysis above and will also consider the status of the student's knowledge, as reflected in the student model, when making script selections. This planner will potentially be a probabilistic planner such as the one in (Lu, 2007).

Conclusion

In conclusion, we are developing a computational model of knowledge construction which incorporates initiative and the balance of initiative. This model will be embedded in an artificial agent that collaborates with students to solve data structure problems. As knowledge construction has been shown to promote learning, this research could have a profound impact on educational applications by changing the way in which they engage students in learning.

Figure 1: The data collection interface for problems in the domain of computer science data structures.

Table 1: Post-test Score Predictors (R²)
- Pre-Test: 0.530 (p=0.005) for Prob. 3 (Lists); 0.657 (p=0.000) for Prob. 4 (Stacks); 0.663 (p=0.000) for Prob. 5 (Trees)
- Words: 0.189 (p=0.021)
- Words per Turn: 0.141 (p=0.049)
- Pencil Time: 0.154 (p=0.039)
- Total Turns: 0.108 (p=0.088)
- Code Turns: 0.136 (p=0.076)

Table 2: Problem Score Predictors (R²)

Acknowledgments

The graphical interface is based on a graphical interface developed by Davide Fossati for an intelligent tutoring system in the same domain.

References

Michelene T. H. Chi, Stephanie A. Siler, Heisawn Jeong, Takashi Yamauchi, and Robert G. Hausmann. 2001. Learning from human tutoring. Cognitive Science, 25(4):471-533.

Jennifer Chu-Carroll and Michael K. Brown. 1998. An evidential model for tracking initiative in collaborative dialogue interactions. User Modeling and User-Adapted Interaction, 8(3-4):215-253.

Jennifer Chu-Carroll and Sandra Carberry. 1998. Collaborative response generation in planning dialogues. Computational Linguistics, 24(3):355-400.

Cristina Conati, Abigail Gertner, and Kurt VanLehn. 2002. Using bayesian networks to manage uncertainty in student modeling. User Modeling and User-Adapted Interaction, 12(4):371-417.

María de los Angeles Constantino-González and Daniel D. Suthers. 2000. A coached collaborative learning environment for entity-relationship modeling. Intelligent Tutoring Systems, pages 324-333.
Mark G. Core and James F. Allen. 1997. Coding dialogues with the DAMSL annotation scheme. In David Traum, editor, Working Notes: AAAI Fall Symposium on Communicative Action in Humans and Machines, pages 28-35, Menlo Park, California. American Association for Artificial Intelligence.

Martha W. Evens, Ru-Charn Chang, Yoon Hee Lee, Leem Seop Shim, Chong Woo Woo, Yuemei Zhang, Joel A. Michael, and Allen A. Rovick. 1997. Circsim-tutor: an intelligent tutoring system using natural language dialogue. In Proceedings of the Fifth Conference on Applied Natural Language Processing, pages 13-14, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.

Arthur C. Graesser, Shulan Lu, George Tanner Jackson, Heather Hite Mitchell, Mathew Ventura, Andrew Olney, and Max M. Louwerse. 2004. Autotutor: A tutor with dialogue in natural language. Behavior Research Methods, Instruments, & Computers, 36:180-192, May.

Curry I. Guinn. 1998. An analysis of initiative selection in collaborative task-oriented discourse. User Modeling and User-Adapted Interaction, 8(3-4):255-314.

Robert G. M. Hausmann, Michelene T. H. Chi, and Marguerite Roy. 2004. Learning from collaborative problem solving: An analysis of three hypothesized mechanisms. In K. D. Forbus, D. Gentner, and T. Regier, editors, 26th Annual Conference of the Cognitive Science Society, pages 547-552, Mahwah, NJ.

Pamela W. Jordan and Barbara Di Eugenio. 1997. Control and initiative in collaborative problem solving dialogues. In Working Notes of the AAAI Spring Symposium on Computational Models for Mixed Initiative, pages 81-84, Menlo Park, CA.

Pamela W. Jordan, Michael Ringenberg, and Brian Hall. 2006. Rapidly developing dialogue systems that support learning studies. In Proceedings of the ITS06 Workshop on Teaching with Robots, Agents, and NLP, pages 1-8.
Staffan Larsson and David R. Traum. 2000. Information state and dialogue management in the TRINDI dialogue move engine toolkit. Natural Language Engineering, 6(3-4):323-340.

Bing Liu. 2007. Web data mining: exploring hyperlinks, contents, and usage data. Springer.

Karen E. Lochbaum and Candice L. Sidner. 1990. Models of plans to support communication: An initial report. In Thomas Dietterich and William Swartout, editors, Proceedings of the Eighth National Conference on Artificial Intelligence, pages 485-490, Menlo Park, California. AAAI Press.

Xin Lu. 2007. Expert Tutoring and Natural Language Feedback in Intelligent Tutoring Systems. Ph.D. thesis, University of Illinois at Chicago.

Amy Soller. 2004. Computational modeling and analysis of knowledge sharing in collaborative distance learning. User Modeling and User-Adapted Interaction, 14(4):351-381, January.

Tan Bee Tin. 2003. Does talking with peers help learning? The role of expertise and talk in convergent group discussion tasks. Journal of English for Academic Purposes, 2(1):53-66.

Kurt VanLehn, Pamela W. Jordan, Carolyn Penstein Rosé, Dumisizwe Bhembe, Michael Böttner, Andy Gaydos, Maxim Makatchev, Umarani Pappuswamy, Michael A. Ringenberg, Antonio Roque, Stephanie Siler, and Ramesh Srivastava. 2002. The architecture of Why2-Atlas: A coach for qualitative physics essay writing. In ITS '02: Proceedings of the 6th International Conference on Intelligent Tutoring Systems, pages 158-167, London, UK. Springer-Verlag.

Aurora Vizcaíno. 2005. A simulated student can improve collaborative learning. International Journal of Artificial Intelligence in Education, 15(1):3-40.
Marilyn Walker and Steve Whittaker. 1990. Mixed initiative in dialogue: an investigation into discourse segmentation. In Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics, pages 70-78, Morristown, NJ, USA. Association for Computational Linguistics.

Kim Onn Yap. 1979. Pretest-posttest correlation and regression models. Presented at the Annual Meeting of the American Educational Research Association (63rd, San Francisco, California), April 8-12.
256,739,244
Hierarchical Multi-Task Transformers for Crosslingual Low Resource Phoneme Recognition
This paper proposes a method for multilingual phoneme recognition in unseen, low resource languages. We propose a novel hierarchical multi-task classifier built on a hybrid convolution-transformer acoustic architecture where articulatory attribute and phoneme classifiers are optimized jointly.The model was evaluated on a subset of 24 languages from the Mozilla Common Voice corpus. We found that when using regular multi-task learning, negative transfer effects occurred between attribute and phoneme classifiers. They were reduced by the hierarchical architecture. When evaluating zero-shot crosslingual transfer on a data set with 95 languages, our hierarchical multi-task classifier achieves an absolute PER improvement of 2.78% compared to a phoneme-only baseline.
[ 14289184, 21719838 ]
Hierarchical Multi-Task Transformers for Crosslingual Low Resource Phoneme Recognition

Kevin Glocker kevin.glocker@thi.de Technische Hochschule Ingolstadt, Research Institute AImotion Bavaria, Ingolstadt, Germany

Munir Georges munir.georges@thi.de Technische Hochschule Ingolstadt, Research Institute AImotion Bavaria, Ingolstadt, Germany; Intel Labs Germany

This paper proposes a method for multilingual phoneme recognition in unseen, low resource languages. We propose a novel hierarchical multi-task classifier built on a hybrid convolution-transformer acoustic architecture where articulatory attribute and phoneme classifiers are optimized jointly. The model was evaluated on a subset of 24 languages from the Mozilla Common Voice corpus. We found that when using regular multi-task learning, negative transfer effects occurred between attribute and phoneme classifiers. They were reduced by the hierarchical architecture. When evaluating zero-shot crosslingual transfer on a data set with 95 languages, our hierarchical multi-task classifier achieves an absolute PER improvement of 2.78% compared to a phoneme-only baseline.

Introduction

While many highly effective architectures for speech recognition have been introduced in recent years, most require large amounts of language-specific training data. However, for a substantial portion of the world's languages, only few or no annotated speech recordings are available for training or fine-tuning. To leverage the accuracy of end-to-end architectures, systems intended for low-resource ASR are often (pre-)trained on large multilingual corpora from mostly high-resource languages, such as in Xu et al. (2021), who fine-tune a multilingually pretrained wav2vec 2.0 model for the crosslingual transfer task. They are either fine-tuned on low resource languages, as evaluated by, e.g., Siminyu et al. (2021), or directly applied zero-shot, as outlined by Li et al. (2021a).

Several systems have been introduced that use articulatory attribute systems developed by linguists to improve phoneme recognition performance. In such systems, attributes are primarily used as an input: in the form of trainable embeddings for each attribute individually as proposed by, e.g., Li et al. (2021a), as feature vectors as in, e.g., Zhu et al. (2021), or using signature matrices as described by, e.g., Li et al. (2020). In contrast, Lee et al. (2019) applied multi-task learning with a TDNN architecture on forced alignments to classify articulatory features and triphone states for Mandarin at the same time, using separate classifiers with shared layers.

In this work, a multilingual phoneme recognition architecture is introduced. It is derived from a similar architecture applied to computer assisted pronunciation training in Mandarin (Glocker, 2021). Hierarchical multi-task learning is used to learn jointly to classify articulatory attributes and phonemes, with an additional direct connection between the attribute and the phoneme classifier. The proposed acoustic model for phoneme recognition is introduced in Section 2. The system is then evaluated in Section 3 in the high resource and zero-shot crosslingual settings. Afterwards, results are discussed and the paper concluded in Section 4.

Crosslingual Phoneme Recognition

Section 2.1 describes the hybrid transformer acoustic model for encoding frame sequences. The hierarchical multi-task classifier for articulatory attributes and phonemes is introduced in Section 2.2.
Transformer Acoustic Model

A hybrid convolution and transformer encoder model is used for acoustic sequence modeling, as shown in Figure 1. The architecture and hyperparameter choices are derived from the transformer model introduced by Synnaeve et al. (2019). First, the audio is resampled to 16kHz, and 40-dimensional MFCC features are extracted using 25ms frames with a stride of 10ms. The features are then passed into two GLU-activated convolution layers to encode local context, with a kernel size of three and 512 and 400 channels respectively. Each convolution layer is preceded by layer normalization and followed by a dropout layer for regularization. A stride of 2 is used in the second GLU layer, increasing the receptive field of the model to 5 frames while keeping the output lengths shorter than the length of phoneme sequences for CTC. Sinusoidal positional encodings as proposed by Vaswani et al. (2017) are added to the output representations of the convolution layers. The sequence is passed through a shallow 2-layer transformer. In the transformer, Pre-LN transformer blocks are used without warmup, as proposed by Xiong et al. (2020). Feedforward layers with a hidden size of 2048 and 4 attention heads are used, motivated by Vaswani et al. (2017). The dropout rate is 0.2.

Hierarchical Multi-Task Classifiers

In contrast to previous work (Lee et al., 2019), classifiers are not trained completely independently but are connected in a hierarchical structure. Cascading information between tasks has also previously been successfully applied to jointly optimizing NLP tasks at different "levels", such as POS tagging and dependency parsing (Crawshaw, 2020). In the hierarchy, both the attribute and phoneme classifiers take the normalized output of the transformer acoustic model as an input. In addition to the acoustic representation, the phoneme classifier receives a concatenation of the probability distributions from each articulatory attribute classifier. More specifically, for each time step t, given a set of attribute classifier logits A_t, the transformer hidden vector h_t, and the weights and biases of the phoneme projection layer W and b, the phoneme logits p_t are computed as follows:

v_t = (⊕_{a ∈ A_t} softmax(a)) ⊕ h_t    (1)

p_t = W^T v_t + b    (2)

Each classification layer is then independently but simultaneously optimized using connectionist temporal classification (CTC; Graves et al. (2006)). For consistency, articulatory attribute vectors are directly mapped to each phoneme without merging repetitions. As a result, there is always a 1:1 correspondence between attribute and phoneme labels at training time. While the attribute and phoneme classifiers form a flat hierarchy in this work, the hierarchical structure generalizes to any directed acyclic graph representing phonetic feature structures.
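To make Equations (1) and (2) concrete, here is a minimal PyTorch sketch of the hierarchical classifier head; the layer sizes are illustrative placeholders, and this is a reading of the description above rather than the authors' released code:

```python
import torch
import torch.nn as nn

class HierarchicalHead(nn.Module):
    """Attribute classifiers whose softmax outputs feed the phoneme classifier."""

    def __init__(self, hidden_dim, attr_sizes, n_phonemes):
        super().__init__()
        # One linear classifier per articulatory attribute.
        self.attr_heads = nn.ModuleList(nn.Linear(hidden_dim, n) for n in attr_sizes)
        # The phoneme layer sees h_t plus all attribute distributions (Eq. 1-2).
        self.phoneme_head = nn.Linear(hidden_dim + sum(attr_sizes), n_phonemes)

    def forward(self, h):  # h: (batch, time, hidden_dim)
        attr_logits = [head(h) for head in self.attr_heads]
        # Eq. (1): concatenate softmaxed attribute distributions with h_t.
        v = torch.cat([a.softmax(dim=-1) for a in attr_logits] + [h], dim=-1)
        # Eq. (2): project to phoneme logits; each output is trained with CTC.
        return attr_logits, self.phoneme_head(v)

# Illustrative sizes: 24 attributes with 4 classes each, 100 phoneme labels.
head = HierarchicalHead(hidden_dim=400, attr_sizes=[4] * 24, n_phonemes=100)
attr_logits, phoneme_logits = head(torch.randn(2, 50, 400))
print(phoneme_logits.shape)  # torch.Size([2, 50, 100])
```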
Evaluation

We evaluated the proposed hierarchical multi-task transformer against two baseline variants: (1) in the "Multi-Task" variant, regular multi-task learning is used, where attribute probabilities are not used as inputs to the phoneme classifier; (2) in the "Phonemes Only" model, only the phoneme classifier is used, and attribute information is only applied to phoneme mapping at test time.

Batch sizes are set dynamically for efficiency until the product of the batch and frame sequence dimensions reaches 320,000. The Adam optimizer (Kingma and Ba, 2015) was used for training with β₁ = 0.9 and β₂ = 0.98, as in Vaswani et al. (2017). A learning rate of 0.001 is used. The training was stopped once the average validation set losses did not decrease for more than 3 epochs. The transformer acoustic model was implemented in the PyTorch framework (Paszke et al., 2019), using Torchaudio (Yang et al., 2021) for audio processing and feature extraction.

The data sets for training and evaluation are described in Section 3.1. Section 3.2 presents and analyses the results for phoneme and attribute classification for high and low resource languages.

Datasets

For training and evaluation in the high resource setting, version 10.0 of the Mozilla Common Voice corpus was used, which contains crowdsourced recordings of sentences. Each sentence is tokenized using Stanza (Qi et al., 2020), after which punctuation is removed and each token is transcribed into phonemes using Epitran (Mortensen et al., 2018). Finally, the transcriptions are segmented according to the IPA segments available in the Panphon database (Mortensen et al., 2016) for the phoneme inventory extracted from the training data for each language. The 24 articulatory attributes from Panphon are used for creating and supervising the attribute classifiers. The multilingual training set was constructed from at most 15,000 sentences from the training sets of 24 languages from Common Voice, for which both a tokenization and a grapheme-to-phoneme model is available. The original development and test sets were used unchanged.

The first release¹ of the multilingual corpus published by Li et al. (2021b) is used for evaluating zero-shot transfer in this work, as in Li et al. (2021a). It provides 5,509 validated utterances with phoneme transcriptions for 95 low-resource languages from five continents. Since recordings for Czech, Dutch, Maltese, Hindi and Hungarian are also included in the training data, they are removed from the test data before computing the averages. To handle different inventories and OOV phonemes in the test languages, phonemes predicted by the model are mapped to each target inventory using the Hamming distance between attribute vectors. This corresponds to the "tr2tgt" approach introduced by Xu et al. (2021). For the UCLA Phonetic Corpus, the included inventory files are used for this mapping even if they include a phoneme that doesn't appear in a transcription.
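The inventory-mapping step might look like the following sketch: each predicted phoneme is replaced by the target-inventory phoneme whose articulatory attribute vector is closest in Hamming distance. The feature vectors below are illustrative stand-ins, not actual Panphon values:

```python
import numpy as np

def hamming(u, v):
    """Number of attribute positions where two vectors disagree."""
    return int(np.sum(np.asarray(u) != np.asarray(v)))

def map_to_inventory(predicted, feats, target_inventory):
    """Map each predicted phoneme to the nearest phoneme in the target inventory."""
    return [min(target_inventory, key=lambda t: hamming(feats[p], feats[t]))
            for p in predicted]

# Toy 3-dimensional attribute vectors (real Panphon vectors have 24 dimensions).
feats = {"p": [0, 0, 1], "b": [1, 0, 1], "t": [0, 1, 1], "d": [1, 1, 1]}
print(map_to_inventory(["b", "d"], feats, target_inventory=["p", "t"]))  # ['p', 't']
```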
Experiments

The overall performance on phoneme and articulatory attribute detection on Common Voice can be seen in Table 1. In addition to the phoneme error rate (PER), the attribute error rate (AER) is computed for each attribute individually and then averaged over all attributes. The hierarchical multi-task model reaches lower PER and average AER than regular multi-task learning in both the high and low resource setting. The regular multi-task model also performs worse than the phoneme-only baseline. This shows that negative transfer effects are stronger without the hierarchical connection. Compared to the "Phonemes Only" model, the hierarchical model performs almost identically in the high-resource setting. However, as shown in Table 2, there is an improvement on the unseen low-resource languages from the UCLA Phonetic Corpus. In contrast, the regular multi-task model also yields higher PERs in this setting.

Figure 2 shows the phoneme and average attribute error rates for the Common Voice test sets of the languages used for training. The variance of PERs between languages is high (σ² = 135.03). On the attribute level, the variance of the AER between languages is much less pronounced (σ² = 15.61), and lower AER does not correlate with higher PER (r² = 0.016). For instance, the PER is highest for Arabic and Vietnamese even though their AERs are among the lowest in the test set. Since the AER was improved most consistently across languages through the hierarchical architecture, research into better modeling the connection between articulatory attributes and phonemes could lead to larger PER improvements in future work. For Arabic and Urdu, a contributing factor might be Epitran not transcribing short vowels, since they are not present in their orthography (Mortensen et al., 2018). For Vietnamese, the higher PER is likely due to it being the second-lowest resource language in the training data, with only 2,259 validated utterances, and one of only two tonal languages alongside Thai. In contrast, phoneme recognition is the most accurate for the five Romance languages, including Spanish, Italian and Catalan. They likely benefit the most from the multilingual setting since they are closely related (Hammarström et al., 2022).

A possible explanation for the low correlation between AER and PER is that the frame-level probabilities tend to form single-frame spikes when trained with CTC (Graves et al., 2006). Since the CTC loss is computed for every classifier independently, spikes for attributes of the same phoneme sometimes occur on different frames. As a result, the phoneme classifier is likely to receive high blank probabilities from multiple attribute classifiers.

The crosslingual transfer results are further divided into macroregions in Figure 3, based on Glottolog (Hammarström et al., 2022). The model transfers best to the set of 10 languages from the "Papunesia" region, despite there being no languages from this region in the training set. In contrast, the model generalizes poorly to the four American languages. Some outliers with particularly high PER might also be caused by the noisy conditions under which some utterances were recorded (Li et al., 2021b).

Conclusion

A novel hierarchical multi-task architecture is presented and evaluated together with a hybrid convolution-transformer acoustic model for phoneme classification. In contrast to regular multi-task learning, the phoneme classifier receives attribute probabilities as additional inputs. It tackles the crosslingual transfer task for phoneme recognition in low resource languages. For zero-shot classification in such languages, only their phoneme inventory is required. Negative transfer effects observed in regular multi-task learning were reduced. When evaluated on the UCLA Phonetic Corpus, the proposed system yielded an absolute phoneme error rate reduction of 2.78% across 95 unseen languages compared to a phoneme-only baseline. Future work may investigate the low correlation between AER and PER, and further analyse the cause of the high variance of PER between languages. In particular, we plan to investigate and improve the mapping between the shared phoneme inventory and language-specific inventories to tackle these challenges. Furthermore, tones could be moved to their own layer in the hierarchy to better reflect their suprasegmental nature.
Figure 1: Illustration of the hybrid convolutional transformer phoneme recognition model with the hierarchical connections between attribute and phoneme classifiers.

Figure 2: Phoneme Error Rates (PER) and the averages over all Attribute Error Rates (AER) on the test sets from Common Voice for the languages used for training.

Figure 3: Phoneme Error Rates (PER) for the languages in the UCLA Phonetic Corpus grouped into macroregions according to Glottolog.

¹ https://github.com/xinjli/ucla-phonetic-corpus/releases/tag/v1.0

Table 1: Average phoneme and attribute error rates for the Common Voice subset representing the high resource setting.

Architecture               %PER    %AER
Phonemes Only              48.96   -
Multi-Task                 52.19   19.43
Hierarchical Multi-Task    49.11   17.99

Table 2: Average phoneme and attribute error rates for the UCLA Phonetic Corpus representing the low resource setting.

Architecture               %PER    %AER
Phonemes Only              74.77   -
Multi-Task                 75.28   34.14
Hierarchical Multi-Task    71.99   30.25

References

Michael Crawshaw. 2020. Multi-task learning with deep neural networks: A survey. CoRR, abs/2009.09796.

Kevin Glocker. 2021. Unsupervised end-to-end computer-assisted pronunciation training for Mandarin. Master's thesis, Eberhard Karls Universität Tübingen.

Alex Graves, Santiago Fernández, Faustino J. Gomez, and Jürgen Schmidhuber. 2006. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd International Conference on Machine Learning.

Harald Hammarström, Robert Forkel, Martin Haspelmath, and Sebastian Bank. 2022. glottolog/glottolog: Glottolog database 4.6.

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.

Yueh-Ting Lee, Xuan-Bo Chen, Hung-Shin Lee, Jyh-Shing Roger Jang, and Hsin-Min Wang. 2019. Multi-task learning for acoustic modeling using articulatory attributes. In 2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), pages 855-861.
Xinjian Li, Siddharth Dalmia, David R. Mortensen, Juncheng Li, Alan W. Black, and Florian Metze. 2020. Towards zero-shot learning for automatic phonemic transcription. In AAAI.

Xinjian Li, Juncheng Li, Florian Metze, and Alan W. Black. 2021a. Hierarchical phone recognition with compositional phonetics. In Proc. Interspeech 2021, pages 2461-2465.

Xinjian Li, David R. Mortensen, Florian Metze, and Alan W. Black. 2021b. Multilingual phonetic dataset for low resource speech recognition. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6958-6962.

David R. Mortensen, Siddharth Dalmia, and Patrick Littell. 2018. Epitran: Precision G2P for many languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).

David R. Mortensen, Patrick Littell, Akash Bharadwaj, Kartik Goyal, Chris Dyer, and Lori S. Levin. 2016. Panphon: A resource for mapping IPA segments to articulatory feature vectors. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3475-3484. ACL.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc.
Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations.

Kathleen Siminyu, Xinjian Li, Antonios Anastasopoulos, David R. Mortensen, Michael R. Marlo, and Graham Neubig. 2021. Phoneme recognition through fine tuning of phonetic representations: A case study on Luhya language varieties. In Proc. Interspeech 2021, pages 271-275.

Gabriel Synnaeve, Qiantong Xu, Jacob Kahn, Edouard Grave, Tatiana Likhomanenko, Vineel Pratap, Anuroop Sriram, Vitaliy Liptchinsky, and Ronan Collobert. 2019. End-to-end ASR: from supervised to semi-supervised learning with modern architectures. ArXiv, abs/1911.08460.

Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. ArXiv, abs/1706.03762.

Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, and Tieyan Liu. 2020. On layer normalization in the transformer architecture. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 10524-10533. PMLR.

Qiantong Xu, Alexei Baevski, and Michael Auli. 2021. Simple and effective zero-shot cross-lingual phoneme recognition. ArXiv, abs/2109.11680.
Yao-Yuan Yang, Moto Hira, Zhaoheng Ni, Anjali Chourdia, Artyom Astafurov, Caroline Chen, Ching-Feng Yeh, Christian Puhrsch, David Pollack, Dmitriy Genzel, Donny Greenberg, Edward Z. Yang, Jason Lian, Jay Mahadeokar, Jeff Hwang, Ji Chen, Peter Goldsborough, Prabhat Roy, Sean Narenthiran, Shinji Watanabe, Soumith Chintala, Vincent Quenneville-Bélair, and Yangyang Shi. 2021. Torchaudio: Building blocks for audio and speech processing. arXiv preprint arXiv:2110.15018.

Chengrui Zhu, Keyu An, Huahuan Zheng, and Zhijian Ou. 2021. Multilingual and crosslingual speech recognition using phonological-vector based phone embeddings. In 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 1034-1041.
233,189,537
What Sounds "Right" to Me? Experiential Factors in the Perception of Political Ideology
In this paper, we challenge the assumption that political ideology is inherently built into text by presenting an investigation into the impact of experiential factors on annotator perceptions of political ideology. We construct an annotated corpus of U.S. political discussion, where in addition to ideology labels for texts, annotators provide information about their political affiliation, exposure to political news, and familiarity with the source domain of discussion, Reddit. We investigate the variability in ideology judgments across annotators, finding evidence that these experiential factors may influence the consistency of how political ideologies are perceived. Finally, we present evidence that understanding how humans perceive and interpret ideology from texts remains a challenging task for state-of-the-art language models, pointing towards potential issues when modeling user experiences that may require more contextual knowledge.
[ 15175552, 814656, 14068874, 201703375, 12422512, 1994584 ]
What Sounds "Right" to Me? Experiential Factors in the Perception of Political Ideology

Qinlan Shen qinlans@cs.cmu.edu Carnegie Mellon University

Carolyn P. Rosé cprose@cs.cmu.edu Carnegie Mellon University

Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics, pages 1762-1771, April 19-23, 2021

In this paper, we challenge the assumption that political ideology is inherently built into text by presenting an investigation into the impact of experiential factors on annotator perceptions of political ideology. We construct an annotated corpus of U.S. political discussion, where in addition to ideology labels for texts, annotators provide information about their political affiliation, exposure to political news, and familiarity with the source domain of discussion, Reddit. We investigate the variability in ideology judgments across annotators, finding evidence that these experiential factors may influence the consistency of how political ideologies are perceived. Finally, we present evidence that understanding how humans perceive and interpret ideology from texts remains a challenging task for state-of-the-art language models, pointing towards potential issues when modeling user experiences that may require more contextual knowledge.

Introduction

Social media companies, like Twitter, Facebook, and Reddit, play an important role in political discourse by providing a space for users to interact with different viewpoints. Understanding political discussion on these platforms often requires one to identify the ideologies behind texts, as understanding the viewpoints reflected in a text can provide insight into the partisanship of beliefs (Monroe et al., 2008) or the persuasive strategies used by different ideological groups (Tsur et al., 2015). Prior research on political discussion often relies on a "ground truth" to aid in obtaining ideology labels for social media data. For example, due to the scale of political content on social media, a common paradigm is to obtain some ground-truth labels that are propagated to a larger set of texts using semi-supervised learning (Lin and Cohen, 2010; Zhou et al., 2011). The relationship between a social media artifact and various forms of established political knowledge can also be used to ground or validate ideology labels. Some examples of this include using author interactions with politicians with known party affiliations (Djemili et al., 2014; Barberá, 2015), ideological communities (Chandrasekharan et al., 2017; Shen and Rosé, 2019), and central users (Pennacchiotti and Popescu, 2011) as a starting heuristic, or evaluating a labeling approach by comparing geolocation tags attached to posts with historical voting patterns (Demszky et al., 2019).

A limitation of these approaches, however, is that behavior on social media does not evenly or uniformly reflect the held political beliefs of participants. While there is evidence that people tend to engage with others who share similar beliefs (Halberstam and Knight, 2016), people also commonly interact with or even seek out communities and users they do not agree with (Kelly et al., 2005; Tan et al., 2016).
Additionally, the practice of displaying one's political beliefs, which many grounding techniques rely on, varies in prevalence across online communities (Lampe et al., 2007; Zhong et al., 2017; Pathak, 2020). The concept of linguistic agency (Goffman et al., 1978) also challenges the idea that individual factors, such as ideology, are predictably presented in text. Based on an author's social goals for participating in political discussion, it may not be contextually relevant to project a strong impression of their political ideology. People engaged in interactive political discussion, however, still form perceptions about the alignments of others based on how they sound, often relying on their own conceptions of ideology in the process.

The issue of perceiving ideology also plays a role when ideology labels are obtained using crowdsourced annotators. While making judgments, the annotator plays a similar role to a user participating in the discussion when perceiving the ideology of the speaker behind a text. However, annotators are expected to assign an explicit ideology label to a text with less contextual knowledge about how the text was produced. Thus, annotators may rely heavily on their own experiential factors, such as one's own beliefs or level of political engagement, when considering ideology. As a result, this process may introduce inconsistencies and biases in ideological labels used for political analysis.

In this paper, we present an exploration of how experiential factors play a role in how annotators perceive ideology in text. Building upon prior work investigating annotation bias (Zaidan and Callison-Burch, 2011; Waseem, 2016; Joseph et al., 2017; Ross et al., 2017; Schwartz et al., 2017; Geva et al., 2019), we construct an annotated corpus of posts from political subcommunities on Reddit, but incorporate additional contextual information about the annotators making ideology judgments.¹ While previous work (Joseph et al., 2017) has shown that source-side contextual features, such as user profiles and previous tweets, can influence label quality in stance annotation, we focus our analyses on contextual factors on the side of annotators. Most similar to our work, Carpenter et al. (2017) and Carpenter et al. (2018) examine the impact of an annotator's identity and openness on their ability to accurately assess author attributes, including political orientation. In our work, however, we examine the impact of an annotator's political beliefs, knowledge, and Reddit familiarity on their judgments, using factors more specific to political participation on Reddit. We additionally consider the issue of annotator bias in ideology labeling not as an issue of accuracy but rather an issue of social variability. Under this view, we evaluate the performance of a state-of-the-art language model on its capacity to mirror different human perceptions of ideology, to examine whether extralinguistic factors introduced through annotation may degrade model performance compared to other labels.

Dataset Construction

Our dataset is drawn from the popular content aggregation and discussion platform Reddit. Political discussion on Reddit is centered on subreddits, subcommunities centered on support for specific political candidates, organizations, and issues. For our analyses, we aim to label political distinctions on Reddit along the left-right political spectrum in U.S. politics.
Using the monthly dumps from May to September 2019 from the Reddit Pushshift API (Baumgartner et al., 2020), we collect all submissions and comments from the top political subreddits² by subscriber count. The collected subreddits were manually labeled as left or right, based on the subreddit description and top posts. We then select the top 12 left and top 12 right subreddits from the monthly dumps where discussion is primarily focused on U.S. politics.³ The selected subreddits are shown in Table 3 (Supplementary Material).

Paired Ideology Ranking Task

Prior work on annotating viewpoints (Iyyer et al., 2014; Bamman and Smith, 2015) generally presents annotators with texts in isolation to label with an ideology of interest. One drawback of this approach is the high degree of political expertise annotators are required to have to recognize that a text matches an ideology. To reduce the amount of overhead in recruiting and training political annotators, we instead present annotators with a paired ideology ranking task. Rather than examining texts in isolation, annotators are shown two texts and asked to select the text that is more likely to be authored by someone with the ideology of interest.

For our setup, our goal is to pair a text authored by a left-leaning user with one by a right-leaning user. We use a heuristic-based semi-supervised approach to label texts based on the subreddit participation patterns of their authors. To expand the set of subreddits with ideological labels, we label all subreddits in the monthly dump data as left, neutral, or right based on user overlap with the 24 political subreddits with a known ideological slant (Section 2). For each subreddit, we calculate the z-score of the log odds ratio of a user participating in that subreddit and a known left-leaning subreddit vs. a right-leaning subreddit. A subreddit is labeled as either "left" or "right" if the calculated z-score satisfies a one-tailed Z test at p = 0.05 in the corresponding direction, or "neutral" otherwise. Authors are then labeled based on their distribution of participation on the left vs. right subreddits. While users on Reddit have been shown to primarily engage with pro-social home communities (Datta and Adar, 2019), and similar heuristics have been used in prior work as an indicator of user interests and/or ideology (Olson and Neal, 2015; Chandrasekharan et al., 2017; Shen and Rosé, 2019), we emphasize that we use this heuristic to create a basis of comparison, rather than assuming that it provides "correct" ideology labels.
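One plausible reading of this labeling heuristic is sketched below with additive smoothing; the count structure, smoothing constant, and threshold handling are illustrative assumptions rather than details specified in the paper:

```python
import math

def log_odds_zscore(n_left, n_right, N_left, N_right, alpha=0.5):
    """z-score of the smoothed log odds ratio of a subreddit's users also
    participating in known left- vs. known right-leaning subreddits."""
    log_odds = (math.log((n_left + alpha) / (N_left - n_left + alpha))
                - math.log((n_right + alpha) / (N_right - n_right + alpha)))
    var = (1.0 / (n_left + alpha) + 1.0 / (N_left - n_left + alpha)
           + 1.0 / (n_right + alpha) + 1.0 / (N_right - n_right + alpha))
    return log_odds / math.sqrt(var)

def label_subreddit(z, threshold=1.645):  # one-tailed Z test at p = 0.05
    if z > threshold:
        return "left"
    if z < -threshold:
        return "right"
    return "neutral"

# n_left: this subreddit's users who also post in known left subreddits;
# N_left: all users posting in known left subreddits (analogously for right).
z = log_odds_zscore(n_left=480, n_right=120, N_left=10000, N_right=9000)
print(label_subreddit(z))  # 'left' for these toy counts
```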
In order to ensure that the text comparison helps annotators to perceive ideological differences, rather than presenting two unrelated texts that are essentially considered in isolation, we want to present paired texts that are similar in content. As a first step for generating comparisons with similar content, we require paired texts to discuss the same entity, since political discussions are primarily centered on the politicians, organizations, and geopolitical entities influencing policy decisions. To identify entities of interest, we use Stanford CoreNLP (Manning et al., 2014) to extract occurrences of people, locations, organizations, and ideologies over our corpus of 24 subreddits. We limit entities under consideration to those that have occurred at least 300 times in our corpus and are easy to disambiguate. The considered entities are shown in Table 4 (Supplementary Material).

To limit the impact of confounds, such as topic or entity salience, when comparing texts with the same entity, we use propensity score matching (Rosenbaum and Rubin, 1983) to match each left-aligned text with a right-aligned text that discusses the same entity in a similar context. A subset of 65 pairs was manually curated to use as screening questions to ensure that workers had a baseline knowledge of U.S. politics. These screening pairs were selected to be easier than the main task pairs: they are more limited in which entities are discussed and express more explicit and/or extreme attitudes.

Annotation Task Details

We recruit workers on Amazon Mechanical Turk to complete our paired ideology ranking task. Given a pair of texts discussing the same highlighted political entity, we ask annotators to determine which of the two posts is more likely to have been written by someone who is either left-leaning or right-leaning. Annotators were instructed to use as many contextual cues as possible to form an impression of the political views held by the authors of the texts. To provide some guidance to annotators for what cues to consider, we train workers to consider the following features in the instructions:

• Attitude: evaluation in favor of or against an entity. Ex: I trust Bernie from someone who favors Bernie Sanders (left).
• Positioning: situating one's viewpoint with respect to the entity's. Ex: Listen to the Dems refers to Democrats as an out-group (right).
• Jargon: use of specialty in-group vocabulary. Ex: Trump GEOTUS! - a "God-Emperor" abbreviation specific to Trump supporters (right).

The annotation task is shown in Figure 1 (Supplementary Material). Each worker was asked to annotate 18 pairs from our main task set and 8 screening questions, which were scattered throughout the assignment as an attention check. For each main task pair, we assign up to 5 workers for annotation. We restrict the worker pool to the U.S. and filter out workers who scored less than 75% on the screening questions. Overall, we collect annotations for 630 non-screening pairs.

Annotator Background Post-Survey

After the annotation task, workers were asked to complete a survey (questions listed in Supplementary Material A) to assess their political affiliation, exposure to U.S. political news, and familiarity with political discussion on Reddit. Answers to the survey were inspected manually to assign annotators labels along three identifier categories:

• Political ideology: This category indicates the annotator's political ideology. Annotators are labeled as left, center, or right based on their self-identified ideology and affiliation with U.S. political parties.
• News access: This category indicates the annotator's exposure to political news. Annotators are labeled as news or non-news based on how frequently they access news on the 2020 U.S. presidential election.
• Reddit familiarity: This category indicates the annotator's familiarity with participation in political discussion on Reddit. Annotators are labeled as a redditor or a non-redditor based on their level of participation on Reddit in the past year. Redditors are further subdivided into political and non-political redditors based on their familiarity with the political subreddits included in our corpus.

Dataset Statistics and Analysis

Annotator Demographics

Of the 180 workers initially recruited for the task, 22 were discarded for answering fewer than 75% of the screening questions correctly, giving us a final pool of 158 annotators. Table 1 illustrates the distribution of the remaining workers across labels within the three categories. Labels across categories do not appear to be correlated (mean variance inflation factor = 1.043).

Agreement/Difference Results

We use Krippendorff's α (Krippendorff, 2004) to evaluate annotator agreement on our task to account for different user pools for each question.
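Such α values can be computed with, e.g., the krippendorff Python package, which handles the missing entries that arise when each question is seen by a different worker pool; the matrix below is illustrative toy data, not the study's annotations:

```python
import numpy as np
import krippendorff  # pip install krippendorff

# Rows = annotators, columns = question pairs; binary answers, with np.nan
# where an annotator did not see that question (illustrative toy data).
reliability_data = np.array([
    [0, 1, 1, np.nan, 0],
    [0, 1, 0, 1, np.nan],
    [np.nan, 1, 1, 1, 0],
])
alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print(round(alpha, 3))
```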
Despite a high degree of agreement across the pool of screening questions (α = 0.7311), the overall agreement across annotators in our general, non-screening set is relatively low (α = 0.3878), suggesting that the task of predicting the ideology of a text is nuanced and open to interpretation. We also calculate agreement for workers within each of our annotator groups (Table 1) in order to examine whether annotators with similar backgrounds are more likely to perceive ideology similarly. Overall, in-group agreement remains around the same level as in the general task. However, an interesting pattern across annotator labels is that workers who are less likely to be familiar with the expression of political ideology on Reddit (non-redditors, α = 0.359; people who do not frequently read political news, α = 0.336; and people who do not identify with the left or right, α = 0.325) have lower agreement. This suggests that familiarity with the norms of political discussion on Reddit may contribute to a more consistent perception of ideology for Reddit texts.

We additionally use McNemar's chi-squared test over pairwise comparisons of annotator groups under the same category to examine whether annotators with different backgrounds differ in their judgments. To ground the comparison, we evaluate annotator groups based on whether the majority of workers in the group gave the same answer as our semi-supervised labels (Section 2.1). Because these semi-supervised labels only provide a noisy estimate of ideology, we use them to create a basis of comparison; rather than checking how "accurately" each group estimates ideology, this heuristic allows us to specifically quantify differences in judgments between groups. We find that for all comparison pairs, groups differ significantly in their answers over the same questions. In our pairwise comparisons, we also saw that the ideology of the annotator contributes heavily to variability in annotator judgments. The two groups with the highest percentage of questions with mismatched answers are left-leaning and right-leaning annotators, and 3 of the top 4 comparison pairs with the most mismatched answers are between ideology groups (Supplementary Material Table 6).

Sources of Variability

To examine possible explanations for the variability in annotator judgments across groups, we focus primarily on differences in judgments between left-leaning and right-leaning annotators. When examining differences at the entity level, we find that the entities with the most mismatches tended to be highly visible entities that had a strong connection to a particular party during the 2020 election, such as highly visible political figures (e.g. Joe Biden, Nancy Pelosi) or the most common ideologies associated with each side (e.g. Republican Party, conservatism, liberalism), compared to less salient entities. This is unsurprising, as we expect people to develop different conceptions of salient entities building up to major events like elections, even with relatively limited media exposure.

Finally, to investigate what aspects of the posts themselves contributed to variations in judgments between left-leaning and right-leaning workers, we ran a salience analysis (Monroe et al., 2008) for mismatched question pairs with highly visible entities. We found that annotators were less likely to select a post that expresses explicit abuse towards an opposing entity as being authored by someone with the same political views as themselves.
For example, a right-leaning annotator was less likely than a left-leaning annotator to consider a post calling Biden a "pedophile" as right-leaning. This may suggest that social desirability bias (Krumpal, 2013) has an impact on decision-making, even when the task is not directly related to collecting data about the annotators themselves.

Perceptions vs. Heuristic Labels

Prior work (Castelle, 2018) suggests that deep text classification models perform poorly when labels are influenced by extralinguistic contextual factors. While the semi-supervised labels that we generated are based on a behavioral heuristic outside of the text, our analyses of human judgments suggest that the annotation process introduced additional interactional factors into ideological labeling. We investigate whether these factors influence model performance by evaluating a BERT-based (Devlin et al., 2019) model on its ability to match human judgments on the paired ideology ranking task. For our evaluation model, we fine-tune BERT-mask on the 24-subreddit corpus. Next, for each text, we average its contextual embeddings in two ways: over (a) all tokens in the text and (b) all entity-related tokens in the text. We concatenate the two averaged embeddings and use the resulting vector as input to a pairwise logistic regression model. For each annotator group, we use the majority answer for each question as the group label.

Table 2 shows the performance of the model on the full 630-pair non-screening set. For all annotator groups, we found that the model has a significant drop in performance when asked to match human judgments rather than labels generated through our semi-supervised heuristic on the same dataset. To examine whether this drop in performance was due to inconsistencies in human judgments on particularly difficult or contentious distinctions, we additionally present results on a higher-consensus subset (α = 0.6216) of 459 text pairs, where at least 75% of workers select the same answer. We found that while there was a small increase in performance on matching human judgments on the high-consensus subset for all groups, performance still dropped compared to the semi-supervised labels, suggesting that matching human understanding of ideology is challenging for these models.

[Table 2: F1 scores for a BERT-based ranking model on semi-supervised (SS) and human annotator (H) labels for the full non-screening set (F) and a high-consensus subset (C). *p < 0.05 difference in performance between the semi-supervised and human annotator labels.]

Conclusion and Future Work

In this paper, we reconsider the idea of ground-truth labels of political ideology and investigate the impact of experiential factors on human perception of ideology in text. We construct and analyze an annotated corpus that incorporates experiential information about annotators, finding evidence that annotator backgrounds influence the consistency of political ideology judgments and that current classification models struggle to match human perceptions of ideology across different groups. Our analyses of the factors contributing to variation in judgments point to a greater need for targeted recruitment of annotators who are familiar with and contextualized to the domain being annotated. In future work, we aim to extend our investigation to examine how stylistic elements of text contribute to people's perception of political ideologies in interaction.
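The sketch below illustrates the pairwise ranking evaluation just described, with a generic pre-trained checkpoint standing in for the fine-tuned BERT-mask model; `pairs` and `labels` are hypothetical stand-ins for the annotated data, not the authors' resources.

```python
# Pairwise ranking sketch: mean-pool BERT embeddings over all tokens and over
# entity tokens, concatenate, and fit a logistic regression over pair differences.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tok = AutoTokenizer.from_pretrained("bert-base-uncased")  # stand-in checkpoint
enc = AutoModel.from_pretrained("bert-base-uncased").eval()

def text_vector(text: str, entity: str) -> np.ndarray:
    """Concatenate the mean over all tokens with the mean over entity tokens."""
    inputs = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = enc(**inputs).last_hidden_state[0]          # (seq_len, hidden)
    all_mean = hidden.mean(dim=0)
    entity_ids = set(tok(entity, add_special_tokens=False)["input_ids"])
    mask = torch.tensor([tid in entity_ids for tid in inputs["input_ids"][0].tolist()])
    entity_mean = hidden[mask].mean(dim=0) if mask.any() else all_mean
    return torch.cat([all_mean, entity_mean]).numpy()

pairs = [  # hypothetical (text_a, text_b, shared entity) triples
    ("I trust Bernie.", "Bernie is a socialist menace.", "Bernie"),
    ("Pelosi is doing her job.", "Pelosi should be locked up.", "Pelosi"),
]
labels = [0, 1]  # hypothetical group judgments over each pair

# Represent a pair by the difference of its two text vectors.
X = np.stack([text_vector(a, e) - text_vector(b, e) for a, b, e in pairs])
clf = LogisticRegression(max_iter=1000).fit(X, np.array(labels))
```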
These analyses may provide further insight into the effectiveness of political communication strategies or the differences in how political groups interact with in-group and out-group members.

Table 3: Selected subreddits included in the construction of the dataset and their ideological alignments.
  Left:  r/LateStageCapitalism, r/SandersForPresident, r/democrats, r/socialism, r/Liberal, r/VoteBlue, r/progressive, r/ChapoTrapHouse, r/neoliberal, r/esist, r/The_Mueller
  Right: r/The_Donald, r/Libertarian, r/Republican, r/Conservative, r/JordanPeterson, r/TheNewRight, r/Anarcho_Capitalism, r/conservatives, r/ShitPoliticsSays, r/POLITIC, r/AskTrumpSupporters, r/AskThe_Donald

Table 4: Selected entities included in the construction of the dataset. Italicized entities are also included in the screening set.
  People:        Donald Trump, Joe Biden, Bernie Sanders, Barack Obama, Hillary Clinton, Robert Mueller, Nancy Pelosi, Kamala Harris, Alexandria Ocasio-Cortez, Andrew Yang, Elizabeth Warren, Pete Buttigieg
  Ideologies:    conservatives/conservatism, liberals/liberalism, libertarians/libertarianism, socialists/socialism, capitalists/capitalism
  Organizations: Republican Party/Republicans, Democratic Party/Democrats, Congress
  Locations:     Russia

Figure 1: Screenshot of a question in the paired ideological annotation task. Annotators are presented with two texts discussing the same highlighted entity in a similar context, one from a left-leaning user and another from a right-leaning user based on a semi-supervised labeling heuristic. Annotators are asked to select which of the two texts is more likely to be authored by someone with the highlighted ideology.

A Survey Questions

A.1 Political ideology

1. Please indicate where you identify on the liberal-conservative spectrum.
• Liberal • Somewhat liberal • Moderate • Somewhat conservative • Conservative • I don't know

2. Please indicate how strongly you identify with the following U.S. political parties.
• Parties: Democratic Party, Republican Party, Libertarian Party, Green Party, Constitution Party, Democratic Socialists of America, Reform Party
• Responses: I do not identify with this party / Somewhat identify / Identify / Strongly identify / I don't know

Reddit familiarity

1. On average, how often have you posted content to Reddit in the past year?

2. Please indicate your familiarity with the following subreddits (listed in Table 3).
• I have never heard of this subreddit • I have heard of but never accessed this subreddit • I have accessed or posted on this subreddit at least once • I sometimes access or post on this subreddit • I often access or post on this subreddit

Ivar Krumpal. 2013. Determinants of social desirability bias in sensitive surveys: a literature review. Quality & Quantity, 47(4):2025-2047.

Cliff A. C. Lampe, Nicole Ellison, and Charles Steinfield. 2007. A familiar face(book): Profile elements as signals in an online social network. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 435-444.

Frank Lin and William W. Cohen. 2010. Semi-supervised classification of network data using very few labels. In 2010 International Conference on Advances in Social Networks Analysis and Mining, pages 192-199. IEEE.

Michael Castelle. 2018. The linguistic ideologies of deep abusive language classification. In Proceedings of the 2nd Workshop on Abusive Language Online (ALW2), pages 160-170.

Eshwar Chandrasekharan, Umashanthi Pavalanathan, Anirudh Srinivasan, Adam Glynn, Jacob Eisenstein, and Eric Gilbert. 2017. You can't stay here: The efficacy of Reddit's 2015 ban examined through hate speech. Proceedings of the ACM on Human-Computer Interaction, 1(CSCW):1-22.

Srayan Datta and Eytan Adar. 2019. Extracting inter-community conflicts in Reddit. In Proceedings of the International AAAI Conference on Web and Social Media, volume 13, pages 146-157.

Dorottya Demszky, Nikhil Garg, Rob Voigt, James Zou, Jesse Shapiro, Matthew Gentzkow, and Dan Jurafsky. 2019. Analyzing polarization in social media: Method and application to tweets on 21 mass shootings. In Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 2970-3005.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics.

Sarah Djemili, Julien Longhi, Claudia Marinica, Dimitris Kotzinos, and Georges-Elia Sarfati. 2014. What does Twitter have to say about ideology? In NLP 4 CMC: Natural Language Processing for Computer-Mediated Communication/Social Media, pre-conference workshop at KONVENS 2014, volume 1. Universitätsverlag Hildesheim.

Mor Geva, Yoav Goldberg, and Jonathan Berant. 2019. Are we modeling the task or the annotator? An investigation of annotator bias in natural language understanding datasets. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1161-1166.

Erving Goffman. 1978. The Presentation of Self in Everyday Life. Harmondsworth, London.

Yosh Halberstam and Brian Knight. 2016. Homophily, group size, and the diffusion of political information in social networks: Evidence from Twitter. Journal of Public Economics, 143:73-88.

Mohit Iyyer, Peter Enns, Jordan Boyd-Graber, and Philip Resnik. 2014. Political ideology detection using recursive neural networks. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1113-1122.

Kenneth Joseph, Lisa Friedland, William Hobbs, David Lazer, and Oren Tsur. 2017. ConStance: Modeling annotation contexts to improve stance classification. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1115-1124.

John Kelly, Danyel Fisher, and Marc Smith. 2005. Debate, division, and diversity: Political discourse networks in USENET newsgroups. In Online Deliberation Conference, pages 1-35. Stanford University.

Klaus Krippendorff. 2004. Reliability in content analysis: Some common misconceptions and recommendations. Human Communication Research, 30(3):411-433.
Table 5: Krippendorff's α agreement results for survey categories for the full non-screening annotated set (F), the screening questions (S), and the high-consensus questions subset (C).

C Mismatch Statistics

Table 6: Comparison pairs with the highest percentage of questions where the majority gave different answers.
  Group comparison               % mismatch
  left/right                     28.15
  right/center                   26.97
  non-political/non-redditor     26.47
  left/center                    24.44
  right/non-news                 23.74
  non-political/right            23.17
  news/non-news                  22.71
  non-political/political        22.61
  non-political/center           21.05
  non-redditor/political         20.40

Table 7: Entities with the highest percentage of questions where left-leaning and right-leaning annotators gave different answers.
  Entity                         % mismatch
  libertarians/libertarianism    100.0
  Republican Party/Republicans   53.85
  Russia                         43.75
  conservatives/conservatism     42.86
  Hillary Clinton                39.13
  Joe Biden                      38.89
  Nancy Pelosi                   36.36
  liberals/liberalism            31.81
  Robert Mueller                 28.57
  Alexandria Ocasio-Cortez       26.67

D Human Judgments vs. Labels

Table 8: Average percentage of human judgments that match the semi-supervised labels, per annotator group.
  Category   Group             Match
  Overall    -                 68.53
  Ideology   left              70.05
             right             67.30
             center            65.38
  News       news              69.24
             non-news          65.80
  Reddit     redditor          68.49
             - political       69.03
             - non-political   66.87
             non-redditor      68.65

Footnotes:
1. This study was approved by the institutional review board at our institution.
2. https://www.reddit.com/r/redditlists/comments/josdr/list_of_political_subreddits/
3. r/politics was not included due to its initial history as a default subreddit contributing to its high subscriber count.

Acknowledgements

This work was supported in part by NSF Grant IIS 1546393 and the K&L Gates Presidential Fellowship.

David Bamman and Noah A. Smith. 2015. Open extraction of fine-grained political statements. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 76-85.

Pablo Barberá. 2015. Birds of the same feather tweet together: Bayesian ideal point estimation using Twitter data. Political Analysis, 23(1):76-91.

Jason Baumgartner, Savvas Zannettou, Brian Keegan, Megan Squire, and Jeremy Blackburn. 2020. The Pushshift Reddit dataset. In Proceedings of the International AAAI Conference on Web and Social Media, volume 14, pages 830-839.

Jordan Carpenter, Daniel Preotiuc-Pietro, Jenna Clark, Lucie Flekova, Laura Smith, Margaret L. Kern, Anneke Buffone, Lyle Ungar, and Martin Seligman. 2018. The impact of actively open-minded thinking on social media communication. Judgment and Decision Making, 13(6):562.
Jordan Carpenter, Daniel Preotiuc-Pietro, Lucie Flekova, Salvatore Giorgi, Courtney Hagan, Margaret L. Kern, Anneke E. K. Buffone, Lyle Ungar, and Martin E. P. Seligman. 2017. Real men don't say "cute": Using automatic language analysis to isolate inaccurate aspects of stereotypes. Social Psychological and Personality Science, 8(3):310-322.

Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55-60.

Burt L. Monroe, Michael P. Colaresi, and Kevin M. Quinn. 2008. Fightin' words: Lexical feature selection and evaluation for identifying the content of political conflict. Political Analysis, 16(4):372-403.

Randal S. Olson and Zachary P. Neal. 2015. Navigating the massive world of Reddit: Using backbone networks to map user interests in social media. PeerJ Computer Science, 1:e4.

Arjunil Pathak. 2020. Extraction and Analysis of Self Identity in Twitter Biographies. Ph.D. thesis, State University of New York at Buffalo.

Marco Pennacchiotti and Ana-Maria Popescu. 2011. Democrats, Republicans and Starbucks afficionados: User classification in Twitter. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 430-438.

Daniel Preotiuc-Pietro, Ye Liu, Daniel Hopkins, and Lyle Ungar. 2017. Beyond binary labels: Political ideology prediction of Twitter users. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 729-740.

Paul R. Rosenbaum and Donald B. Rubin. 1983. The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1):41-55.
Björn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, and Michael Wojatzki. 2017. Measuring the reliability of hate speech annotations: The case of the European refugee crisis. arXiv preprint arXiv:1701.08118.

Roy Schwartz, Maarten Sap, Ioannis Konstas, Leila Zilles, Yejin Choi, and Noah A. Smith. 2017. The effect of different writing tasks on linguistic style: A case study of the ROC story cloze task. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 15-25.

Qinlan Shen and Carolyn Rosé. 2019. The discourse of online content moderation: Investigating polarized user responses to changes in Reddit's quarantine policy. In Proceedings of the Third Workshop on Abusive Language Online, pages 58-69.

Chenhao Tan, Vlad Niculae, Cristian Danescu-Niculescu-Mizil, and Lillian Lee. 2016. Winning arguments: Interaction dynamics and persuasion strategies in good-faith online discussions. In Proceedings of the 25th International Conference on World Wide Web, pages 613-624.

Oren Tsur, Dan Calacci, and David Lazer. 2015. A frame of mind: Using statistical models for detection of framing and agenda setting campaigns. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1629-1638.

Zeerak Waseem. 2016. Are you a racist or am I seeing things? Annotator influence on hate speech detection on Twitter. In Proceedings of the First Workshop on NLP and Computational Social Science, pages 138-142.
Omar Zaidan and Chris Callison-Burch. 2011. Crowdsourcing translation: Professional quality from non-professionals. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1220-1229.

Changtao Zhong, Hau Wen Chang, Dmytro Karamshuk, Dongwon Lee, and Nishanth Sastry. 2017. Wearing many (social) hats: How different are your different social network personae? In 11th International Conference on Web and Social Media, ICWSM 2017, pages 397-406. AAAI Press.

Daniel Xiaodan Zhou, Paul Resnick, and Qiaozhu Mei. 2011. Classifying the political leaning of news articles and users from user votes. In Proceedings of the International AAAI Conference on Web and Social Media.
225,062,658
Semi-supervised Domain Adaptation for Semantic Dependency Parsing
Although deep learning has recently brought significant progress to semantic dependency parsing, semantic annotation data is very expensive to label, and when a dependency parser that performs well in a single domain is migrated to other domains, its performance drops sharply. Therefore, to make such parsers practical, the problem of domain adaptation must be solved. This paper proposes a new adversarial-learning-based domain adaptation model for dependency parsing. We propose a shared dual-encoder structure based on adversarial learning, and introduce a domain-private auxiliary task and an orthogonality constraint. We also explore the effectiveness and performance of a variety of pre-trained language models on the cross-domain dependency parsing task.
[ 17254305, 18828233, 53080778, 5578635, 219308588, 67855733, 9289495, 61274, 12013822 ]
Semi-supervised Domain Adaptation for Semantic Dependency Parsing (半监督跨领域语义依存分析技术研究)

Dazhan Mao (maodazhan@foxmail.com), Huayong Li (lihuayong@blcu.edu.cn), Yanqiu Shao* (yqshao163@163.com)
National Language Resources Monitoring and Research Center (Print Media) and School of Information Science, Beijing Language and Culture University, 15 Xueyuan Road, Haidian District, Beijing 100083, China
* Corresponding Author

Keywords: semantic dependency parsing; domain adaptation; adversarial learning; pre-trained language models

©2020 China National Conference on Computational Linguistics. Licensed under the Creative Commons Attribution 4.0 license.

Existing dependency parsing methods fall into two main families: transition-based algorithms (Chen and Manning, 2014; Dyer et al., 2015) and graph-based algorithms (Chen et al., 2013; Wang and Chang, 2016). Recently, with the rise of contextual representations such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018), a large body of work has studied domain adaptation methods built on pre-trained contextual representations and achieved good results, demonstrating their great potential for domain adaptation tasks. Liu et al. (2019a) analyze the linguistic knowledge encoded in contextual representations and their transferability, and Mulcaire et al. (2019) show that polyglot contextual representations improve cross-lingual transfer. Adversarial training has been applied to cross-domain universal dependency parsing (Sato et al., 2017), and Chen et al. (2017) propose a shared-private model for Chinese multi-criteria word segmentation; related adversarial shared-private designs also appear in multi-task text classification (Liu et al., 2017) and cross-genre relation extraction (Shi et al., 2018). Building on the shared-private model, we merge the per-domain private encoders into a single one and add a domain prediction auxiliary task. Since pre-trained language models are very helpful for cross-domain transfer, we use BERT as the bottom encoder.

BERT is a stack of multi-layer Transformer networks (Vaswani et al., 2017). Formally, each BERT layer computes

h_{i,j} = \mathrm{BERT}_j(x_i)    (1)

where i indexes the input position, j indexes the BERT layer, and x_i is the input character.

By default, BERT uses the output of its last layer as the overall output. However, extensive research has shown that the layers of pre-trained language models encode different information: the bottom layers of BERT capture basic linguistic knowledge, the middle layers encode a certain amount of syntactic structure, and the top layers encode semantic knowledge that is highly related to the pre-training task. Directly using the last layer is therefore not necessarily the best choice, so we introduce a layer-weighting mechanism that averages the BERT layer outputs in a trainable way:

h_i = c \sum_j \mathrm{softmax}(w)_j \cdot \mathrm{BERT}_{j,i}    (2)

where w_j is a trainable weight scalar for the j-th BERT layer, c is a trainable scalar that rescales the final weighted representation, and \mathrm{BERT}_{j,i} is the output of the j-th BERT layer at position i.

The layer-weighting mechanism yields a representation for the input character sequence. Since dependency parsing operates at the word level, the character sequence must be mapped to the word sequence; we use a simple last-character heuristic, taking the representation of the final character of each word as the representation of the whole word.

3.2 Domain-shared dual encoders

On top of the pre-trained language model we attach two encoders: a domain-invariant feature encoder f^{E(x)}_{share} and a domain-private feature encoder f^{E(x)}_{private}, which extract domain-invariant and domain-private features, respectively. Both encoders are implemented as two-layer Transformer networks, where each Transformer layer can be formalized as

\mathrm{Transformer}(X) = \mathrm{Skip}(\mathrm{FF}, \mathrm{Skip}(\mathrm{MultiHead}, X))    (3)
\mathrm{Skip}(f, h) = \mathrm{LayerNorm}(h + \mathrm{Dropout}(f(h)))    (4)
\mathrm{FF}(h) = \mathrm{GELU}(h W_1^T + b_1) W_2^T + b_2    (5)

where GELU is the Gaussian error linear unit activation (Hendrycks and Gimpel, 2016). To guarantee that the domain-invariant encoder extracts features shared across domains, we attach an adversarial discriminator to it and force it, through adversarial learning, to encode domain-invariant features. Likewise, to guarantee that the domain-private encoder extracts the private information of each domain, we attach a domain classification auxiliary task to it.

3.3 Adversarial discriminator

Besides the Biaffine_{edge} and Biaffine_{label} decoders required by the parsing task, the domain-invariant feature encoder is additionally connected to an adversarial discriminator D_{adv}(x), which is responsible for extracting features that are invariant across domains. Following WGAN (Arjovsky et al., 2017; Arjovsky and Bottou, 2017), we adopt an adversarial discriminator based on the Wasserstein distance; when a Wasserstein-distance-based loss serves as the adversarial loss, the discriminator is effectively a Wasserstein distance regression network.

Formally, for source-domain inputs X_{source} and target-domain inputs X_{target}, the domain-invariant encoder produces representation distributions P_s and P_t, and the Wasserstein distance between them equals

W(P_s, P_t) = \sup_{\|f\|_L \le 1} \mathbb{E}_{x \sim P_s}[f(x)] - \mathbb{E}_{x \sim P_t}[f(x)]    (6)

where f is a Lipschitz-1 continuous function; following WGAN, the original definition has been converted into this dual form so that the distance can be estimated. As required by WGAN, we approximate the Lipschitz-1 function with a one-layer fully connected network f_W whose parameters are clipped to the range [-0.01, 0.01]. The Wasserstein adversarial loss is then

L^W_{adv}(S_s, S_t) = f_W(S_s) - f_W(S_t)    (7)

During training, on the one hand we optimize the discriminator to produce the most accurate Wasserstein distance estimate, which requires minimizing the adversarial loss L^W_{adv}(S_s, S_t) with respect to the discriminator parameters; on the other hand, we want the representations produced by the domain-invariant encoder to "fool" the Wasserstein discriminator as much as possible, which requires maximizing L^W_{adv}(S_s, S_t) with respect to the encoder parameters. Adversarial learning with the Wasserstein distance is therefore a min-max game:

\min_{\Theta_{dis}} \max_{\Theta_{share}} L^W_{adv}    (8)

where \Theta_{dis} denotes the discriminator parameters and \Theta_{share} denotes the shared-encoder parameters. During training we alternate between the \min_{\Theta_{dis}} step and the \max_{\Theta_{share}} step.
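A minimal PyTorch sketch of the Wasserstein critic in Eqs. (6)-(8), assuming 768-dimensional shared-encoder features; it is an illustration of the described setup, not the paper's released code.

```python
# Wasserstein critic: a single linear layer approximates the Lipschitz-1
# function f_W, with weights clipped to [-0.01, 0.01] as in WGAN.
import torch
import torch.nn as nn

critic = nn.Linear(768, 1)                     # f_W over shared-encoder features
opt_critic = torch.optim.Adam(critic.parameters(), lr=1e-4)

def critic_step(s_src: torch.Tensor, s_tgt: torch.Tensor) -> float:
    """Minimize L_adv = f_W(S_s) - f_W(S_t) w.r.t. the critic parameters."""
    loss = critic(s_src.detach()).mean() - critic(s_tgt.detach()).mean()
    opt_critic.zero_grad()
    loss.backward()
    opt_critic.step()
    for p in critic.parameters():              # weight clipping per WGAN
        p.data.clamp_(-0.01, 0.01)
    return loss.item()

# The shared encoder is updated in the alternating step by maximizing the same
# loss (i.e., minimizing its negative) so that its two feature distributions
# become indistinguishable to the critic; the paper alternates at a 5:1 ratio.
```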
3.4 Biaffine decoding layer

We use a biaffine decoder to predict the dependency arcs between pairs of words and their dependency labels. The word-level representation h^{lstm}_i from the encoder is first passed through two feed-forward networks (FNN) to obtain a "head" representation and a "dependent" representation of each word:

h^{edge\text{-}head}_i = \mathrm{FNN}^{edge\text{-}head}(h^{lstm}_i)    (9)
h^{edge\text{-}dep}_i = \mathrm{FNN}^{edge\text{-}dep}(h^{lstm}_i)    (10)

A biaffine transformation then scores every possible dependency arc in the sentence:

\mathrm{Biaffine}(x_1, x_2) = x_1^T U x_2 + W(x_1 \otimes x_2) + b    (11)
s^{edge}_{i,j} = \mathrm{Biaffine}^{edge}(h^{edge\text{-}dep}_i, h^{edge\text{-}head}_j)    (12)
p^{edge}_{i,j} = \mathrm{sigmoid}(s^{edge}_{i,j})    (13)

During training, the arc decoder loss is the cross entropy

J^{edge}(\Theta_p) = -\hat{p}^{edge}_{i,j} \log p^{edge}_{i,j} - (1 - \hat{p}^{edge}_{i,j}) \log(1 - p^{edge}_{i,j})    (14)

where \hat{p}^{edge}_{i,j} \in \{0, 1\} indicates whether the gold graph contains the arc from word j to word i.

Label prediction works very similarly to arc prediction; the only difference is that the space of dependency labels between two words is large, so softmax (Grave et al., 2016) is used instead of the sigmoid function, yielding the label probability p^{label}_{i,j}:

p^{label}_{i,j} = \mathrm{softmax}(s^{label}_{i,j})    (15)

During training, the label decoding loss is

J^{label}(\Theta_p) = -\sum_{label} \log p^{label}_{i,j}    (16)

Finally, the arc probabilities and label probabilities are passed to the decoding algorithm, which produces the final dependency graph. An in-domain parser is trained by minimizing the parsing loss J^{parser}(\Theta_p), the weighted sum of the arc loss and the label loss:

J^{parser}(\Theta_p) = \beta J^{label}(\Theta_p) + (1 - \beta) J^{edge}(\Theta_p)    (17)

where \beta is a hyper-parameter controlling the relative weight of the two decoder losses.

3.5 Domain classification auxiliary task

We want the private encoder to extract domain-private information, but minimizing the parsing loss L_{parser} alone cannot guarantee that the private encoder really captures the private information of the corresponding domain. We therefore introduce an additional private auxiliary task, domain classification, which judges which domain the encoded features come from. This auxiliary task is similar to text domain classification and is implemented by a domain classifier f_c(x) consisting of one fully connected layer and a softmax layer:

f_c(P^T, \theta_C) = \mathrm{softmax}(b + U P^T)    (18)

where b and U are the parameters of the fully connected layer and P is the output feature of the private encoder f^E_{private}. During training, the classifier's cross-entropy loss L_{classify} is defined as

L_{classify} = -\sum_{i=1}^{N} \sum_{j=1}^{2} y_i^j \log(\hat{y}_i^j)    (19)

where \hat{y}_i^j is the label predicted by the softmax layer and y_i^j is the true label. Minimizing L_{classify} forces the private encoder to encode the private features of the corresponding domain.

3.6 Orthogonality constraint

The auxiliary task ensures that the private encoder learns domain-private information, but the private encoder may still learn some domain-invariant features, leading to redundant representations. To ensure that the two encoders capture non-redundant features, we add an orthogonality constraint between them, which during training penalizes features of the private encoder that overlap with those of the domain-invariant encoder, thereby preventing the private encoder from extracting domain-invariant features. The orthogonality loss is defined as

L_{diff} = \|S^T P\|_F^2    (20)

where S is the output of the shared encoder f^E_{share}, P is the output of the private encoder f^E_{private}, and \|\cdot\|_F^2 denotes the squared Frobenius norm. The Frobenius norm of a matrix A is defined as

\|A\|_F = \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n} |a_{i,j}|^2}    (21)

i.e., the square root of the sum of the squares of all matrix entries. Minimizing L_{diff} therefore drives the product S^T P toward zero, which is equivalent to forcing the two matrices to be mutually "orthogonal", so that the output features of the two encoders do not overlap.

3.7 Joint training

Integrating all the task losses above gives four losses in total: the adversarial loss L^C_{adv} (or L^W_{adv}), the parsing loss L_{parser}, the auxiliary domain classification loss L_{classify} of the private encoder, and the orthogonality loss L_{diff} between the domain-invariant and domain-private encoders. The final training objective is

L = L_{parser} + \lambda L_{adv} + \gamma L_{classify} + \eta L_{diff}    (22)

where the parsing loss is defined as

L_{parser}(\Theta_p) = \beta L_{label}(\Theta_p) + (1 - \beta) L_{edge}(\Theta_p)    (23)

and \beta, \lambda, \gamma, \eta are hyper-parameters controlling the relative magnitude of the losses. Note that when unlabeled target-domain data is used, L_{parser} is computed only on the source-domain data.
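As an illustration of the orthogonality constraint in Eq. (20), the sketch below computes the squared Frobenius norm of S^T P in PyTorch; tensor shapes and names are assumptions, not the paper's code.

```python
# Orthogonality loss: penalize overlap between shared (S) and private (P)
# encoder outputs by minimizing ||S^T P||_F^2.
import torch

def orthogonality_loss(shared: torch.Tensor, private: torch.Tensor) -> torch.Tensor:
    # shared, private: (seq_len, hidden) outputs for the same sentence
    return torch.norm(shared.transpose(0, 1) @ private, p="fro") ** 2

S = torch.randn(30, 768, requires_grad=True)   # toy shared-encoder features
P = torch.randn(30, 768, requires_grad=True)   # toy private-encoder features
loss = orthogonality_loss(S, P)
loss.backward()                                 # gradients flow into both encoders
```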
4 Experiments

4.1 Datasets

Our source-domain data comes from SemEval-2016 Task 9 (Che et al., 2012) and the textbook 《博雅汉语》 (Boya Chinese). After investigation, we chose two broad classes of target domains comprising four sub-domains. The first class is literary text, including essays (《文化苦旅》), novels (《小王子》, 《少女小渔》), and screenplays (《武林外传》). The second class is downstream applications, consisting of a medical diagnosis text sub-domain.

Following the Chinese semantic dependency graph annotation guidelines and using a semantic dependency graph annotation platform, we organized six linguistics students to annotate the data. For each target domain we annotated only a small amount of data, split into training, development, and test sets, and cleaned and filtered the remaining unlabeled data, as shown in Table 1.

Table 1: Dataset splits.
  Domain                         Train   Dev    Test   Unlabeled
  Source (balanced corpus)       38000   2000   2000   0
  Target: literature, essay      3000    1000   1000   20000
  Target: literature, novel      3000    1000   1000   30000
  Target: literature, script     3000    1000   1000   8000
  Target: application, medical   2000    500    500    30000

Figure 3: Comparison between our model and the baseline models.

4.2 Experimental settings

We tried several pre-trained language models, all with 12 layers and a hidden size of 768. The domain-private and domain-invariant feature encoders both use two-layer Transformer networks with 8 attention heads, a hidden size of 768, a dropout rate of 0.2, and ReLU activation. The control weight λ of the adversarial loss is 0.5, the weight γ of the domain classification auxiliary loss is 0.05, the weight η of the orthogonality loss is 0.001, and the weight β of the parsing loss is 0.5. The learning rate is set to 0.0001 for the adversarial discriminator and 0.001 for the rest of the model. We train with Adam with L2 regularization, alternating the min and max steps at a ratio of 5:1. The maximum input sentence length is 100; longer sentences are skipped. Training is done on 4 NVIDIA Tesla V100 16GB GPUs with a per-GPU batch size of 32.

4.3 Baseline models

To better evaluate the domain adaptation ability of the proposed model, we choose two baselines: a transfer model (Transfer) and a shared-private model with a domain classification adversarial loss (SP-Adv):

• Transfer: a single-domain LSTM+Biaffine parser, first pre-trained on the source-domain data and then fine-tuned on the corresponding target domain.
• SP-Adv: the classic shared-private framework with adversarial training, but without the orthogonality constraint and without the domain prediction auxiliary task.

In addition, to compare the dynamic representations of pre-trained language models with traditional static word embeddings, we replace the pre-trained language model with word embeddings plus POS embeddings, keeping the rest of the model unchanged; we call this baseline LSTM-WAdv.

4.4 Results

4.4.1 Comparison with the baselines

Table 2 reports the LAS of our model and the baselines on the four target domains, where Transfer and SP-Adv denote the two baselines, LSTM-WAdv denotes our model without the pre-trained language model, and BERT-WAdv (Devlin et al., 2018), XLNet-WAdv (Yang et al., 2019), and RoBERTa-WAdv (Liu et al., 2019b) denote our model with the BERT, XLNet, and RoBERTa pre-trained language models, respectively.

For a more intuitive comparison, we plot the models against each other (Figure 3). As the figure shows, our domain adaptation framework based on pre-trained language models and adversarial learning clearly outperforms both baselines, and the framework with pre-trained language models also outperforms the word-embedding variant. Among the three pre-trained language models, RoBERTa shows the best domain adaptation performance.

Table 2: LAS of our model and the baselines on the four target domains.
  Model          Essay   Novel   Script  Medical
  Transfer       70.20   73.49   69.42   68.21
  SP-Adv         73.49   74.96   71.71   69.51
  LSTM-WAdv      74.09   75.33   72.25   70.19
  BERT-WAdv      75.39   76.96   73.47   71.28
  XLNet-WAdv     74.86   76.21   72.75   70.64
  RoBERTa-WAdv   75.51   76.92   73.56   71.46

4.4.2 Effect of unlabeled data on domain adaptation

To further explore the effect of the amount of unlabeled data in semi-supervised learning, we ran two additional sets of experiments, on the novel domain (the highest LAS above) and the medical domain (the lowest LAS above). We split all the unlabeled data of these two domains into 10 equal parts and trained the model with increasing amounts of unlabeled data, from none to all of it, recording the LAS at each step. As Figure 4 shows, in both the medical and the novel domain, LAS grows nearly linearly with the amount of unlabeled data. Note that in the novel domain, once more than seventy percent of the unlabeled data is used, the LAS improvement becomes very weak, indicating that the two encoders have essentially converged and cannot improve further.

Figure 4: Effect of the amount of unlabeled data on domain adaptation.

4.4.3 Ablation study

To further analyze how the proposed components affect the final domain adaptation performance, we ran ablation experiments on top of LSTM-WAdv, removing in turn the adversarial loss, the orthogonality constraint, the domain prediction auxiliary task, and the private feature encoder, as shown in Table 3.

Table 3: Ablation study (LAS).
  Experiment                  Essay   Novel   Script  Medical  Avg. drop
  LSTM-WAdv                   74.09   75.33   72.25   70.19    -
  w/o adversarial loss        72.82   74.90   71.10   69.48    0.890
  w/o orthogonality           73.90   75.16   71.81   69.84    0.288
  w/o auxiliary task          73.62   75.20   72.15   69.91    0.245
  w/o private features        73.41   75.01   71.60   69.74    0.525

Among the four components, the adversarial loss has the largest impact on the final model: removing it lowers the average LAS over the four target domains by 0.89, which again demonstrates the important role of adversarial learning in domain adaptation. The next most influential component is the private feature encoder, whose removal lowers LAS by 0.525 on average; note that once the private encoder is removed, the orthogonality constraint and the auxiliary task also lose their effect, so the private features matter more than the other two components. All four components contribute positively to the final performance; even the least influential one, the auxiliary task, contributes an average of 0.245. These experiments fully demonstrate the effectiveness of the proposed model.

5 Conclusion

On the cross-domain parsing dataset described above, the proposed domain adaptation framework based on pre-trained language models and adversarial learning clearly outperforms both baselines, and among the three pre-trained models tried, RoBERTa shows the best domain adaptation performance. The ablation study also verifies that every component of the proposed framework contributes positively to the final performance.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (61872402), the Humanities and Social Sciences Planning Fund of the Ministry of Education (17YJAZH068), a Beijing Language and Culture University project funded by the Fundamental Research Funds for the Central Universities (18ZDJ03), and the Open Project Fund of the State Key Laboratory of Pattern Recognition.

Martin Arjovsky and Leon Bottou. 2017. Towards principled methods for training generative adversarial networks. Stat, 1050.

Martin Arjovsky, Soumith Chintala, and Leon Bottou. 2017. Wasserstein GAN.

Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. 2016. Domain separation networks. CoRR, abs/1608.06019.
Wanxiang Che, Meishan Zhang, Yanqiu Shao, and Ting Liu. 2012. SemEval-2016 Task 9: Chinese semantic dependency parsing. In Proceedings of the First Joint Conference on Lexical and Computational Semantics and the Sixth International Workshop on Semantic Evaluation.

D. Chen and C. D. Manning. 2014. A fast and accurate dependency parser using neural networks.

Wenliang Chen, Min Zhang, and Haizhou Li. 2013. Utilizing dependency language models for graph-based dependency parsing models. In Meeting of the Association for Computational Linguistics: Long Papers.

Xinchi Chen, Zhan Shi, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial multi-criteria learning for Chinese word segmentation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1193-1203, Vancouver, Canada. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding.

Timothy Dozat and Christopher D. Manning. 2016. Deep biaffine attention for neural dependency parsing.

Timothy Dozat, Peng Qi, and Christopher Manning. 2017. Stanford's graph-based neural dependency parser at the CoNLL 2017 shared task. Pages 20-30.

Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition-based dependency parsing with stack long short-term memory. Computer Science, 37(2):321-332.

Yaroslav Ganin and Victor Lempitsky. 2014. Unsupervised domain adaptation by backpropagation.

Edouard Grave, Armand Joulin, Moustapha Cisse, David Grangier, and Herve Jegou. 2016. Efficient softmax approximation for GPUs.

Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (GELUs).
Terry Koo and Michael Collins. 2010. Efficient third-order dependency parsers. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL 2010), July 11-16, 2010, Uppsala, Sweden.

Wouter M. Kouw and Marco Loog. 2018. An introduction to domain adaptation and transfer learning.

Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial multi-task learning for text classification.

Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019a. Linguistic knowledge and transferability of contextual representations.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. RoBERTa: A robustly optimized BERT pretraining approach.

Phoebe Mulcaire, Jungo Kasai, and Noah A. Smith. 2019. Polyglot contextual representations improve crosslingual transfer. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3912-3918, Minneapolis, Minnesota. Association for Computational Linguistics.

Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations.

Motoki Sato, Hitoshi Manabe, Hiroshi Noji, and Yuji Matsumoto. 2017. Adversarial training for cross-domain universal dependency parsing. In Proceedings of the CoNLL Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies.

Ge Shi, Chong Feng, Lifu Huang, Boliang Zhang, and Heyan Huang. 2018. Genre separation network with adversarial training for cross-genre relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need.

Wenhui Wang and Baobao Chang. 2016. Graph-based dependency parsing with bidirectional LSTM. In Meeting of the Association for Computational Linguistics.

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding.
21,723,747
A Unified Model for Extractive and Abstractive Summarization using Inconsistency Loss
We propose a unified model combining the strength of extractive and abstractive summarization. On the one hand, a simple extractive model can obtain sentence-level attention with high ROUGE scores but low readability. On the other hand, a more complicated abstractive model can obtain word-level dynamic attention to generate a more readable paragraph. In our model, sentence-level attention is used to modulate the word-level attention such that words in less attended sentences are less likely to be generated. Moreover, a novel inconsistency loss function is introduced to penalize the inconsistency between the two levels of attention. By end-to-end training our model with the inconsistency loss and the original losses of the extractive and abstractive models, we achieve state-of-the-art ROUGE scores while being the most informative and readable summarization approach on the CNN/Daily Mail dataset in a solid human evaluation.
[ 6532096, 1499080, 21850704, 8174613, 6857205, 1918428, 10480989 ]
A Unified Model for Extractive and Abstractive Summarization using Inconsistency Loss

Wan-Ting Hsu, National Tsing Hua University
Chieh-Kai Lin, National Tsing Hua University
Ming-Ying Lee, National Tsing Hua University
Kerui Min, Cheetah Mobile
Jing Tang (tangjing@cmcm.com), Cheetah Mobile
Min Sun (sunmin@ee.nthu.edu.tw), National Tsing Hua University

Introduction

Text summarization is the task of automatically condensing a piece of text to a shorter version while maintaining the important points. The ability to condense text information can aid many applications such as creating news digests, presenting search results, and generating reports. There are mainly two types of approaches: extractive and abstractive. Extractive approaches assemble summaries directly from the source text, typically selecting one whole sentence at a time. In contrast, abstractive approaches can generate novel words and phrases not copied from the source text.

Original Article: McDonald's says...... The company says it expects the new 'Artisan Grilled Chicken' to be in its more than 14,300 U.S. stores by the end of next week, in products including a new sandwich, as well as existing sandwiches, wraps and salads. It says the biggest change is the removal of sodium phosphates, which it said was used to keep the chicken moist, in favor of vegetable starch. The new recipe also does not use maltodextrin, which McDonald's said is generally used as a sugar to increase browning or as a carrier for seasoning. Jessica Foust, director of culinary innovation at McDonald's, said the changes were made because customers said they want 'simple, clean ingredients' they are familiar with...... And Panera Bread has said it plans to purge artificial colors, flavors and preservatives from its food by 2016......

Extractive Approach: The company says it expects the new 'Artisan Grilled Chicken' to be in its more than 14,300 U.S. stores by the end of next week, in products including a new sandwich, as well as existing sandwiches, wraps and salads. It says the biggest change is the removal of sodium phosphates, which it said was used to keep the chicken moist, in favor of vegetable starch. The new recipe also does not use maltodextrin, which McDonald's said is generally used as a sugar to increase browning or as a carrier for seasoning.

Abstractive Approach: McDonald's says it expects the new 'Artisan Grilled Chicken' to be in its more than 14,300 U.S. stores by the end of next week.
The company says the changes were made because customers said they want 'simple, clean ingredients' they are familiar with. McDonald's said it plans to purge artificial colors, flavors and preservatives from its food by 2016.

Unified Approach: McDonald's says it expects the new 'Artisan Grilled Chicken' to be in its more than 14,300 U.S. stores by the end of next week, in products including a new sandwich, as well as existing sandwiches, wraps and salads. It says the biggest change is the removal of sodium phosphates. The new recipe also does not use maltodextrin, which McDonald's said is generally used as a sugar to increase browning or as a carrier for seasoning.

Figure 1: Comparison of extractive, abstractive, and our unified summaries on a news article. The extractive model picks the most important but incoherent or not concise (see blue bold font) sentences. The abstractive summary is readable and concise but still loses or mistakes some facts (see red italics font). The final summary rewritten from fragments (see underline font) has the advantages of both the extractive approach (importance) and the abstractive approach (coherence (see green bold font)).

Hence, abstractive summaries can be more coherent and concise than extractive summaries. Extractive approaches are typically simpler: they output the probability of each sentence being selected into the summary. Many earlier works on summarization (Cheng and Lapata, 2016; Nallapati et al., 2016a, 2017; Narayan et al., 2017; Yasunaga et al., 2017) are extractive. In contrast, abstractive approaches (Nallapati et al., 2016b; See et al., 2017; Paulus et al., 2017; Fan et al., 2017) typically involve sophisticated mechanisms in order to paraphrase, generate unseen words in the source text, or even incorporate external knowledge. Neural networks (Nallapati et al., 2017; See et al., 2017) based on the attentional encoder-decoder model (Bahdanau et al., 2014) were able to generate abstractive summaries with high ROUGE scores but suffer from inaccurately reproducing factual details and an inability to deal with out-of-vocabulary (OOV) words. Recently, See et al. (2017) proposed a pointer-generator model which can copy words from the source text as well as generate unseen words. Despite recent progress in abstractive summarization, extractive approaches (Nallapati et al., 2017; Yasunaga et al., 2017) and the lead-3 baseline (i.e., selecting the first 3 sentences) still achieve strong performance in ROUGE scores.

We propose to explicitly take advantage of the strengths of state-of-the-art extractive and abstractive summarization and introduce the following unified model. Firstly, we treat the probability output of each sentence from the extractive model (Nallapati et al., 2017) as sentence-level attention. Then, we modulate the word-level dynamic attention from the abstractive model (See et al., 2017) with the sentence-level attention such that words in less attended sentences are less likely to be generated. In this way, extractive summarization mostly benefits abstractive summarization by mitigating spurious word-level attention. Secondly, we introduce a novel inconsistency loss function to encourage consistency between the two levels of attention. The loss function can be computed without additional human annotation and has been shown to make our unified model mutually beneficial to both extractive and abstractive summarization. On the CNN/Daily Mail dataset, our unified model achieves state-of-the-art ROUGE scores and outperforms a strong extractive baseline (i.e., lead-3).
Finally, to ensure the quality of our unified model, we conduct a solid human evaluation and confirm that our method significantly outperforms recent state-of-the-art methods in informativity and readability. To summarize, our contributions are twofold:

• We propose a unified model combining sentence-level and word-level attentions to take advantage of both extractive and abstractive summarization approaches.
• We propose a novel inconsistency loss function to ensure that our unified model is mutually beneficial to both extractive and abstractive summarization. The unified model with the inconsistency loss achieves the best ROUGE scores on the CNN/Daily Mail dataset and outperforms recent state-of-the-art methods in informativity and readability in human evaluation.

Related Work

Text summarization has been widely studied in recent years. We first introduce related works on neural-network-based extractive and abstractive summarization, and then a few related works using hierarchical attention mechanisms.

Extractive summarization. Kågebäck et al. (2014) and Yin and Pei (2015) use neural networks to map sentences into vectors and select sentences based on those vectors. Cheng ...

Figure 2: Our unified model combines the word-level and sentence-level attentions. Inconsistency occurs when word attention is high but sentence attention is low (see red arrow).

... (Vinyals et al., 2015) into their models to deal with out-of-vocabulary (OOV) words. Chen et al. (2016) and See et al. (2017) restrain their models from attending to the same word to decrease repeated phrases in the generated summary. Paulus et al. (2017) use policy gradient on summarization and point out the fact that high ROUGE scores might still lead to low human evaluation scores. Fan et al. (2017) apply a convolutional sequence-to-sequence model and design several new tasks for summarization. ... achieve a high readability score in human evaluation using generative adversarial networks.

Hierarchical attention. The attention mechanism was first proposed by Bahdanau et al. (2014). Yang et al. (2016) proposed a hierarchical attention mechanism for document classification. We adopt the method of combining sentence-level and word-level attention of Nallapati et al. (2016b). However, their sentence attention is dynamic, meaning that it differs for each generated word, whereas our sentence attention is fixed for all generated words. Inspired by the high performance of extractive summarization, we propose to use fixed sentence attention.

Our model combines the state-of-the-art extractive model (Nallapati et al., 2017) and abstractive model (See et al., 2017) by combining sentence-level attention from the former and word-level attention from the latter. Furthermore, we design an inconsistency loss to enhance the cooperation between the extractive and abstractive models.

Our Unified Model

We propose a unified model to combine the strength of both the state-of-the-art extractor (Nallapati et al., 2017) and abstracter (See et al., 2017). Before going into the details of our model, we first define the tasks of the extractor and abstracter.

Problem definition. The input of both the extractor and abstracter is a sequence of words w = [w_1, w_2, ..., w_m, ...], where m is the word index. The sequence of words also forms a sequence of sentences s = [s_1, s_2, ..., s_n, ...], where n is the sentence index. The m-th word is mapped into the n(m)-th sentence, where n(·) is the mapping function.
The output of the extractor is the sentence-level attention β = [β_1, β_2, ..., β_n, ...], where β_n is the probability of the n-th sentence being extracted into the summary. On the other hand, our attention-based abstracter computes word-level attention α^t = [α^t_1, α^t_2, ..., α^t_m, ...] dynamically while generating the t-th word in the summary. The output of the abstracter is the summary text y = [y_1, y_2, ..., y_t, ...], where y_t is the t-th word in the summary. In the following, we introduce the mechanism for combining sentence-level and word-level attentions in Sec. 3.1. Next, we define the novel inconsistency loss that ensures the extractor and abstracter are mutually beneficial in Sec. 3.2. We also give the details of our extractor in Sec. 3.3 and our abstracter in Sec. 3.4. Finally, our training procedure is described in Sec. 3.5.

Combining Attentions

Pieces of evidence (e.g., Vaswani et al. (2017)) show that the attention mechanism is very important for NLP tasks. Hence, we propose to explicitly combine the sentence-level β_n and word-level α^t_m attentions by simple scalar multiplication and renormalization. The updated word attention \hat{\alpha}^t_m is

\hat{\alpha}^t_m = \frac{\alpha^t_m \times \beta_{n(m)}}{\sum_m \alpha^t_m \times \beta_{n(m)}}    (1)

The multiplication ensures that the updated word attention \hat{\alpha}^t_m can be high only when both the word-level attention α^t_m and the sentence-level attention β_n are high. Since the sentence-level attention β_n from the extractor already achieves high ROUGE scores, β_n intuitively modulates the word-level attention α^t_m to mitigate spurious word-level attention, such that words in less attended sentences are less likely to be generated (see Fig. 2). As highlighted in Sec. 3.4, the word-level attention \hat{\alpha}^t_m significantly affects the decoding process of the abstracter. Hence, the updated word-level attention is our key to improving abstractive summarization.

Inconsistency Loss

Instead of only leveraging the complementary nature of sentence-level and word-level attentions, we would like to encourage these two levels of attention to be mostly consistent with each other during training, as an intrinsic learning target for free (i.e., without additional human annotation). Explicitly, we would like the sentence-level attention to be high when the word-level attention is high. Hence, we design the following inconsistency loss:

L_{inc} = -\frac{1}{T} \sum_{t=1}^{T} \log\Big(\frac{1}{|K|} \sum_{m \in K} \alpha^t_m \times \beta_{n(m)}\Big)    (2)

where K is the set of top-K attended words and T is the number of words in the summary. This implicitly encourages the distribution of the word-level attentions to be sharp and the sentence-level attention to be high. To avoid the degenerate solution in which the word attention distribution is one-hot and the sentence attention is high, we include the original loss functions for training the extractor (L_{ext} in Sec. 3.3) and abstracter (L_{abs} and L_{cov} in Sec. 3.4). Note that Eq. 1 is the only part where the extractor interacts with the abstracter. Our proposed inconsistency loss facilitates our end-to-end trained unified model being mutually beneficial to both the extractor and abstracter.

Extractor

Our extractor is inspired by Nallapati et al. (2017). The main difference is that our extractor does not need to produce the final summary; it mainly needs to obtain a short list of important sentences with high recall to further facilitate the abstracter. We first introduce the network architecture and the loss function. Finally, we define our ground-truth important sentences to encourage high recall.
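The sketch below illustrates Eqs. (1)-(2) in PyTorch under assumed tensor shapes; it is a hedged reconstruction of the described computation, not the authors' implementation.

```python
# Combining attentions (Eq. 1) and the inconsistency loss (Eq. 2).
import torch

def combine_attentions(word_attn, sent_attn, word2sent):
    # word_attn: (T, M) word attention per decoder step
    # sent_attn: (N,) sentence attention; word2sent: (M,) maps word -> sentence
    scaled = word_attn * sent_attn[word2sent]           # numerator of Eq. (1)
    return scaled / scaled.sum(dim=-1, keepdim=True)    # renormalize over words

def inconsistency_loss(word_attn, sent_attn, word2sent, k=3):
    # Eq. (2): for each step, penalize a low product of word- and
    # sentence-level attention over the top-K attended words.
    sent_per_word = sent_attn[word2sent]                # (M,)
    topk_vals, topk_idx = word_attn.topk(k, dim=-1)     # (T, k)
    prod = topk_vals * sent_per_word[topk_idx]          # alpha * beta on top-K
    return -torch.log(prod.mean(dim=-1)).mean()

# Toy shapes: T=4 decoder steps, M=6 words in N=2 sentences.
word_attn = torch.softmax(torch.randn(4, 6), dim=-1)
sent_attn = torch.sigmoid(torch.randn(2))
word2sent = torch.tensor([0, 0, 0, 1, 1, 1])
updated = combine_attentions(word_attn, sent_attn, word2sent)
loss = inconsistency_loss(word_attn, sent_attn, word2sent)
```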
Architecture. The model consists of a hierarchical bidirectional GRU which extracts sentence representations and a classification layer for predicting the sentence-level attention $\beta_n$ for each sentence (see Fig. 3).

Figure 3: Architecture of the extractor. We treat the sigmoid output of each sentence as sentence-level attention $\in [0, 1]$.

Extractor loss. The following sigmoid cross-entropy loss is used,

$$\mathcal{L}_{ext} = -\frac{1}{N}\sum_{n=1}^{N}\big(g_n \log \beta_n + (1 - g_n)\log(1 - \beta_n)\big), \quad (3)$$

where $g_n \in \{0, 1\}$ is the ground-truth label for the $n$-th sentence and $N$ is the number of sentences. When $g_n = 1$, it indicates that the $n$-th sentence should be attended to facilitate abstractive summarization.

Ground-truth label. The goal of our extractor is to extract sentences with high informativity, which means the extracted sentences should contain as much as possible of the information needed to generate an abstractive summary. To obtain the ground-truth labels $g = \{g_n\}_n$, first, we measure the informativity of each sentence $s_n$ in the article by computing the ROUGE-L recall score (Lin, 2004) between the sentence $s_n$ and the reference abstractive summary $\hat{y} = \{\hat{y}_t\}_t$. Second, we sort the sentences by their informativity and select sentences in order from high to low informativity. We add one sentence at a time if the new sentence can increase the informativity of all the selected sentences. Finally, we obtain the ground-truth labels $g$ and train our extractor by minimizing Eq. 3. Note that our method is different from Nallapati et al. (2017), who aim to extract a final summary for an article and therefore use the ROUGE F-1 score to select ground-truth sentences; we focus on high informativity, hence we use the ROUGE recall score to obtain as much information as possible with respect to the reference summary $\hat{y}$.

Abstracter

The second part of our model is an abstracter that reads the article and then generates a summary word-by-word. We use the pointer-generator network proposed by See et al. (2017) and combine it with the extractor by combining sentence-level and word-level attentions (Sec. 3.1).

Figure 4: Decoding mechanism in the abstracter. In the decoder step $t$, our updated word attention $\hat{\alpha}^t$ is used to generate the context vector $h^*(\hat{\alpha}^t)$. Hence, it updates the final word distribution $P_{final}$.

Pointer-generator network. The pointer-generator network (See et al., 2017) is a specially designed sequence-to-sequence attentional model that can generate the summary by copying words from the article or generating words from a fixed vocabulary at the same time. The model contains a bidirectional LSTM which serves as an encoder to encode the input words $w$ and a unidirectional LSTM which serves as a decoder to generate the summary $y$. For details of the network architecture, please refer to See et al. (2017). In the following, we describe how the updated word attention $\hat{\alpha}^t$ affects the decoding process.

Notations. We first define some notations. $h^e_m$ is the encoder hidden state for the $m$-th word. $h^d_t$ is the decoder hidden state in step $t$. $h^*(\hat{\alpha}^t) = \sum_{m=1}^{M} \hat{\alpha}^t_m \times h^e_m$ is the context vector, which is a function of the updated word attention $\hat{\alpha}^t$. $P_{vocab}(h^*(\hat{\alpha}^t))$ is the probability distribution over the fixed vocabulary before applying the copying mechanism:

$$P_{vocab}(h^*(\hat{\alpha}^t)) = \mathrm{softmax}\big(W_2 (W_1 [h^d_t, h^*(\hat{\alpha}^t)] + b_1) + b_2\big), \quad (4)$$

where $W_1$, $W_2$, $b_1$ and $b_2$ are learnable parameters. $P_{vocab} = \{P^{vocab}_w\}_w$, where $P^{vocab}_w(h^*(\hat{\alpha}^t))$ is the probability of word $w$ being decoded. $p_{gen}(h^*(\hat{\alpha}^t)) \in [0, 1]$ is the generating probability (see Eq. 8 in See et al. (2017)) and $1 - p_{gen}(h^*(\hat{\alpha}^t))$ is the copying probability.
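The notation above maps directly to a few lines of code. The following hedged NumPy sketch of Eq. 4 treats the weights `W1`, `W2`, `b1`, `b2` as placeholder arrays; it is an illustration of the computation, not the authors' code.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def vocab_distribution(h_dec_t, h_enc, alpha_hat_t, W1, b1, W2, b2):
    """Eq. 4: context vector h* = sum_m alpha_hat^t_m * h^e_m, then a
    two-layer projection of [h^d_t; h*] followed by a softmax over the
    fixed vocabulary."""
    h_star = (alpha_hat_t[:, None] * h_enc).sum(axis=0)   # context vector
    hidden = W1 @ np.concatenate([h_dec_t, h_star]) + b1
    return softmax(W2 @ hidden + b2)                      # P_vocab
```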
Final word distribution. $P^{final}_w(\hat{\alpha}^t)$ is the final probability of word $w$ being decoded (i.e., $y_t = w$). It is related to the updated word attention $\hat{\alpha}^t$ as follows (see Fig. 4),

$$P^{final}_w(\hat{\alpha}^t) = p_{gen}(h^*(\hat{\alpha}^t))\, P^{vocab}_w(h^*(\hat{\alpha}^t)) + \big(1 - p_{gen}(h^*(\hat{\alpha}^t))\big) \sum_{m: w_m = w} \hat{\alpha}^t_m. \quad (5)$$

Note that $P_{final} = \{P^{final}_w\}_w$ is the probability distribution over the fixed vocabulary and out-of-vocabulary (OOV) words. Hence, OOV words can be decoded. Most importantly, it is clear from Eq. 5 that $P^{final}_w(\hat{\alpha}^t)$ is a function of the updated word attention $\hat{\alpha}^t$. Finally, we train the abstracter to minimize the negative log-likelihood:

$$\mathcal{L}_{abs} = -\frac{1}{T}\sum_{t=1}^{T} \log P^{final}_{\hat{y}_t}(\hat{\alpha}^t), \quad (6)$$

where $\hat{y}_t$ is the $t$-th token in the reference abstractive summary.

Coverage mechanism. We also apply the coverage mechanism (See et al., 2017) to prevent the abstracter from repeatedly attending to the same place. In each decoder step $t$, we calculate the coverage vector $c^t = \sum_{t'=1}^{t-1} \hat{\alpha}^{t'}$, which indicates how much attention has been paid to every input word so far. The coverage vector $c^t$ will be used to calculate the word attention $\hat{\alpha}^t$ (see Eq. 11 in See et al. (2017)). Moreover, a coverage loss $\mathcal{L}_{cov}$ is calculated to directly penalize repetition in the updated word attention $\hat{\alpha}^t$:

$$\mathcal{L}_{cov} = \frac{1}{T}\sum_{t=1}^{T}\sum_{m=1}^{M} \min(\hat{\alpha}^t_m, c^t_m). \quad (7)$$

The objective function for training the abstracter with the coverage mechanism is the weighted sum of the negative log-likelihood and the coverage loss.

Training Procedure

We first pre-train the extractor by minimizing $\mathcal{L}_{ext}$ in Eq. 3 and the abstracter by minimizing $\mathcal{L}_{abs}$ and $\mathcal{L}_{cov}$ in Eq. 6 and Eq. 7, respectively. When pre-training, the abstracter takes the ground-truth extracted sentences (i.e., sentences with $g_n = 1$) as input. To combine the extractor and abstracter, we propose two training settings: (1) two-stage training and (2) end-to-end training.

Two-stage training. In this setting, we view the sentence-level attention $\beta$ from the pre-trained extractor as hard attention. The extractor becomes a classifier that selects sentences with high attention (i.e., $\beta_n >$ threshold). We simply combine the extractor and abstracter by feeding the extracted sentences to the abstracter. Note that we finetune the abstracter, since its input text becomes the extractive summary obtained from the extractor.

End-to-end training. For end-to-end training, the sentence-level attention $\beta$ is soft attention and is combined with the word-level attention $\alpha^t$ as described in Sec. 3.1. We end-to-end train the extractor and abstracter by minimizing four loss functions: $\mathcal{L}_{ext}$, $\mathcal{L}_{abs}$, $\mathcal{L}_{cov}$, as well as $\mathcal{L}_{inc}$ in Eq. 2. The final loss is as below:

$$\mathcal{L}_{e2e} = \lambda_1 \mathcal{L}_{ext} + \lambda_2 \mathcal{L}_{abs} + \lambda_3 \mathcal{L}_{cov} + \lambda_4 \mathcal{L}_{inc}, \quad (8)$$

where $\lambda_1$, $\lambda_2$, $\lambda_3$, $\lambda_4$ are hyper-parameters. In our experiments, we give $\mathcal{L}_{ext}$ a bigger weight (e.g., $\lambda_1 = 5$) when end-to-end training with $\mathcal{L}_{inc}$, since we found that $\mathcal{L}_{inc}$ is relatively large, such that the extractor tends to ignore $\mathcal{L}_{ext}$.

Experiments

We introduce the dataset and the implementation details of our method as evaluated in our experiments.

Dataset

We evaluate our models on the CNN/Daily Mail dataset (Hermann et al., 2015; Nallapati et al., 2016b; See et al., 2017), which contains news stories from the CNN and Daily Mail websites. Each article in this dataset is paired with one human-written multi-sentence summary. This dataset has two versions: anonymized and non-anonymized. The former contains the news stories with all the named entities replaced by special tokens (e.g., @entity2), while the latter contains the raw text of each news story.
We follow See et al. (2017) and obtain the non-anonymized version of this dataset, which has 287,113 training pairs, 13,368 validation pairs and 11,490 test pairs.

Implementation Details

We train our extractor and abstracter with 128-dimensional word embeddings and set the vocabulary size to 50k for both source and target text. We follow Nallapati et al. (2017) and See et al. (2017) and set the hidden dimension to 200 and 256 for the extractor and abstracter, respectively. We use the Adagrad optimizer (Duchi et al., 2011) and apply early stopping based on the validation set. In the testing phase, we limit the length of the summary to 120.

Pre-training. We use a learning rate of 0.15 when pre-training the extractor and abstracter. For the extractor, we limit both the maximum number of sentences per article and the maximum number of tokens per sentence to 50 and train the model for 27k iterations with a batch size of 64. The abstracter takes the ground-truth extracted sentences (i.e., sentences with $g_n = 1$) as input. We limit the length of the source text to 400 and the length of the summary to 100 and use a batch size of 16. We train the abstracter without the coverage mechanism for 88k iterations and continue training for 1k iterations with the coverage mechanism ($\mathcal{L}_{abs} : \mathcal{L}_{cov} = 1 : 1$).

Two-stage training. During two-stage training, the abstracter takes as input the extracted sentences with $\beta_n > 0.5$, where $\beta$ is obtained from the pre-trained extractor. We finetune the abstracter for 10k iterations.

End-to-end training. During end-to-end training, we minimize the four loss functions (Eq. 8) with $\lambda_1 = 5$ and $\lambda_2 = \lambda_3 = \lambda_4 = 1$. We set K to 3 for computing $\mathcal{L}_{inc}$. Due to memory limitations, we reduce the batch size to 8 and thus use a smaller learning rate of 0.01 for stability. The abstracter here reads the whole article; hence, we increase the maximum length of the source text to 600. We end-to-end train the model for 50k iterations.

Results

Our unified model not only generates an abstractive summary but also extracts the important sentences in an article. Our goal is that both types of output can help people read and understand an article faster. Hence, in this section, we evaluate the results of our extractor in Sec. 5.1 and of our unified model in Sec. 5.2. Furthermore, in Sec. 5.3, we perform a human evaluation and show that our model can provide a better abstractive summary than other baselines.

Results of Extracted Sentences

To evaluate whether our extractor obtains enough information for the abstracter, we use full-length ROUGE recall scores 1 between the extracted sentences and the reference abstractive summary. High ROUGE recall scores can be obtained if the extracted sentences include more words or sequences overlapping with the reference abstractive summary. For each article, we select sentences with sentence probability $\beta_n$ greater than 0.5. We show the results of the ground-truth sentence labels (Sec. 3.3) and of our models on the test set of the CNN/Daily Mail dataset in Table 1.

Table 1 (caption fragment): In addition, our model trained end-to-end with inconsistency loss exceeds the lead-3 baseline. All our ROUGE scores have a 95% confidence interval of at most ±0.24. '*' indicates the model is trained and evaluated on the anonymized dataset and thus is not strictly comparable with ours.

Note that the ground-truth extracted sentences cannot reach ROUGE recall scores of 100, because the reference summary is abstractive and may contain some words and sequences that are not in the article.
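For concreteness, the greedy, recall-oriented selection of Sec. 3.3 that produces these ground-truth labels can be sketched as follows. `rouge_l_recall` is a placeholder for any ROUGE-L recall implementation; the sketch follows our reading of the described procedure, under stated assumptions.

```python
def ground_truth_labels(sentences, reference, rouge_l_recall):
    """Sort sentences by individual ROUGE-L recall against the reference
    summary, then greedily keep a sentence only if it increases the
    recall of the whole selected set (Sec. 3.3)."""
    order = sorted(range(len(sentences)),
                   key=lambda n: rouge_l_recall(sentences[n], reference),
                   reverse=True)
    selected, best = [], 0.0
    for n in order:
        candidate = selected + [n]
        text = " ".join(sentences[i] for i in sorted(candidate))
        score = rouge_l_recall(text, reference)
        if score > best:
            selected, best = candidate, score
    return [1 if n in selected else 0 for n in range(len(sentences))]
```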
Our extractor performs the best when end-to-end trained with the inconsistency loss.

Results of Abstractive Summarization

We use full-length ROUGE-1, ROUGE-2 and ROUGE-L F-1 scores to evaluate the generated summaries. We compare our models (two-stage and end-to-end) with state-of-the-art abstractive summarization models (Nallapati et al., 2016b; Paulus et al., 2017; See et al., 2017; Liu et al., 2017) and a strong lead-3 baseline which directly uses the first three article sentences as the summary. Due to the writing style of news articles, the most important information is often written at the beginning of an article, which makes lead-3 a strong baseline. The results in terms of ROUGE F-1 scores are shown in Table 2. We show that, with the help of the extractor, our unified model can outperform the pointer-generator (the third row in Table 2) even with two-stage training (the fifth row in Table 2). After end-to-end training without the inconsistency loss, our method already achieves better ROUGE scores, with the extractor and abstracter cooperating with each other. Moreover, our model end-to-end trained with the inconsistency loss achieves state-of-the-art ROUGE scores and exceeds the lead-3 baseline.

In order to quantify the effect of the inconsistency loss, we design a metric, the inconsistency rate $R_{inc}$, to measure the inconsistency of each generated summary. For each decoder step $t$, if the word with maximum attention belongs to a sentence with low attention (i.e., $\beta_{n(\arg\max(\hat{\alpha}^t))} < \mathrm{mean}(\beta)$), we define this step as an inconsistent step $t_{inc}$. The inconsistency rate $R_{inc}$ is then defined as the percentage of inconsistent steps in the summary:

$$R_{inc} = \frac{\mathrm{Count}(t_{inc})}{T}, \quad (9)$$

where $T$ is the length of the summary. The average inconsistency rates on the test set are shown in Table 4. Our inconsistency loss significantly decreases $R_{inc}$ from about 20% to 4%. An example of the inconsistency improvement is shown in Fig. 5.

Figure 5: Visualizing the consistency between sentence and word attentions on the original article. We highlight word (bold font) and sentence (underline font) attentions. We compare our methods trained with and without inconsistency loss. Inconsistent fragments (see red bold font) occur when trained without the inconsistency loss.

Table 3 (fragment; remaining rows truncated):
Method | informativity | conciseness | readability
DeepRL (Paulus et al., 2017) | 3.23 | 2.97 | 2.85
pointer-generator (See et al., 2017) | 3.18 | 3.36 | 3.47
GAN | 3... (truncated)
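The inconsistency rate of Eq. 9 above reduces to a few lines of code. This is a minimal sketch under our reading of the definition, with illustrative variable names.

```python
import numpy as np

def inconsistency_rate(alpha_hat, beta, n_of_m):
    """Eq. 9: fraction of decoder steps whose most-attended word falls in
    a sentence with below-average sentence attention."""
    n_of_m = np.asarray(n_of_m)
    inconsistent = sum(
        beta[n_of_m[int(np.argmax(alpha_t))]] < beta.mean()
        for alpha_t in alpha_hat          # alpha_hat: T x M attention matrix
    )
    return inconsistent / len(alpha_hat)
```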
Human Evaluation

We perform human evaluation on Amazon Mechanical Turk (MTurk) 2 to evaluate the informativity, conciseness and readability of the summaries. We compare our best model (end-to-end with inconsistency loss) with the pointer-generator (See et al., 2017), the generative adversarial network (Liu et al., 2017) and the deep reinforcement model (Paulus et al., 2017). For these three models, we use the test set outputs provided by the authors 3.

2 https://www.mturk.com/
3 https://github.com/abisee/pointer-generator and https://likicode.com for the first two. For DeepRL, we asked through email.

We randomly pick 100 examples from the test set. All generated summaries are re-capitalized and de-tokenized. Since Paulus et al. (2017) trained their model on anonymized data, we also recover the anonymized entities and numbers in their outputs. We show the article and 6 summaries (the reference summary, 4 generated summaries and a random summary) to each human evaluator. The random summary is a reference summary randomly picked from another article and is used as a trap. We show the instructions for the three different aspects as: (1) Informativity: how well does the summary capture the important parts of the article? (2) Conciseness: is the summary clear enough to explain everything without being redundant? (3) Readability: how well-written (fluent and grammatical) is the summary? The user interface of our human evaluation is shown in the supplementary material. We ask the human evaluators to score each summary on the three aspects from 1 to 5 (the higher the better). We reject all evaluations that score the informativity of the random summary as 3, 4 or 5. By using this trap mechanism, we can ensure a much better quality of our human evaluation. For each example, we first ask 5 human evaluators to evaluate. However, for articles that are too long, which are often skipped by the evaluators, it is hard to collect 5 reliable evaluations; hence, we collect at least 3 evaluations for every example. For each summary, we average the scores over the different human evaluators. The results are shown in Table 3. The reference summaries get the best score on conciseness, since recent abstractive models tend to copy sentences from the input articles. However, our model learns well to select important information and form complete sentences, so we even get slightly better scores on informativity and readability than the reference summaries. We show a typical example comparing our model with other state-of-the-art methods in Fig. 6.

Original article (truncated): A chameleon balances carefully on a branch, waiting calmly for its prey... except that if you look closely, you will see that this picture is not all that it seems. For the 'creature' poised to pounce is not a colourful species of lizard but something altogether more human. Featuring two carefully painted female models, it is a clever piece of sculpture designed to create an amazing illusion. It is the work of Italian artist Johannes Stoetter. Scroll down for video. Can you see us? Italian artist Johannes Stoetter has painted two naked women to look like a chameleon. The 37-year-old has previously transformed his models into frogs and parrots but this may be his most intricate and impressive piece to date. Stoetter daubed water-based body paint on the naked models to create the multicoloured effect, then intertwined them to form the shape of a chameleon. To complete the deception, the models rested on a bench painted to match their skin and held the green branch in the air beneath them. Stoetter can take weeks to plan one of his pieces and hours to paint it. Speaking about The Chameleon, he said: 'I worked about four days to design the motif bigger and paint it with colours. The body painting took me about six hours with the help of an assistant. I covered the hair with natural clay to make the heads look bald.' Camouflage job: A few finishing touches are applied to the two naked models to complete the transformation. 'There are different difficulties on different levels as in every work, but I think that my passion and love to my work is so big, that I figure out a way to deal with difficulties. My main inspirations are nature, my personal life-philosophy, every-day-life and people themselves.' However, the finished result existed only briefly before the models were able to get up and wash the paint off with just a video and some photographs to record it. (...)

Figure 6: Typical comparison. Our model attended to the most important information (blue bold font), matching well with the reference summary, while other state-of-the-art methods generate repeated or less important information (red italic font).
Reference: Johannes Stoetter's artwork features two carefully painted female models. The 37-year-old has previously transformed models into frogs and parrots. Daubed water-based body paint on naked models to create the effect. Completing the deception, models rested on bench painted to match skin.

DeepRL: Italian artist Johannes Stoetter has painted female models to look like a chameleon. The 37-year-old has previously transformed his models into frogs and parrots but this may be the work of Italian artist. He has painted nude models and it is a clever piece of sculpture designed to create an amazing illusion. It is work of artist Johannes Stoetter.

GAN: Italian artist Johannes Stoetter has painted two naked women to look like a chameleon. The 37-year-old has previously transformed his models into frogs and parrots but this may be his most intricate and impressive piece to date.

Pointer-generator: Italian artist Johannes Stoetter has painted two naked women to look like a chameleon. It is the work of Italian artist Johannes Stoetter. Stoetter daubed water-based body paint on the naked models to create the multicoloured effect, then intertwined them to form the shape of a chameleon.

Our unified model (with inconsistency loss): Johannes Stoetter has painted two naked women to look like a chameleon. The 37-year-old has previously transformed his models into frogs and parrots. Stoetter daubed water-based body paint on the naked models to create the multicoloured effect, then intertwined them to form the shape of a chameleon.

More examples (5 using CNN/Daily Mail news articles and 3 using non-news articles as inputs) are provided in the supplementary material.

Conclusion

We propose a unified model combining the strength of extractive and abstractive summarization. Most importantly, a novel inconsistency loss function is introduced to penalize inconsistency between the two levels of attention. The inconsistency loss enables extractive and abstractive summarization to be mutually beneficial. By end-to-end training of our model, we achieve the best ROUGE-recall and ROUGE scores, and the most informative and readable summaries, on the CNN/Daily Mail dataset in a solid human evaluation.

Acknowledgments

We thank the support from Cheetah Mobile, National Taiwan University, and MOST 107-2634-F-007-007, 106-3114-E-007-004, 107-2633-E-002-001. We thank Yun-Zhu Song for assistance with a useful survey and experiments on the task of abstractive summarization.

Table 2: ROUGE F-1 scores of the generated abstractive summaries on the CNN/Daily Mail test set. Our two-stage model outperforms the pointer-generator model on ROUGE-1 and ROUGE-2.

Table 3: Comparing human evaluation results with state-of-the-art methods.

Table 4: Inconsistency rate of our end-to-end trained model with and without inconsistency loss.
Method | avg. $R_{inc}$
w/o incon. loss | 0.198
w/ incon. loss | 0.042

Figure 5 example passage (the figure shows it twice, with and without inconsistency loss; the attention highlighting is not recoverable in plain text): If that was a tornado, it was one monster of one. Luckily, so far it looks like no one was hurt. With tornadoes touching down near Dallas on Sunday, Ryan Shepard snapped a photo of a black cloud formation reaching down to the ground. He said it was a tornado. It wouldn't be an exaggeration to say it looked half a mile wide. More like a mile, said Jamie Moore, head of emergency management in Johnson County, Texas. It could have been one the National Weather Service warned about in a tweet as severe thunderstorms drenched the area, causing street flooding. (...)

1 All our ROUGE scores are reported by the official ROUGE script. We use the pyrouge package. https://pypi.org/project/pyrouge/0.1.3/
References

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. In Proceedings of the 2015 International Conference on Learning Representations.

Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, and Hui Jiang. 2016. Distraction-based neural networks for modeling documents.
In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16).

Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 484-494.

John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159.

Angela Fan, David Grangier, and Michael Auli. 2017. Controllable abstractive summarization. arXiv preprint arXiv:1711.05217.

Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1631-1640.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693-1701.

Mikael Kågebäck, Olof Mogren, Nina Tahmasebi, and Devdatt Dubhashi. 2014. Extractive summarization using continuous vector space models. In Proceedings of the 2nd Workshop on Continuous Vector Space Models and their Compositionality (CVSC), pages 31-39.

Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out.

Linqing Liu, Yao Lu, Min Yang, Qiang Qu, Jia Zhu, and Hongyan Li. 2017. Generative adversarial network for abstractive text summarization. In Proceedings of the 2018 Association for the Advancement of Artificial Intelligence.
Yishu Miao and Phil Blunsom. 2016. Language as a latent variable: Discrete generative models for sentence compression. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 319-328.

Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In Proceedings of the 2017 Association for the Advancement of Artificial Intelligence, pages 3075-3081.

Ramesh Nallapati, Bowen Zhou, and Mingbo Ma. 2016a. Classify or select: Neural architectures for extractive document summarization. arXiv preprint arXiv:1611.04244.

Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016b. Abstractive text summarization using sequence-to-sequence rnns and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 280-290.

Shashi Narayan, Nikos Papasarantopoulos, Mirella Lapata, and Shay B Cohen. 2017. Neural extractive summarization with side information. arXiv preprint arXiv:1704.04530.

Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. In Proceedings of the 2018 International Conference on Learning Representations.

Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732.

Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization.
In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379-389.

Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1073-1083.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010.

Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems, pages 2692-2700.

Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480-1489.

Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Pareek, Krishnan Srinivasan, and Dragomir Radev. 2017. Graph-based neural multi-document summarization. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 452-462.

Wenpeng Yin and Yulong Pei. 2015. Optimizing sentence modeling and selection for document summarization. In Proceedings of the 24th International Joint Conference on Artificial Intelligence, pages 1383-1389. AAAI Press.
1,462,388
LeXFlow: a System for Cross-fertilization of Computational Lexicons
This demo presents LeXFlow, a workflow management system for cross-fertilization of computational lexicons. Borrowing from techniques used in the domain of document workflows, we model the activity of lexicon management as a set of workflow types, where lexical entries move across agents in the process of being dynamically updated. A prototype of LeXFlow has been implemented with extensive use of XML technologies (XSLT, XPath, XForms, SVG) and open-source tools (Cocoon, Tomcat, MySQL). LeXFlow is a web-based application that enables the cooperative and distributed management of computational lexicons.
[]
LeXFlow: a System for Cross-fertilization of Computational Lexicons
Association for Computational Linguistics. Copyright Association for Computational Linguistics, July 2006.
Maurizio Tesconi maurizio.tesconi@iit.cnr.it CNR-IIT, Via Moruzzi 1, 56024 Pisa, Italy
Andrea Marchetti andrea.marchetti@iit.cnr.it CNR-IIT, Via Moruzzi 1, 56024 Pisa, Italy
Francesca Bertagna francesca.bertagna@ilc.cnr.it CNR-ILC, Via Moruzzi 1, 56024 Pisa, Italy
Monica Monachini monica.monachini@ilc.cnr.it CNR-ILC, Via Moruzzi 1, 56024 Pisa, Italy
Claudia Soria claudia.soria@ilc.cnr.it CNR-ILC, Via Moruzzi 1, 56024 Pisa, Italy
Nicoletta Calzolari nicoletta.calzolari@ilc.cnr.it CNR-ILC, Via Moruzzi 1, 56024 Pisa, Italy
In Proceedings of the COLING/ACL 2006 Interactive Presentation Sessions, Sydney, Association for Computational Linguistics, July 2006.

This demo presents LeXFlow, a workflow management system for cross-fertilization of computational lexicons. Borrowing from techniques used in the domain of document workflows, we model the activity of lexicon management as a set of workflow types, where lexical entries move across agents in the process of being dynamically updated. A prototype of LeXFlow has been implemented with extensive use of XML technologies (XSLT, XPath, XForms, SVG) and open-source tools (Cocoon, Tomcat, MySQL). LeXFlow is a web-based application that enables the cooperative and distributed management of computational lexicons.

Introduction

LeXFlow is a workflow management system aimed at enabling the semi-automatic management of computational lexicons. By management we mean not only the creation, population and validation of lexical entries but also the integration and enrichment of different lexicons. A lexicon can be enriched by resorting to automatically acquired information, for instance by means of an application extracting information from corpora. But a lexicon can also be enriched by resorting to the information available in another lexicon, which may happen to encode different types of information, or information at different levels of granularity. LeXFlow intends to address the request by the computational lexicon community for a change in perspective on computational lexicons: from static resources towards dynamically configurable multi-source entities, where the content of lexical entries is dynamically modified and updated on the basis of the integration of knowledge coming from different sources (indifferently represented by human actors, other lexical resources, or applications for the automatic extraction of lexical information from texts). This scenario has at least two strictly related prerequisites: i) existing lexicons have to be available in or be mappable to a standard form enabling the overcoming of their respective differences and idiosyncrasies, thus making their mutual comprehensibility a reality; ii) an architectural framework should be used for the effective and practical management of lexicons, by providing the communicative channel through which lexicons can really communicate and share the information encoded therein. For the first point, standardization issues obviously play the central role. Important and extensive efforts have been and are being made towards the extension and integration of existing and emerging open lexical and terminological standards and best practices, such as EAGLES, ISLE, TEI, OLIF, Martif (ISO 12200), Data Categories (ISO 12620), ISO/TC37/SC4, and LIRICS.
An important achievement in this respect is the MILE, a meta-entry for the encoding of multilingual lexical information; in our approach we have embraced the MILE model. As far as the second point is concerned, some initial steps have been made to realize frameworks enabling inter-lexica access, search, integration and operability. Nevertheless, the general impression is that little has been done towards the development of new methods and techniques for the concrete interoperability among lexical and textual resources. The intent of LeXFlow is to fill in this gap.

LeXFlow Design and Application

LeXFlow is conceived as a metaphoric extension and adaptation to computational lexicons of XFlow, a framework for the management of document workflows (DW, Marchetti et al., 2005). A DW can be seen as a process of cooperative authoring where the document can be the goal of the process or just a side effect of the cooperation. Through a DW, a document life-cycle is tracked and supervised, continually providing control over the actions leading to document compilation. In this environment a document travels among agents who essentially carry out the pipeline receive-process-send activity. Each lexical entry can be modelled as a document instance (formally represented as an XML representation of the MILE lexical entry), whose behaviour can be formally specified by means of a document workflow type (DWT) where different agents, with clear-cut roles and responsibilities, act over different portions of the same entry by performing different tasks. Two types of agents are envisaged: external agents are human or software actors which perform activities dependent on the particular DWT, and internal agents are software actors providing general-purpose activities useful for any DWT and, for this reason, implemented directly into the system. Internal agents perform general functionalities such as creating/converting a document belonging to a particular DWT, populating it with some initial data, duplicating a document to be sent to multiple agents, splitting a document and sending portions of information to different agents, merging duplicated documents coming from multiple agents, aggregating fragments, and finally terminating operations over the document. An external agent executes some processing using the document content and possibly other data, e.g. updates the document by inserting the results of the preceding processing, signs the update and finally sends the document to the next agent(s). The state diagram in Figure 1 describes the different states of the document instances. At the starting point of the document life cycle there is a creation phase, in which the system raises a new instance of a document with information attached. The document instance goes into the pending state. When an agent gets the document, it goes into the processing state, in which the agent compiles the parts under his/her responsibility. If the agent, for some reason, does not complete the instance elaboration, he can save the work performed until that moment and the document instance goes into the freezing state. If the elaboration is completed (submitted), or cancelled, the instance goes back into the pending state, waiting for a new elaboration. Borrowing from techniques used in DWs, we have modelled the activity of lexicon management as a set of DWTs, where lexical entries move across agents and become dynamically updated.
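The document life-cycle just described can be sketched as a small state machine. The class and method names below are our own illustration of the states and transitions of Figure 1, not code from the LeXFlow prototype.

```python
class DocumentInstance:
    """Hypothetical sketch of the document instance life-cycle."""
    TRANSITIONS = {
        "created":    {"pending"},
        "pending":    {"processing"},
        "processing": {"freezing", "pending"},  # save -> freezing; submit/cancel -> pending
        "freezing":   {"processing"},
    }

    def __init__(self):
        self.state = "created"
        self._move("pending")        # the creation phase ends in pending

    def _move(self, new_state):
        if new_state not in self.TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

    def take(self):   self._move("processing")  # an agent gets the document
    def save(self):   self._move("freezing")    # elaboration not completed
    def resume(self): self._move("processing")
    def submit(self): self._move("pending")     # completed, waits for next agent
    def cancel(self): self._move("pending")
```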
Lexical Workflow General Architecture

As already written, LeXFlow is based on XFlow, which is composed of three parts: i) the Agent Environment, i.e. the agents participating in all DWs; ii) the Data, i.e. the DW descriptions plus the documents created by the DWs; and iii) the Engine. Figure 2 illustrates the architecture of the framework.

Figure 2. General Architecture.

The DW environment is the set of human and software agents participating in at least one DW. The description of a DW can be seen as an extension of the XML document class. A class of documents, created in a DW, shares the schema of their structure, as well as the definition of the procedural rules driving the DWT and the list of the agents attending to it. Therefore, in order to describe a DWT, we need four components: • a schema of the documents involved in the DWT; • the agent roles chart, i.e. the set of the external and internal agents operating on the document flow. Inside the roles chart these agents are organized in roles and groups in order to define who has access to the document. This component constitutes the DW environment; • a document interface description used by external agents to access the documents. This component also allows checking access permissions to the document; • a document workflow description defining all the paths that a document can follow in its life-cycle, and the activities and policies for each role. The document workflow engine constitutes the run-time support for the DW: it implements the internal agents, the support for agents' activities, and some system modules that the external agents have to use to interact with the DW system. Also, the engine is responsible for two kinds of documents useful for each document flow: the document system logs and the document system metadata.

The Lexicon Augmentation Workflow Type

In this section we present a first DWT, called "lexicon augmentation", for the dynamic augmentation of semantic MILE-compliant lexicons. This DWT corresponds to the scenario where an entry of a lexicon A becomes enriched via basically two steps. First, by virtue of being mapped onto a corresponding entry belonging to a lexicon B, the entry (A) inherits the semantic relations available in the mapped entry (B). Second, by resorting to an automatic application that acquires information about semantic relations from corpora, the acquired relations are integrated into the entry and proposed to the human encoder. In order to test the system we considered the Simple/Clips (Ruimy et al., 2003) and ItalWordNet (Roventini et al., 2003) lexicons. An overall picture of the flow is shown in Figure 3, illustrating the different agents participating in the flow. Rectangles represent human actors over the entries, while the other figures symbolize software agents: ovals are internal agents and octagons external ones. The functionalities offered to human agents are: display of MILE-encoded lexical entries, selection of lexical entries, mapping between lexical entries belonging to different lexicons 1, automatic calculation of new semantic relations (either automatically derived from corpora or mutually inferred from the mapping) and manual verification of the newly proposed semantic relations.

Implementation Overview

Our system is currently implemented as a web-based application where the human external agents interact with the system through a web browser. All the human external agents attending the different document workflows are the users of the system.
Once authenticated through username and password, the user accesses his workload area, where the system lists all his pending documents (i.e. entries) sorted by type of flow. The system shows only the flows to which the user has access. From the workload area the user can perform several operations, such as: selecting and processing a pending document; creating a new document; displaying a graph representing the DW of a previously created document; and highlighting the current position of the document. This information is rendered as an SVG (Scalable Vector Graphics) image. Figure 5 illustrates the overall implementation of the system.

The Client Side: External Agent Interaction

The form used to process the documents is rendered with XForms. Using XForms, a browser can communicate with the server through XML documents and is capable of displaying the document with a user interface that can be defined for each type of document. A browser with XForms capabilities will receive an XML document that will be displayed according to the specified template; then it will let the user edit the document and finally it will send the modified document to the server.

The Server Side

The server side is implemented with Apache Tomcat, Apache Cocoon and MySQL. Tomcat is used as the web server, authentication module (when the communication between the server and the client needs to be encrypted) and servlet container. Cocoon is a publishing framework that uses the power of XML. The entire functioning of Cocoon is based on one key concept: component pipelines. The pipeline connotes a series of events, which consists of taking a request as input, processing and transforming it, and then giving the desired response. MySQL is used for storing and retrieving the documents and the status of the documents. Each software agent is implemented as a web service and the WSDL language is used to define its interface.

Figure 1. Document State Diagram.
Figure 3. Lexicon augmentation workflow.
Figure 4. LeXFlow User Activity State Diagram.

1 We hypothesize a human agent, but the same role could be performed by a software agent. To this end, we are investigating the possibility of automatically exploiting the procedure described in (Ruimy and Roventini, 2005).

References

Nicoletta Calzolari, Francesca Bertagna, Alessandro Lenci and Monica Monachini, editors. 2003. Standards and Best Practice for Multilingual Computational Lexicons. MILE (the Multilingual ISLE Lexical Entry). ISLE Deliverable D2.2 & 3.2. Pisa.

Andrea Marchetti, Maurizio Tesconi, and Salvatore Minutoli. 2005. XFlow: An XML-Based Document-Centric Workflow. In Proceedings of WISE'05, pages 290-303, New York, NY, USA.

Adriana Roventini, Antonietta Alonge, Francesca Bertagna, Nicoletta Calzolari, Christian Girardi, Bernardo Magnini, Rita Marinelli, and Antonio Zampolli. 2003.
ItalWordNet: Building a Large Semantic Database for the Automatic Treatment of Italian. In Antonio Zampolli, Nicoletta Calzolari, and Laura Cignoni, editors, Computational Linguistics in Pisa, Istituto Editoriale e Poligrafico Internazionale, Pisa-Roma, pages 745-791.

Nilda Ruimy, Monica Monachini, Elisabetta Gola, Nicoletta Calzolari, Cristina Del Fiorentino, Marisa Ulivieri, and Sergio Rossi. 2003. A Computational Semantic Lexicon of Italian: SIMPLE. In Antonio Zampolli, Nicoletta Calzolari, and Laura Cignoni, editors, Computational Linguistics in Pisa, Istituto Editoriale e Poligrafico Internazionale, Pisa-Roma, pages 821-864.

Nilda Ruimy and Adriana Roventini. 2005. Towards the linking of two electronic lexical databases of Italian. In Proceedings of L&T'05 - Language Technologies as a Challenge for Computer Science and Linguistics, pages 230-234, Poznan, Poland.

Figure 5. Overall System Implementation.
28,841,202
Music, Language, and Computational Modeling: Lessons from the Key-Finding Problem
Recent work in computational music research, including my own, has been greatly influenced by methods in computational linguistics. But I believe the influence could also go the other way: Music may offer some interesting lessons for language research, particularly with regard to the modeling of cognition.
[]
Music, Language, and Computational Modeling: Lessons from the Key-Finding Problem David Temperley dtemperley@esm.rochester.edu Eastman School of Music, University of Rochester, 26 Gibbs St., Rochester, NY 14604 Music, Language, and Computational Modeling: Lessons from the Key-Finding Problem Recent work in computational music research, including my own, has been greatly influenced by methods in computational linguistics. But I believe the influence could also go the other way: Music may offer some interesting lessons for language research, particularly with regard to the modeling of cognition. In this talk I will focus on an important problem in music cognition: the problem of key identification. I will argue that this problem is in some ways analogous to the problem of syntactic parsing in language. I will present a simple Bayesian model that performs well at the key-finding task. I will then consider some implications of the model for other issues. The model represents moment-to-moment changes in key over time and captures "reanalysis" effects in key perception. The model can be used to estimate the tonal ambiguity of a musical passage, and can also be used to estimate the probability of note patterns (just as a probabilistic grammar can be used to estimate the probability of word strings). An interesting question here concerns expectation: In forming expectations for the next surface element (note or word), do we consider all possible structures (syntactic structures or keys) or just the most probable one? Finally, the model sheds light on the concept of "information flow." It has been suggested that language reflects a tendency towards uniform density of information, in that less probable elements are spread out or elongated; I will suggest that the same may be true in music. Slides for the talk will be available at my website, <www.theory.esm.rochester.edu/temperley>.
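As a rough illustration of the kind of Bayesian key-finding described here (not Temperley's actual model), one can score each key by the likelihood of the observed pitch classes under a key profile. The profile values below are invented placeholders; a real model would use empirically derived distributions.

```python
import math

# Invented, illustrative major-key profile P(pitch class | key), indexed
# relative to the tonic; scale degrees get high mass, chromatic notes low.
MAJOR_PROFILE = [0.18, 0.01, 0.10, 0.01, 0.13, 0.10,
                 0.01, 0.17, 0.01, 0.09, 0.01, 0.18]

def key_scores(pitch_classes):
    """Score each of the 12 major keys by log P(notes | key) + log P(key),
    assuming a uniform prior over keys and conditionally independent notes."""
    return {
        tonic: sum(math.log(MAJOR_PROFILE[(pc - tonic) % 12])
                   for pc in pitch_classes) + math.log(1 / 12)
        for tonic in range(12)
    }

notes = [0, 4, 7, 0, 2, 4, 5, 7]        # pitch classes of a C-major-ish melody
scores = key_scores(notes)
best_key = max(scores, key=scores.get)  # 0, i.e. C, under this toy profile
```

Extending such a sketch to moment-to-moment key changes would amount to tracking these scores over a sliding window, with a transition penalty for changing key.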
12,066,739
SPARSAR: An Expressive Poetry Reader
We present SPARSAR, a system for the automatic analysis of poetry (and text) style which makes use of NLP tools like tokenizers, sentence splitters, NER (Named Entity Recognition) tools, and taggers. In addition, the system adds syntactic and semantic structural analysis and prosodic modeling. We do a dependency mapping to analyse the verbal complex and determine Discourse Structure. Another important component of the system is a phonological parser to account for OOVWs (out-of-vocabulary words) in the process of grapheme-to-phoneme conversion of the poem. We also measure the prosody of the poem by associating mean durational values in msecs to each syllable from a database of syllable durations; to account for missing syllables we built a syllable parser with the aim of evaluating durational values for any possible syllable structure. A fundamental component for the production of emotions is the one that performs affective and sentiment analysis. This is done on a line-by-line basis. Lines associated to specific emotions are then marked to be pronounced with special care by the final module of the system, which is responsible for the production of expressive reading by a TTS module, in our case the one made available by Apple on their computers. Expressive reading is allowed by the possibility to interact with the TTS.
[ 1260035 ]
SPARSAR: An Expressive Poetry Reader
Rodolfo Delmonte delmont@unive.it Department of Language Studies, Department of Computer Science, Ca' Foscari University, 30123 Venezia, Italy
Anton Maria Prati Department of Language Studies, Department of Computer Science, Ca' Foscari University, 30123 Venezia, Italy
In Proceedings of the Demonstrations at the 14th Conference of the European Chapter of the Association for Computational Linguistics, Gothenburg, Sweden, April 26-30.

We present SPARSAR, a system for poetry (and text) style analysis by means of parameters derived from deep poem (and text) analysis. We use our system for deep text understanding, called VENSES (XXX, 2005), for that aim. SPARSAR (XXX, 2013a) works on top of the output provided by VENSES and is organized in three main modules, which can also be used to analyse similarities between pairs of poems by the same or different poets and similarities between collections of poems by a pair of poets. In addition to what is usually needed to compute text-level semantic and pragmatic features, poetry introduces a number of additional layers of meaning by means of metrical and rhyming devices. For these reasons more computation is required in order to assess and evaluate the level of complexity that a poem objectively contains. We use prosodic durational parameters from a database of English syllables we produced for a prosodic speech recognizer (XXX, 1990). These parameters are used to evaluate the objective presumed syllable and foot prosodic distribution at line level. The sum of all of these data is then used to create a parameterized version of the poem to be read by a TTS with appropriate expressivity. Expressive reading is generated by combining syntactic, semantic, lexical and prosodic information. It is a well-known fact that TTS systems are unable to produce utterances with appropriate prosody (van Santen et al., 2003) 1.
Besides the general problems related to TTS reading of normal texts, when a poem is input to the TTS the result is worsened by the internal rules, which treat stanza boundaries as sentence delimiters. So whenever a sentence continues from one stanza to the next, as with enjambements, the TTS will not be able to see it and will produce a long pause. The TTS is also blind to line boundaries. More importantly, the TTS reads every sentence with the same tone, thus producing an unpleasantly monotonous effect that does not correspond to the content being read. This is why sentiment analysis can be of help, together with semantic processing at discourse level. As regards affective or emotional reading, then, the prosody of current TTS systems is neutral and generally uses flat intonation contours. Producing "expressive" prosody requires modifying rhythm, stress patterns and intonation, as described in section 4 (see Kao & Jurafsky, 2012). The paper is organized as follows: here below a subsection contains a short state of the art, limited though to the latest publications; section 2 shortly presents SPARSAR; section 3 is dedicated to Prosody, Rhyming and Metrical Structure; a short state of the art of expressive reading is presented in section 4, which is devoted to Text-To-Speech and parameter induction from the analysis. Finally we present an evaluation, a conclusion and work for the future. SPARSAR [8] produces a deep analysis of each poem at different levels: it works at sentence level at first, then at line level and finally at stanza level. The structure of the system is organized as follows: at first, syntactic, semantic and grammatical functions are evaluated. Then the poem is translated into a phonetic form preserving its visual structure and its subdivision into lines and stanzas. Phonetically translated words are associated with mean duration values, taking into account position in the word and stress. Taking into account syntactic and semantic information, we then proceed to "demote" the word stress of dependent or functional words. At the end of the analysis of the poem, the system can measure the following parameters: mean verse length in milliseconds and in number of feet; the latter is derived from a line and stanza representation of metrical structure (more on this topic below; a small sketch of the duration bookkeeping follows this paragraph). Another important component of the analysis of rhythm is the algorithm that measures and evaluates rhyme schemes at stanza level and then the overall rhyming structure at poem level. As regards syntax, we build chunks and dependency structures. To complete our work, we introduce semantics at two levels. On the one hand, we isolate the verbal complex in order to verify propositional properties, like presence of negation, computing factuality from a cross-check with modality, aspectuality (which we derive from our lexica) and tense. We also classify referring expressions by distinguishing concrete from abstract nouns, and by separating highly ambiguous from singleton concepts (based on the number of possible meanings in WordNet and other similar repositories). Finally, we carry out a sentiment analysis of every poem, yielding a three-way classification (neutral, negative, positive) that can be used as a powerful tool for expressive purposes.
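As a minimal illustration of the duration bookkeeping just described, the following sketch sums mean syllable durations over a line and demotes the syllables of functional words. The table entries, the fallback value and the demotion factor are all illustrative assumptions, not the actual VESD values:

```python
# Illustrative sketch: mean verse length in milliseconds from per-syllable
# mean durations. SYLLABLE_MS is a tiny stand-in for the VESD database;
# all numbers here are invented for the example.
SYLLABLE_MS = {"shall": 230, "i": 120, "com": 180, "pare": 310, "thee": 250}

def line_duration_ms(line_syllables, demoted=frozenset()):
    """Sum mean durations over a line; syllables belonging to dependent
    or functional words are 'demoted' (shortened)."""
    total = 0.0
    for syl in line_syllables:
        dur = SYLLABLE_MS.get(syl, 200.0)   # fallback for missing syllables
        if syl in demoted:
            dur *= 0.8                      # stress demotion; factor is a guess
        total += dur
    return total

print(line_duration_ms(["shall", "i", "com", "pare", "thee"], demoted={"i"}))
```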
Rhetoric Devices, Metrical and Prosodic Structure The second module takes care of rhetorical devices, metrical structure and prosodic structure. This time the file is read on a line-by-line basis, by simply collecting strings in a sequence and splitting lines at each newline character. In a subsequent loop, whenever two newline characters are met, a stanza is computed. In order to compute rhetorical and prosodic structure we need to transform each word into its phonetic counterpart by accessing the transcriptions available in the CMU dictionary. The Carnegie Mellon Pronouncing Dictionary is freely available online and includes American English pronunciation 2 . We also had available a syllable parser, which was used to build the VESD database of English syllables (XXX, 1999a) (Venice English Syllable Database) for the Prosodic Module of SLIM, a system for prosodic self-learning activities (XXX, 2010); we use it whenever our pronunciation dictionary, which covers some 170,000 entries, fails. Remaining problems are related to ambiguous homographs like "import" (verb) and "import" (noun), which are treated on the basis of their lexical category derived from previous tagging, and to Out-Of-Vocabulary Words (OOVWs). If a word is not found in the dictionary, we try different capitalizations, as well as breaking apart hyphenated words, and then we check with simple heuristics for differences in spelling determined by British vs. American pronunciation. Then we proceed by morphological decomposition, splitting at first the word from its prefix and, if that still does not work, from its derivational suffix. As a last resort, we use an orthographically based version of the same dictionary to match the longest possible string coinciding with our OOVW. Some words we had to reconstruct are: wayfare, gangrened, krog, copperplate, splendor, filmy, seraphic, unstarred, shrive, slipstream, fossicking, unplotted, corpuscle, thither, wraiths, etc. In some cases, the problem that made the system fail was a syllable that was not available in our database of syllable durations, VESD 3 . We cope with this by launching the syllable parser and then computing durations from the component phonemes, or from the closest similar syllable available in the database. We only had to add 12 new syllables for the set of approximately 500 poems that we used to test the system. Computing Metrical Structure and Rhyming Scheme Any poem can be characterized by its rhythm, which is also revealing of the poet's peculiar style. In turn, the poem's rhythm is based mainly on two elements: meter, that is, the distribution of stressed and unstressed syllables in the verse; and the presence of rhyming and other poetic devices like alliteration, assonance, consonance, enjambements, etc., which contribute to poetic form at stanza level. We follow Hayward (1991) in marking a poetic foot by a numerical sequence that is an alternation of 0/1: "0" for unstressed and "1" for stressed syllables. The sequence of these signs makes up the foot and, depending on the alternation pattern and the number of syllables per foot, one can speak of iambic, trochaic, anapestic, dactylic, etc. poetic style (a small sketch of this classification is given below). But then we deepen our analysis by considering stanzas as structural units in which rhyming plays an essential role. Secondly, we implement a prosodic acoustic measure to get a precise definition of rhythm. Syllables are not just any combination of sounds, and their internal structure is fundamental to the nature of the poetic rhythm that will ensue. The use of duration has allowed our system to produce the model of a poetry reader that we implement by speech synthesis. To this aim we assume that syllable acoustic identity changes as a function of three parameters: internal structure in terms of onset and rhyme, characterized by the number of consonants, consonant clusters, and vowel or diphthong; position in the word, whether beginning, middle or end; and primary stress, secondary stress or unstressed.
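To make the 0/1 foot marking concrete, here is a small sketch that classifies a line's stress sequence by foot type. The pattern inventory follows the standard definitions named above, while the majority-vote heuristic for mixed lines is our own simplification:

```python
# Sketch: classify a line of Hayward-style 0/1 stress marks by foot type.
# FOOT_PATTERNS follows the standard definitions; resolving mixed lines by
# majority vote is a simplifying assumption of this sketch.
FOOT_PATTERNS = {"01": "iamb", "10": "trochee", "001": "anapest", "100": "dactyl"}

def classify_feet(stress, foot_len=2):
    """Split a stress string like '0101010101' into feet and name them."""
    feet = [stress[i:i + foot_len] for i in range(0, len(stress), foot_len)]
    return [FOOT_PATTERNS.get(f, "irregular") for f in feet]

def dominant_style(stress):
    best = None
    for size in (2, 3):                        # try binary and ternary feet
        names = classify_feet(stress, size)
        regular = [n for n in names if n != "irregular"]
        if best is None or len(regular) > best[0]:
            best = (len(regular), max(set(names), key=names.count))
    return best[1]

print(dominant_style("0101010101"))            # -> iamb (an iambic pentameter line)
```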
TTS and Modeling Poetry Reading The other important part of the work regards using the previous analyses to produce intelligible, correct, appropriate and possibly pleasant or catchy poetry reading by a Text-To-Speech system. In fact, the intention was more ambitious: producing an "expressive" reading of a poem in the sense also intended by the work reported in Ovesdotter Alm & Sproat (2005), Ovesdotter Alm (2005) and Scherer (2003). In Ovesdotter Alm & Sproat (2005), the authors present work on fairy tales, intended to use positive vs. negative classification of sentences to produce a better reading. To that aim they used a machine learning approach, based on the manual annotation of some 185 children's stories 4 . They report accuracy results around 63% and F-scores around 70%, which they explain may be due to a very low inter-annotator agreement, and to the fact that the dataset was too small. In Ovesdotter Alm (2005), the author presents work on the perception of emotion, based again on fairy tales read by human readers. The experiment had the goal of checking the validity of the association of acoustic parameters with emotion types. Global acoustic features included F0, intensity, speech rate in number of words, feet and syllables per minute, and fluency, i.e. the number of pauses or silences. The results show some contradictory data for the ANGRY state, but fully compliant data for HAPPY 5 . These data must be regarded as tendencies and are confirmed by experiments reported also in Scherer (2003) and Schröder (2001). However, it must be underlined that all researchers confirm the importance of semantic content, that is, of meaning as a means for transmitting affective states. The TTS we are referring to is the one freely available under Mac OS X on Apple's devices. In fact, the output of our system can be used to record .wav or .mpeg files that can then be played by any sound player program. The information made available by the system is sufficiently deep to allow the Mac TTS interactive program to adapt the text to be read and model it accurately. 4 Features used to learn to distinguish "emotional" from "neutral" sentences include (ibid., 582): first sentence in the story; direct speech; thematic story type (animal tale, ordinary folk-tale, jokes and anecdotes); interrogative and exclamative punctuation marks; sentence length in words; ranges of story progress; percent of semantic words (JJ, N, V, RB); V count in sentence, excluding participles; positive and negative words; WordNet emotion words; interjections and affective words; content BOW: N, V, JJ, RB words by POS. 5 In particular, "angry" was associated with "decreased F0" and "decreased speech rate", but also increased "pausing". On the contrary, "happy" showed "increased F0, intensity, pausing" but a "decreased speech rate". "Happy" is similar to "surprised", while "angry" is similar to "sad".
We used the internal commands, which can substantially modify the way the text is read. The voices now available are pleasant and highly intelligible. We produced a set of rules that take into account a number of essential variables and parameters to be introduced in the file to be read. Parameters that can be modified include: Duration, as Speaking Rate; Intonation, from a first marked word to a Reset mark; Silence, introduced as a durational value; Emphasis at word level, increasing Pitch; Volume, from a first marked word to a Reset mark, increasing intensity (a sketch of how such parameters can be realized follows this section). We discovered that Apple's TTS makes mistakes when reading some specific words, which we then had to input to the system in a phonetic format, using the TUNE modality. The rules address the following information: -the title -the first and last line of the poem -a word is one of the phonetically spelled-out words -a word is the last word of a sentence and is followed by an exclamation/interrogative mark -a word is a syntactic head (either at constituency or dependency level) -a word is a quantifier, or marks the beginning of a quantified expression -a word is a SUBJect head -a word marks the end of a line and is (not) followed by punctuation -a word is the first word of a line, coincides with a new stanza and is preceded by punctuation -a line is part of a sentence which is a frozen or formulaic expression with specific, specifically encoded pragmatic content -a line is part of a sentence that introduces a new Topic, a Change, or Foreground Relevance as computed by semantics and discourse relations -a line is part of a sentence that is dependent in Discourse Structure and whose Move is Down or Same Level -a discourse marker indicates the beginning of a subordinate clause Evaluation, Conclusion and Future Work We have done a manual evaluation by analysing a randomly chosen sample of 50 poems out of the 500 analysed by the system. The evaluation has been made by a secondary school teacher of English literature, expert in poetry 6 . We asked the teacher to verify the following four levels of analysis: 1. phonetic translation; 2. syllable division; 3. feet grouping; 4. metrical rhyming structure. Results show an overall error rate of around 5% across the four levels of analysis. A first prototype was presented in (XXX, 2013a), and improvements have been made since then; but more work is needed to tune prosodic parameters for expressivity rendering, both at the intonational and at the rhythmic level. The most complex element to control seems to be variation in discourse structure, which is responsible for continuation intonational patterns vs. the beginning of a new contour. 1 As he puts it, "The wrong words are emphasized, phrase boundaries are not appropriately indicated, and there is no prosodic structure for longer stretches of speech. As a result, comprehension is difficult and the overall listening experience is disconcerting…" (ibid., 1657). 2 It is available online at <http://www.speech.cs.cmu.edu/cgi-bin/cmudict/>. 3 In VESD, syllables have been collected from WSJCAM, the Cambridge version of the continuous speech recognition corpus produced from the Wall Street Journal, distributed by the Linguistic Data Consortium (LDC). We worked on a subset of 4,165 sentences, with 70,694 words, which constitute half of the total number of words in the corpus, amounting to 133,080. We ended up with 113,282 syllables and 287,734 phones. The final typology is made up of 44 phones, 4,393 syllable types and 11,712 word types. From word-level and phoneme-level transcriptions we produced syllables automatically by means of a syllable parser. The result was then checked manually. 6 I here acknowledge the contribution of XXX and thank her for the effort.
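As a concrete illustration of how the parameters above can be rendered, the sketch below wraps a poem line in Apple embedded speech commands and hands it to the macOS say command. The command names are Apple's; the trigger conditions and numeric values are our assumptions, not the system's actual rule set:

```python
# Sketch: annotate one poem line with Apple embedded speech commands
# ([[rate]], [[slnc]], [[emph +]], [[volm]], [[rset]]) before reading it
# with the macOS `say` command. Values and conditions are illustrative.
import subprocess

def annotate_line(line, sentiment="neutral", stanza_final=False, heads=frozenset()):
    # Emphasize syntactic heads at word level.
    words = [f"[[emph +]]{w}[[emph -]]" if w in heads else w for w in line.split()]
    text = " ".join(words)
    if sentiment == "positive":
        text = "[[rate 190]][[volm 0.8]] " + text + " [[rset 0]]"   # brighter, faster
    elif sentiment == "negative":
        text = "[[rate 140]][[volm 0.5]] " + text + " [[rset 0]]"   # slower, softer
    pause = 800 if stanza_final else 300      # longer silence at stanza boundaries
    return text + f" [[slnc {pause}]]"

line = annotate_line("Shall I compare thee to a summer's day?",
                     sentiment="positive", heads={"compare"})
subprocess.run(["say", line])                 # macOS only
```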
XXX. 1999. "Prosodic Modeling for Syllable Structures from the VESD - Venice English Syllable Database", in Atti 9° Convegno GFS-AIA, Venezia, 161-168.
XXX. 2008. "Speech Synthesis for Language Tutoring Systems", in V. Melissa Holland & F. Pete Fisher (eds.), The Path of Speech Technologies in Computer Assisted Language Learning, Routledge - Taylor and Francis Group, New York, 123-150.
XXX. 2010. "Prosodic tools for language learning", International Journal of Speech Technology, 12(4):161-184.
XXX. 2013a. "SPARSAR: a System for Poetry Automatic Rhythm and Style AnalyzeR", SLATE 2013, Demonstration Track.
XXX. 2005. "VENSES - a Linguistically-Based System for Semantic Evaluation", in J. Quiñonero-Candela et al. (eds.), Machine Learning Challenges, LNCS, Springer, Berlin, 344-371.
M. Hayward. 1991. "A connectionist model of poetic meter", Poetics, 20, 303-317.
Justine Kao and Dan Jurafsky. 2012. "A Computational Analysis of Style, Affect, and Imagery in Contemporary Poetry", in Proc. NAACL Workshop on Computational Linguistics for Literature.
Cecilia Ovesdotter Alm and Richard Sproat. 2005. "Emotional sequencing and development in fairy tales", in Proceedings of the First International Conference on Affective Computing and Intelligent Interaction, ACII '05.
Cecilia Ovesdotter Alm. 2005. "Emotions from text: Machine learning for text-based emotion prediction", in Proceedings of HLT/EMNLP, 347-354.
Jan van Santen, Lois Black, Gilead Cohen, Alexander Kain, Esther Klabbers, Taniya Mishra, Jacques de Villiers, and Xiaochuan Niu. 2003. "Applications of Computer Generated Expressive Speech for Communication Disorders", in Proc. Eurospeech, Geneva, 1657-1660.
K. R. Scherer. 2003. "Vocal communication of emotions: a review of research paradigms", Speech Communication, 40(1-2):227-256.
11,742,913
Decision Trees for Sense Disambiguation of Prepositions: Case of Over
This paper proposes two decision trees for determining the meanings of the prepositional uses of over by using contextual information. It first examines the meanings of the prepositional uses of over and then aims at identifying the contexts for interpreting the meanings. Some contexts are complementary features, and that makes the decision trees simple. The trees have been tested on a corpus, and the results are encouraging.
[ 3264475 ]
Decision Trees for Sense Disambiguation of Prepositions: Case of Over Yukiko Sasaki Alam sasaki@k.hosei.ac.jp Dept. of Digital Media Science, Hosei University, Tokyo Decision Trees for Sense Disambiguation of Prepositions: Case of Over This paper proposes two decision trees for determining the meanings of the prepositional uses of over by using contextual information. It first examines the meanings of the prepositional uses of over and then aims at identifying the contexts for interpreting the meanings. Some contexts are complementary features, and that makes the decision trees simple. The trees have been tested on a corpus, and the results are encouraging. Introduction Prepositions have been studied from a variety of perspectives. The syntactic status has been probed by such linguists as Jackendoff [77], Emonds [85], Rauh [93] and Pullum and Huddleston [02]. Cognitive theorists have paid attention to the polysemous nature of prepositions and explored the conceptual relationships of the polysemy, often proposing graphical mental images (Lakoff and Johnson [80], Brugman [81,88], Herskovits [86], Langacker [87], Tyler and Evans [01]). Pragmatic aspects of prepositions have been studied by such scholars as Fauconnier [94] and Visetti and Cadiot [02]. The deictic properties of spatial prepositions have been examined by Hill [82], while the geographical information provided by them has been an interest of computational research (Xu and Badler [00], Tezuka et al. [01]). A practical study of the usage of prepositions was carried out for the purpose of teaching English as a second language (Wahlen [95], Lindstromberg [97], Yates [99]). In the fields related to natural language processing, prepositional phrase attachment has been a topic of research for quite a long time, and in recent years the problem has been explored with a neural network-based approach (Sopena, LLoberas and Moliner [98]) and with a syntax-based trainable approach (Yeh and Vilain [98]). Although past research has revealed various aspects of prepositions, to my knowledge there is not much semantic research on prepositions available for computational use, which requires a rigorous formalization of the representation of their semantics. A recent semantic study of prepositions for computational use is found in Voss [02], with a focus on spatial prepositions. Spatial prepositions are divided into three categories according to which of the two thematic meanings, place or path, they acquire when they are in argument, adjunct and non-subcategorized positions of particular types of verbs. The semantics of spatial prepositions dealt with in Voss [02] is thus not lexical but thematic. The present study places more focus on the lexical meanings of prepositions than on the thematic meanings because it is intended for use in machine translation (MT), where the meaning of a sentence, a phrase or a lexical entry of a source language must be preserved in the target language, even though it may take a different syntactic form in the source and target languages.
The preservation of meaning is even more important in Interlingua-based MT, because meaning is the medium of translation between source and target languages. The current research deals with the prepositional uses of over, but not with the uses of over in different syntactic categories such as the use as adjective (as in The initial test is over), as part of a phrasal verb (as in He handed over his notes to me), as part of an idiomatic phrase (as in She said it over and over) or as a modifier of a quantity (as in over forty years ago). Those uses could be identified in terms of their syntactic characteristics. The prepositional uses, on the other hand, are grouped under the same syntactic category but exhibit different meanings in different semantic contexts, requiring semantic treatment as well for the disambiguation of the senses. This paper examines the meanings of the prepositional uses of over, identifies the semantic contexts, and proposes two decision trees for interpreting the different meanings. The second section will divide the prepositional uses of over into two groups according to which of the two functional units, Head or Complement, is likely to identify the meanings. It will also bring to light the semantic features of the Head and Complement components that make the interpretations of the uses possible. The third section will discuss the two decision trees proposed in this paper, and the fourth section gives an evaluation of the trees, before concluding this paper. Meanings of the prepositional uses Prepositional uses of over can be divided into two groups: those whose meaning is likely to be identified by the Complement noun phrase and those that need semantic information from the Head component. The noun phrase following a preposition is the Complement of the preposition, whereas the verb, verb phrase, noun or noun phrase governing a preposition or a prepositional phrase is its Head. Over identified by its Complement Unlike the uses of over governed or required by the Head verb or noun, over the weekend in They have been unwell over the weekend can appear with almost all semantic classes of verbs, as illustrated below. (1) a. I heard over the weekend of a terrible fuss. b. He thought about it over the weekend and accepted. c. Talks with bankers were taking place over the weekend. d. It probably began over the weekend of March 27. e. His father had died over the weekend. f. American diplomats arrived here over the weekend. The phrase over the weekend appears with a sensation verb (1a), a cognition verb (1b), an occurrence verb (1c), an aspectual verb (1d), a change-of-state verb (1e) and a movement verb (1f). This suggests that over the weekend is not required by a particular semantic class of verbs. At the same time, it is likely to be identified from the semantic features of the Complement noun phrase, which denotes a definite period of time in discourse during which an event takes place. This use of over is called over_during because it is similar to the usage of during. On the other hand, over can appear with a Complement denoting an indefinite period of time in discourse, as in: (2) Altogether, tutorials take place over a period of about twenty-four weeks. The Complements of over_during in (1) and of this use share the semantic characteristic of referring to time of duration, but differ in that the former refers to a specific period of time in discourse while the latter refers only to a length of time over which an event continues.
This use is named over_duration. Another use that denotes a length of time is as follows: (3) Skimming a chapter for its main ideas may be done over coffee. Unlike the other two we have seen above, the Complement does not refer directly to a space of time, but the prepositional phrase still implies an interval of time, in particular the duration of drinking coffee. Like the use of over_duration, the Complement does not refer to a definite period in discourse because the semantic function is to indicate a length of time. The Complement in this type is characterized by a meaning denoting a meal or drink as well as by referring to a nonspecific meal or drink in discourse. This use is termed over_coffee. Like the three uses we have seen above, the following is also likely to be identified by the syntax and semantics of the Complement: (4) We heard the news over the radio at lunch break. The phrase over the radio indicates a means of hearing the news. The Complement in this use is often a sequence of the definite article the followed by a noun denoting a device of communication such as radio and telephone. Although the Head verb or noun tends to refer to a communicative act, as this use takes a Complement with distinctive semantic features, it should belong to this group. It is called over_means. Another use of over that could belong to this group is the following: (5) You can go all over the place. The Complement denotes a place in many cases, but sometimes a physical object, and this use of over is always preceded by all. Although the Head verb or noun tends to denote movement, this use also appears with other types of predicates, as illustrated below: (6) a. Paint all over the studio. b. He is known all over Europe. c. Tapes crash all over my desk. Table 1 lists the prepositional uses of over that are likely to be identified by the semantic features of the Complements, and Table 2 lists the features identifying those uses by the Complements (both tables appear at the end of the paper). Over identifiable by its Head Unlike the prepositional uses of over that are likely to be identified by the semantics of the Complements alone, the following are uses that also require the semantic information of the Heads for identification. Head denoting a physical event When the Head of over denotes movement and the Complement a place or a physical object, the over-prepositional phrase indicates a place above which and across which an object moves, as given below: (7) a. The bullet goes flying over my head and lands in the field behind me. b. Safe, efficient movement over snow and ice is impossible without suitable crampons. The prepositional phrase indicates a path over which an object moves, and therefore this use is termed over_path. Another use of over indicating place is illustrated by the following example: (8) After spreading her napkin over her lap, Alice began to eat. The prepositional phrase implies a place on which or above which an object sits. In this example, her napkin is placed over her lap. This use is called over_locus. The Head refers to an event denoted by a verb or noun belonging to the put verb class (cf. Levin [93]) while the Complement denotes a place or a physical object. In addition, this use also occurs with a Head verb denoting the presence of an object, as in: (9) An occasional loop stays over the needle after knitting the row. Indeed, this use is more complex than it appears. In sentences with the BE verb, we find the following examples: (10) a.
My painting is over the sofa. b. There were no taps over the sink. The use of over in this construction is tricky because it is difficult to distinguish from the use of over meaning about, as illustrated below: (11) The disagreement here is over the substitutability of assets. Complex as it is, this construction needs a special syntactic treatment, separate from the other uses. That is, in this construction the subject noun phrase is the Head of over, and thus the semantics of the subject along with that of the Complement must be examined for proper identification. Another special syntactic treatment should be given to an over-prepositional phrase modifying a noun phrase or noun, as in: (12) I can just see the light over the door from where I'm standing. In this example, the Head is the noun phrase the light and the Complement the door. Since the Head is a physical object, and the Complement a physical object, the use of over is over_locus, which means that the Head entity is located at the Complement entity. When the Head in this nominal modifier construction denotes an event or state, as in his return flight over the North Sea, no special treatment is required; it can be treated in the same manner as when the verb is the Head. Pragmatic knowledge should also come into play in the use of over_locus. It is not straightforward, in fact, to determine this use only by the semantics of the Head and Complement. In every construction for this use, there is a possibility that the prepositional phrase indicates the meaning of across, as illustrated in: (13) a. His apartment is over my flat. b. His apartment is over the river. Although both sentences have the same syntactic structure, they have noun phrases referring to different places. As an apartment can be located over a flat in the real world, the meaning of the prepositional phrase in (13a) implies a place where an object is located (i.e. over_locus). On the other hand, as an apartment is usually not located above a river, the meaning in (13b) implies a place across which an object is situated. This latter use of over is named over_across. Distinguishing between over_locus and over_across requires more than the semantics of the Head and Complement components; the distinction would require encyclopedic world knowledge. Without such a knowledge system, this paper treats over_locus and over_across in the same manner, except that the default choice between the two is over_locus due to the much higher frequency of its use. Head denoting a nonphysical event There are four uses of over whose Head denotes a nonphysical event. The use we examine first is illustrated in the following examples: (14) a. Marcos ruled over the Philippines for 20 years. b. He has considerable control over her activities. c. He had one huge advantage over everyone else. The Head denotes an act of control or of having more power. Among verbs of such a class are preside, rule, win, excel, dominate and tower. This use, called over_control, metaphorically implies that one object is above the other in terms of power. The next use of over is found in: (15) A general dictionary was preferred over the domain-specific dictionaries. The Head is a verb or noun denoting selection. Among them are prefer, choose, select, favor and their deverbal nouns. This use is called over_prefer. The noun counterpart of a verb in this class, however, should be treated with care, because the over may imply the meaning of about, as in: (16) They are organized to allow the users choice over what to see, in the correct time sequence. What is going on is that over_prefer requires two entities in order to choose one over the other.
When there are no two such entities between which to choose, as in (16), the meaning of over turns out to be that of about. On the other hand, when there are, the meaning is over_prefer, as illustrated in: (17) One of the most basic freedoms anywhere is the right to peace, and the choice of quiet over noise. Since verbs of this family are transitive verbs requiring an Object, when the sentence contains both the Object noun phrase and an over-prepositional phrase, the meaning of over_prefer obtains. In the proposed decision tree involving this use, the Head is limited to a verb, not a noun, because of this syntactic freedom of a noun in this category. (To handle cases involving such a noun, a question as to the number of the arguments governed by the noun must be added.) A final discussion concerns two uses named over_about and over_because. The use of over_because is illustrated in the following examples: (18) a. Teachers were suspended over war posters. b. He quit over a bribery scandal. The over-prepositional phrase implies a cause of the event or state. It seems that the meaning of over_because requires a punctual event, as in (18a) and (18b). On the other hand, over with durational verbs in this class gives rise to the meaning of over_about, as illustrated by the following examples: (19) a. He talked over the issue for three hours. b. Disputes over authorship are fiercely fought. c. There is growing concern over his illness. d. They thought over all the possible errors. The Head of this use denotes events of communication (such as hear and chat), agree events (such as disagree and fight), psychological events or states (such as worry, cry and agonize) and cognitive events or states (such as think and know). Table 3 shows the uses of over the meanings of which are identified mainly by the semantics of the Head. Decision Trees for Disambiguation How do we distinguish computationally the meanings of the prepositional uses of over listed in Tables 1 and 3? Based on Tables 2 and 4, which list the semantic features in the Complements and Heads that are likely to identify the meanings of over, two decision trees are proposed. One is used to identify the meanings of over from the semantic features of the Complements (Figure 1 in the Appendix), and the other from the semantic features of the Heads (Figure 2 in the Appendix). The first search for the meaning of over should start with the decision tree by its Complement (illustrated in Figure 1), because most prepositional uses of over characterized by the Complements denote time over an event, which means that they are modifiers of events (at the level of verb phrases) rather than modifiers of individual verbs (located inside verb phrases). After failing in the initial search at the decision tree by its Complement, another search for the meaning of over should take place at the decision tree by its Head in Figure 2 (a procedural sketch of this two-stage search is given after this paragraph). The decision tree in Figure 1 asks whether the Complement of over refers to and/or contains such features as listed in the diamonds, while in the decision tree in Figure 2 the question concerns the Head of over. The Head component of over can be a verb governing the over-prepositional phrase (as in I would PREFER coffee over tea), a verb phrase modified by the over-prepositional phrase (as in A BIG EARTHQUAKE OCCURRED over the weekend) or a noun (as in He has considerable CONTROL over her activities). In the BE verb construction (such as The issue is over the election of the chairman), the subject is the Head of over. The Complement of over is a noun phrase governed by over (as in over THE SPRING BREAK).
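A minimal procedural sketch of the two-tree search, with feature tests paraphrasing Tables 2 and 4, is given below. Feature extraction and verb-class lookup are assumed to be available, and the encoding is our simplification of the trees rather than a faithful transcription of Figures 1 and 2:

```python
# Sketch of the two decision trees; `feats` is a set of semantic features
# for the Complement NP, `head_class` a verb/noun class from a lexicon.
def over_by_complement(feats, all_before=False):
    if all_before and feats & {"place", "physical_object"}:
        return "over_many-parts"                 # all over the parking lot
    if "time" in feats and "duration" in feats:
        return "over_during" if "definite" in feats else "over_duration"
    if feats & {"meal", "drink"} and "definite" not in feats:
        return "over_coffee"                     # done over coffee
    if "communication_tool" in feats and "definite" in feats:
        return "over_means"                      # over the radio
    return None                                  # fall through to the Head tree

def over_by_head(head_class, feats, durational=True):
    if head_class == "movement" and feats & {"place", "physical_object"}:
        return "over_path"
    if head_class in ("put", "stay", "be") and feats & {"place", "physical_object"}:
        return "over_locus"                      # default over over_across
    if head_class == "control":
        return "over_control"
    if head_class == "prefer":
        return "over_prefer"
    if head_class in ("communication", "agree", "psychological", "cognitive"):
        return "over_about" if durational else "over_because"
    return None

def disambiguate_over(feats, head_class, all_before=False, durational=True):
    sense = over_by_complement(feats, all_before)                 # first tree
    return sense or over_by_head(head_class, feats, durational)   # second tree

print(disambiguate_over({"time", "duration", "definite"}, "occurrence"))  # over_during
print(disambiguate_over({"place"}, "movement"))                           # over_path
```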
Evaluation An evaluation of the two decision trees was conducted manually using 500 sentences containing over produced from the British National Corpus (BNC). Among the 500 instances of over, there are 382 prepositional uses. Among the remaining, fifty-six (56) instances are those meaning more than or excess, which are characteristically located immediately before a quantity word or phrase. Examples are given below: (20) (a) We have visited over 90 schools and reached 36,000 pupils. (b) Anything over this amount is, basically, liable to inheritance tax at 40%. Twenty (20) instances of over are adjectives meaning completion, as in: (21) My war seemed to be over. Fifteen (15) instances are used as adverbs, an example of which is illustrated: (22) … , and read the list over and over again. The sentences including over and over again are counted twice in the data because of the two instances of over in the phrase; the data contains six such sentences, resulting in 12 counts of over and over again. Seven (7) instances are used right before a to-prepositional phrase or toward-prepositional phrase, as in: (23) She went over to the Cookery and Refreshments Tent, but … The distinction between such instances of over as in (21), (22) and (23) on the one hand and prepositional uses on the other can be made by purely syntactic treatments because of their different syntactic characteristics. Most instances meaning more than or excess, as illustrated in (20), are morphologically distinguishable from prepositional uses, because most quantity nouns or noun phrases after over refer to something other than time. However, among the total of 56 instances implying more than or excess, thirteen (13) quantity phrases refer to time. As such a phrase has the same structure (over + noun/noun phrase) as an over-prepositional phrase, this appears to cause difficulty. But a careful examination of these constructions reveals otherwise. Such a phrase of quantity occurs in a syntactic position requiring a noun phrase, as illustrated in: (24) a. … after 12 years of living in those conditions, I would probably want to die myself. b. … family for over ten years. c. … since they were written over 10 years ago, …
… there is no reason for their practiced eyes to cloud over when … Thirty seven (37) instances of over are parts of phrasal verbs governing an over-prepositional phrase. Examples are given below: (26) a. Go over the launch failure procedures after any long lay-off from flying. b. … and failed promptly to hand over detainees to the police -… With the lexicon containing sufficient information on such phrasal verbs, there would remain two hundred and ninety five (295) prepositional uses of over, the meanings of which must be distinguished. The decision trees proposed in this paper succeeded in determining the meanings of two hundred and seventy six (276) instances, and failed to interpret the meanings of nineteen (19), resulting in a precision of 93.5 percent. However, if the system does not recognize phrasal verbs, the value of the precision will be lower. Some more problems and possible solutions are discussed here. It would be difficult to determine the use of over when the verbs in the sentence are conjoined, as in We finished our coffee and watched the seagulls whirling and shrieking over the harbour in the spring sunshine. When the verb shriek is categorized as a psychological verb, the use of over will be interpreted as over_about, which is a wrong interpretation in this case. The decision tree in Figure 1 will fail to recognize the following example because a search for the use of over_means requires the semantic feature +definite: … the cruel, brutal voices that bellow over loudhailers about injustice …. The Complement of over, loudhailers is not preceded by the definite article the, which is a requirement for identifying over_means. The feature +definite may be dispensable for sense identification, but it will be needed when the decision tree is used for generation, because the definite article often occurs in this type of prepositional phrase as in over the phone. A difficulty in determining whether a noun refers to a physical object or an abstract entity causes a problem. The example … with the lying motto over its gate … is hard to analyze. The use in this example is not over_about, but over_locus. With the correct interpretation of motto either as a physical object or a statement, the decision tree could lead to the right meaning of over. If motto refers to a physical object, the use of over is over_locus, whereas if it is not, the use is over_about. In the example such as The drag of the cable over one wing can make …, the deverbal noun drag must be properly parsed as the Head of over in order to interpret the meaning of over correctly. Based on the assumption (a) that a syntactic parsing is successful, (b) that the over in phrasal verbs are recognized beforehand, and (c) that verbs and nouns are properly categorized into classes, the two decision trees could identify the meanings of most prepositional uses of over in the corpus. In addition, information on whether the Head event refers to a durational event or not would enable the tree to distinguish between over_about and over_because. Conclusion After examining the meanings, the prepositional uses of over have been divided into two groups: those whose meanings are identifiable by their Complements and those by their Heads. Previous studies have focused on the identification of the meanings of over, whereas the present paper not only on the meanings, but on the contexts for interpreting the meanings. 
Two decision trees have been proposed for determining the meanings of the prepositional uses of over by using the semantic features of the Head and Complement components. The decision trees are fairly simple and easy to implement in a system. The decision trees have been successful in identifying the meanings of the prepositional uses of over in data taken from BNC.

Table 1. Uses identifiable by the Complements
meanings | examples
over_during | Over the next decade a global approach is going to be essential.
over_duration | Their flowers appear over several weeks in summer.
over_coffee | There's a lot for you to talk about over lunch.
over_means | Lewd songs are sung over the microphone.
over_many-parts | The dog ran all over the parking lot.

Table 2. The features of the Complements distinguishing the meanings of over
meanings | features of the Complements
over_during | time, +definite, +duration
over_duration | time, -definite, +duration
over_coffee | {meal, drink}, -definite
over_means | communication tool, +definite
over_many-parts | {place, physical object} [syntactic feature: all over + NP]

Table 3. The meanings of over identified by its Head
meanings | examples
over_path | Let the eye move over it.
over_across | The gallery is over the front door. / I admit that part of the ball was over the line, but that's not enough.
over_locus | Jessica sprayed paint over the furniture.
over_control | It was Prime Minister Yoshida who presided over Japan's post-war economic boom.
over_prefer | I would prefer coffee over tea.
over_about | He's mulling over an offer from NBC to star in his own series. / At the counter, a couple puzzled over which lottery numbers to choose.
over_because | Teachers were suspended over war posters. / He quit over a bribery scandal.

Table 4 lists the semantic features in the Head that contribute to identifying the meanings of the prepositional uses of over.

Table 4. Semantic features in the Head identifying the meanings of over
meanings | features of the Heads | features of the Complements
over_path | movement events | {place, physical object}
over_across / over_locus | put events, stay events, be verb | {place, physical object}
over_control | control events |
over_prefer | prefer verbs |
over_about | +duration; communication events, agree events, psychological events/states, cognitive events/states |
over_because | -duration; psychological events/states, cognitive events/states |

Table 5 gives a breakdown of the instances of over found in the data (PP stands for prepositional phrase).

Table 5. Breakdown of over in 500 instances
over included in or used as | No.
PP (over not in phrasal verbs) | 295 (19 failed)
part of a phrasal verb, no PP governed | 50
part of a phrasal verb governing a PP | 37
more than or excess | 56
adjective meaning completion | 20
adverb (over and over again, etc.) | 15
over followed by a directional PP | 7
quoted phrases | 2
incomplete or hard-to-parse sentences | 18

Brugman, Claudia. 1988. The story of over: Polysemy, semantics and the structure of the lexicon. New York: Garland Press. [1981. The story of over. Berkeley, CA: UC-Berkeley MA thesis.]
Emonds, Joseph. 1985. A unified theory of syntactic categories. Dordrecht: Foris.
Fauconnier, Gilles. 1994. Mental spaces. Cambridge: Cambridge University Press.
Feigenbaum, Susanne and Dennis Kurzon (eds.). 2002. Prepositions in their syntactic, semantic and pragmatic context. Amsterdam: John Benjamins.
Herskovits, Annette. 1986. Language and spatial cognition: An interdisciplinary study of the prepositions in English. Cambridge: Cambridge University Press.
Hill, Clifford. 1982. Up/down, front/back, left/right: A contrastive study of Hausa and English. In Weissenborn and Klein, 13-42.
Huddleston, Rodney and Geoffrey Pullum. 2002. The Cambridge grammar of the English language. Cambridge: Cambridge University Press.
Jackendoff, Ray. 1977. The architecture of the language. Cambridge, MA: MIT Press.
Jacobs, Roderick and Peter Rosenbaum. 1970. Readings in English Transformational Grammar. Waltham, Mass.: Ginn & Company.
Lakoff, George and Mark Johnson. 1980. Metaphors we live by. Chicago: University of Chicago Press.
Langacker, Ronald. 1987. Foundations of cognitive grammar, vol. 1. Stanford, CA: Stanford University Press.
Levin, Beth. 1993. English verb classes and alternations. Chicago/London: The University of Chicago Press.
Lindstromberg, Seth. 1998. English prepositions explained. Amsterdam: John Benjamins.
Pullum, Geoffrey and Rodney Huddleston. 2002. Prepositions and prepositional phrases. In Huddleston and Pullum (eds.), 597-661.
Rauh, Gisa. 1993. On the grammar of lexical and non-lexical prepositions in English. In Zelinsky-Wibbelt (ed.), 99-150.
Sopena, Joseph M., Agusti LLoberas and Joan L. Moliner. 1998. A connectionist approach to prepositional phrase attachment for real world texts. In COLING-ACL '98, 1233-1237.
Tezuka, Taro, Ryong Lee, Yahiko Kambayashi and Hiroki Takakura. 2001. Web-based inference rules for processing conceptual geographical relationships. In Proceedings of Web Information Systems Engineering, 14-21.
Tyler, Andrea and Vyvyan Evans. 2001. Reconsidering prepositional polysemy networks: The case of over. Language 77:724-765.
Visetti, Yves-Marie and Pierre Cadiot. 2002. Instability and the theory of semantic forms: Starting from the case of prepositions. In Feigenbaum and Kurzon (eds.), 9-39.
Voss, Clare. 2002. Interlingua-based machine translation of spatial expressions. University of Maryland: Ph.D. dissertation.
Wahlen, Gloria. 1995. Prepositions illustrated. Michigan: The University of Michigan Press.
Weissenborn, Juergen and Wolfgang Klein. 1982. Here and there: Cross-linguistic studies on deixis and demonstration. Amsterdam/Philadelphia: Benjamins.
Xu, Yilun Dianna and Norman Badler. 2000. Algorithms for generating motion trajectories described by prepositions. In Proceedings of Computer Animation 2000, 30-35.
Yates, Jean. 1999. The ins and outs of prepositions: A guidebook for ESL students. New York: Barron's.
Yeh, Alexander S. and Marc B. Vilain. 1998. Some properties of preposition and subordinate conjunction attachments. In COLING-ACL '98, 1436-1442.
Zelinsky-Wibbelt, Cornelia (ed.). 1993. The semantics of prepositions: From mental processing to natural language processing. Berlin: Mouton de Gruyter.
8,248,505
Arabic to English Person Name Transliteration using Twitter
Social media outlets are providing new opportunities for harvesting valuable resources. We present a novel approach for mining data from Twitter for the purpose of building transliteration resources and systems. Such resources are crucial in translation and retrieval tasks. We demonstrate the benefits of the approach on Arabic to English transliteration. The contributions of this approach include the size of the data that can be collected and exploited within a limited time span; the generality of the approach, which can be adapted to other languages; and its ability to cope with new transliteration phenomena and trends. A statistical transliteration system built using this data improved over a comparable system built from Wikipedia wikilinks data.
[ 2456677, 8424232, 10603986, 14047990, 1304185, 1540379, 10685951 ]
Arabic to English Person Name Transliteration using Twitter Hamdy Mubarak hmubarak@qf.org.qa Qatar Computing Research Institute Hamad Bin Khalifa University (HBKU) DohaQatar Ahmed Abdelali aabdelali@qf.org.qa Qatar Computing Research Institute Hamad Bin Khalifa University (HBKU) DohaQatar Arabic to English Person Name Transliteration using Twitter TransliterationNamed EntitiesSocial MediaTweet NormalizationArabic Language Variations Social media outlets are providing new opportunities for harvesting valuable resources. We present a novel approach for mining data from Twitter for the purpose of building transliteration resources and systems. Such resources are crucial in translation and retrieval tasks. We demonstrate the benefits of the approach on Arabic to English transliteration. The contribution of this approach includes the size of data that can be collected and exploited within the span of a limited time; the approach is very generic and can be adopted to other languages and the ability of the approach to cope with new transliteration phenomena and trends. A statistical transliteration system built using this data improved a comparable system built from Wikipedia wikilinks data. Introduction With the emergence of social media outlets, millions of users exchange messages daily. This rapid expansion raises new challenges related to retrieval and extraction in a multilingual scope. Named Entities processing has been recognized as a key technique that supports a number of Natural Language Processing fields (Callan and Mitamura, 2002) and (Khalid et al., 2008). Using traditional approaches for building transliteration resources (Kirschenbaum and Wintner, 2010;Hálek et al., 2011) or mining them from text and news (Darwish et al., 2012;Kumaran et al., 2010;Sajjad et al., 2011) might not keep the pace with rapid expansion of information form such outlets. The social media outlets are providing large volume, high-value, content that is being sought by researchers, both in business and academia. Opinion mining (Lukasik et al., 2015;Manoochehr et al., 2013;Agarwal et al., 2011), customer relation, eBusiness, eHealth (Paul and Mark, 2011;Luis et al., 2011) are examples for disciplines that are exploiting these resources. The amount of data generated from the tweets only surpasses 500 millions tweets per day 1 , as such, it presents a unprecedented type of versatile resource that can be utilized namely for transliteration. Unlike similar resources, Twitter data includes explicit data about user, location, language, social network,..etc. In our paper, we present results of experiments for harnessing large number of tweeps 2 information to build a transliteration module that can be used to support translation as well as cross-language information retrieval. The advantage of using tweets versus other methods is the accuracy as well as the freshness. While linguistic resources such as Encyclopedia, Onomasticons might require time to maintain and update. Social media are becoming a faster way to get large amount of information. The occurrence frequency of a given item reflect well the accuracy and its standard use. For our case-study language "Arabic", we were able to collect over 880,000 unique Arabic users with their transliteration to English in a period of few months. This is 500% more than all the data extracted from Wikipedia (WK) (see Table 1). Even though, data from Twitter might not totally substitute high-quality, consistent and collaboratively edited data from WK. 
It is common to note variations within a language; researchers have studied and documented such phenomena in corpora (Abdelali, 2004; Abdelali and Cowie, 2005). The large amount of data from Twitter persistently discloses current trends and methods used to transcribe names. In Wikipedia, the Arabic name (AHmd) 3 is transliterated as "Ahmed" 56% of the time, "Ahmad" 40%, and "Ahmet, Akhmad, Akhmet, Achmad" 4%; for the name (A$rf), 93.5% of occurrences are "Ashraf" and 7% "Achraf". Twitter data proved to be far richer, and new phenomena and trends were observed and learned from these data. We note that the former names were transliterated in many more ways: (AHmd) was transliterated into "ahmed, ahmad, ahmd, a7mad, a7med, a7mmd, a7md, and ahmmd", and (A$rf) into "ashraf, ashref, ashrf, shrf, achraf, aschraf". The study provides details of the collection, processing and validation of this resource, which is being made publicly available 4 . We built a transliteration model using a character-based model, and we were able to achieve higher BLEU scores compared to an equivalent set built from WK data (Kirschenbaum and Wintner, 2010). The remainder of this paper is organized into the following sections: a review of the state of the art and related research; Twitter data collection and pre-processing; followed by experiments; and lastly results and a conclusion. Related Work WK, as a free multilingual encyclopedia, provides a valuable resource of parallel information that can be easily processed and deployed in cross-language Named Entity (NE) disambiguation, resolution and translation; Wentland et al. (2008) exploited it to build multilingual NE resources. Sajjad et al. (2011) mined transliterations from parallel corpora to improve an SMT system. Their unsupervised transliteration mining system uses a parallel corpus to generate a list of word pairs and filters transliteration pairs from it. The system is then retrained on the filtered dataset, and this process is iterated several times until all transliteration word pairs are detected. The approach proved fruitful, with a BLEU improvement of up to 0.4 points. Yoon et al. (2007) proposed a phonetic method for multilingual transliteration. The approach exploits string alignment and linear classifiers, trained using the Winnow algorithm, to learn transliteration characteristics. The results improved over earlier results reported by Tao et al. (2006) for methods built using pure linguistic knowledge. Yoon et al. (2007) used Mean Reciprocal Rank (MRR) to measure the performance of the transliteration system, tested on Arabic, Chinese, Hindi and Korean. The main challenges with the former approaches are their lack of robustness and their dependence on scarce resources that are not easy to find. Data collected from Twitter can expand rapidly and complement the resources in WK. Collecting Names from Twitter When creating a new account on Twitter, a user fills in a full name (in any characters; fewer than 20 characters) and an email address. Twitter might suggest some usernames (unique account names) based on combinations of the user's full name and email. The user may select one of the suggested names or write a new one (in alphanumeric characters only), as shown in Figure 1. This restriction compels the user to transliterate his/her name. Hence, for our case study, we proceed to collect full names written in Arabic with their transliterations using the Twitter user ID (username field). Figure 1: Creating a new account on Twitter; the user is required to provide an alphanumeric username.
Figure 2 shows some of the name pairs that can be collected using the above approach. In the profile, a user can also provide a location, which can be a country name, a city name, or a landmark name. To map user locations to Arab countries, we used a list containing the top 10K unique user locations with their mapping to Arab countries, built with the aid of the GeoNames 5 geographical database (Mubarak and Darwish, 2014). In our experiment, we collected Arabic tweets by issuing the query "lang:ar" against the Twitter API 6 . We extracted the user's full name, username, and user location. The language filter can be changed to collect names in other languages along with their transliterations. Between Mar. 2014 and Aug. 2014, we collected approximately 7.3M tweets written by 936K unique users, and 557K (or 60%) of their names have Arabic characters in the full name field. We cleaned the data, as detailed below, and extracted full names written in Arabic (Name_arb) that have an overlap above a certain threshold with usernames written in Latin characters (Name_trans), along with user locations (loc). Sample results are shown in Table 2 7 , where we can note that the transliterations use standard mappings such as the UNGEGN romanization standard (UNGEGN, 2003); additionally, other non-standard transliterations are used, such as the use of the numbers "7" and "3" instead of the letters " " respectively, and the transliteration of the Arabic letter " " to "c", which is not very common. Data Collection and Preprocessing A number of steps were applied to process the collected data, including: • Name_arb, Name_trans, and loc are normalized as described in Darwish et al. (2012) (ex: convert the letters " (>, <, |, p, Y)" to " (A, A, A, h, y)" in order, and map non-Arabic decoration characters to their equivalents). In addition to decoration of Arabic characters, we observed that users sometimes use decoration for Latin characters. We therefore calculated the frequencies of all characters, revised the top 2,000 (99.99%), and mapped them to their regular counterparts 8 . The character "α", for example, is used (as a decoration of "a") more frequently than any of the capital letters "P, Q, V, W, Y, X, or Z" in the user full name field. Table 3 shows selected examples of cleaning character decoration for names written in Arabic and English. Table 3: Name cleaning of character decoration. • Titles are removed, ex: " (d., Al$yx), meaning Dr., Sheikh", also Mr, Miss, etc. Informal Character Writings Name_trans sometimes contains numbers that represent Arabic letters with no exact counterparts in Latin alphabets. These numbers are similar in shape to the Arabic characters, as shown in Table 4. Dialectal Variations in Names From the names that are mapped to Arab countries (using the user location), we extracted variations in the mapping of Arabic characters to Latin equivalents in different countries or regions 9 . Table 5 lists common variations for characters that are affected by the dialects used in Arab countries or regions. These variations can be used to classify Arabic names geographically, i.e., to infer a country or a region given only the full user name written in Arabic or Latin characters (Mubarak and Darwish, 2015). Transliteration Similarity Score Our hypothesis of name transliteration between Name_arb and Name_trans needed a gauge to measure and quantify the similarity between them. A given Name_arb is transliterated using an elaborate mapping scheme similar to the Buckwalter transliteration.
We take into consideration the removal of name titles, informal writings, and dialectal variations; some characters are considered equivalent (ex: k=q, gh=g, dh=d, sh=ch); vowels are removed from Name_arb and Name_trans; and then the similarity score is calculated using Levenshtein edit distance (a sketch of this computation is given below). For example, the names " (fAlH AlrwDAn) and DrFale7Alrawdhan" will both be converted to "flhrdn", so the edit distance between these names equals zero and hence the similarity score is 100%. Inspecting the Data Using the collection from Twitter compiled between Mar. 2014 and Aug. 2014, we extracted a total of 881K tweeps with a similarity score of 70% or above. We found experimentally that a threshold of 70% gives adequate results in both coverage and quality. Table 6 shows samples of collected names in different threshold ranges (from 100% to 70%); for example, name pairs with a similarity score of 100% represent 44% of all collected name pairs. Figure 3 shows statistics for the progress of the collection over time: we started with 320K transliteration name pairs after 1 month and ended with 880K name pairs after 6 months. Large Data Collection When inspecting the collected data, we noted that, on average, names written in Arabic represent 55% of all names (see Figure 3), extracted name pairs are 10% of the Arabic names, and 21% of the extracted Arabic names are mapped to Arab countries. For the extracted names, we noticed that names with a length of 1 or 2 words represent 97% of all names (due to the length limitation during account creation), while lengths of 3 words and above represent the remaining 3%. Comparing with Wikipedia Of all Arabic-English name pairs in WK (154K), only 63K names (or 41%) passed the threshold of 70% overlap in transliteration. This is because many names are translated rather than transliterated; for example, the pair "Republic, (jmhwryp)" has a score of 0%, as there is no overlap in the pair. Resource Description The data released from this work includes 881,310 name pairs that can be used for Arabic to English person name transliteration, with their respective scores. For each name pair, we provide the original username, the normalized username (Arabic name), the user screen name (English transliteration), one of the Arab countries (where possible) according to the user location, the name tokenization, and the similarity score (transliteration accuracy). The published resource also includes a list of 719 character mappings. The resources are publicly available from http://alt.qcri.org/resources/TwitterAr2EnTranslit.tgz Evaluation and Results To assess the quality of this resource, we randomly selected 1,000 name pairs from the original names having Arabic characters and counted how many of these names were extracted as valid transliteration name pairs by our system. Precision (P) was 0.96, recall (R) was 0.97, and the F1-measure was 0.965. For example, the system gave the name pair "awaadotaibi, (EwAD AlEtyby)" a score of 50% because the letters " " were both mapped to "a", which impacted the scoring algorithm. The name pair was therefore ignored because it fell under the acceptance threshold; human judgment, on the other hand, accepted this name pair. To further explore the potential of using the resource in Machine Translation, we used a statistical phrase-based MT system to build a character-based translation model, experimenting with different data processing schemes to evaluate the new data.
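As a minimal sketch of the similarity scoring just described (the released resource contains the full character-mapping tables; the EQUIV table, vowel set, and skeleton() helper here are simplifications, not the exact pipeline):

```python
EQUIV = {"q": "k", "gh": "g", "dh": "d", "ch": "sh"}  # illustrative equivalences
VOWELS = set("aeiou")

def skeleton(name):
    """Lowercase, map equivalent digraphs, drop vowels and non-letters (simplified)."""
    s = name.lower()
    for src, dst in EQUIV.items():
        s = s.replace(src, dst)
    return "".join(c for c in s if c.isalpha() and c not in VOWELS)

def levenshtein(a, b):
    # classic dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def similarity(name_arb_romanized, name_trans):
    """Percent similarity between the romanized Arabic name and the username."""
    a, b = skeleton(name_arb_romanized), skeleton(name_trans)
    if not a and not b:
        return 100.0
    return 100.0 * (1 - levenshtein(a, b) / max(len(a), len(b)))

print(similarity("faleh alrawdan", "DrFale7Alrawdhan"))  # similarity for a cleaned pair
```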
The system was built with the Moses (Koehn et al., 2007) toolkit default settings. The language model used in the system was implemented as a five-gram model using the SRILM toolkit (Stolcke, 2002). We compiled three datasets: T100 uses only Twitter data with a threshold of 100; T50 uses Twitter data with a threshold greater than or equal to 50; the third is the data from WK. We built an additional dataset that is the combination of T50 and WK. For the data used to build the models for evaluation, we randomly extracted two sets of 2,000 pairs and used one set for development and the other for evaluation; the remaining data was held out for training and building the models. The same approach was applied uniformly to the WK data. The results in Table 7 show that the data collected from Twitter cannot be transliterated well using a model trained on Wikipedia, a strong indication of the difference between these two datasets. On the other hand, combining both datasets proves to be beneficial for processing both test sets. This can be explained by the richness of the Twitter data and the consistency of the WK data. Conclusion In this paper, we presented a methodology for harvesting valuable data from Twitter and used it for person name transliteration from Arabic to English. The collected data, which is being made publicly available, improved a transliteration system. Additionally, when compared to the data collected from WK, Twitter data has supplementary benefits: 1) a huge amount of parallel data, 2) coverage of dialectal variations, and 3) informal writings. Our future work will aim to extend this approach to other languages, with a focus on languages with a low presence in WK.
Figure 2: Collecting username information from Twitter in different languages.
Figure 3: Collected names growth over time between Mar. 2014 and Aug. 2014.
Table 2: Samples of extracted names from Twitter along with their countries.
Table 4: Mapping of numbers (digits) used instead of Arabic characters.
Char / Country or Region / Name_arb / Name_trans:
(j): EG (jmAl) Gamal; GLF,LEV,MGR Jamal
(*): EG,LEV (ZAkr) Zaker; GLF,MGR Thaker
($): EG,GLF,LEV (A$rf) Ashraf; MGR Achraf
(D): EG,LEV (DyA') Diyaa; GLF,LEV Dhiyaa
(f): EG,GLF,LEV (mSTfY) Mostafa; MGR Mostapha
(q): ALL (rfyq) Rafik, Rafiq; GLF Rafig; MGR Rafic
(Al): EG,LEV (AlHrby) El Harby; GLF,MGR Al Harby
(p): EG,GLF,MGR (hnyp) Haniyya; LEV Haniyyeh
Table 5: Samples of Arabic names that are transliterated differently according to regional dialectal variations.
Table 6: Examples of collected name pairs according to different thresholds.
             WK    T100  T50   Comb.  ∆
WK test      43.3  27.8  28.1  44.3   2.4%
Twitter test 28.9  40.3  40.4  52.3   29.6%
Table 7: BLEU results for experiments with different thresholds using the WK and Twitter data sets and their respective percentage gain ∆.
1 See http://www.internetlivestats.com/twitter-statistics/ 2 Tweep: a person who uses the Twitter online message service to send and receive tweets. 3 Buckwalter transliteration. 4 http://alt.qcri.org/resources/ 5 http://www.geonames.org/ 6 http://dev.twitter.com 7 "ISO 3166-1 alpha-2" codes are used for country codes. 8 The list of character mappings is available at http://alt.qcri.org/resources/TwitterAr2EnTranslit.tgz 9 Regions: Gulf (GLF), Egypt (EG), Levant (LEV), and Maghreb (MGR).
Abdelali, A. and Cowie, J. (2005). Regional corpus of modern standard Arabic. In 2ème Congrès International sur l'Ingénierie de l'Arabe et l'Ingénierie de la langue, Algeria, pages 1-11.
Abdelali, A. (2004). Localization in modern standard Arabic. Volume 55, pages 23-28. Wiley Subscription Services, Inc., A Wiley Company.
Agarwal, A., Xie, B., Vovsha, I., Rambow, O., and Passonneau, R. (2011). Sentiment analysis of Twitter data. In Proceedings of the Workshop on Languages in Social Media, pages 30-38.
Callan, J. and Mitamura, T. (2002). Knowledge-based extraction of named entities. In Proceedings of the eleventh international conference on Information and knowledge management, pages 532-537. ACM.
Darwish, K., Magdy, W., and Mourad, A. (2012). Language processing for Arabic microblog retrieval. In Proceedings of the 21st ACM international conference on Information and knowledge management, pages 2427-2430. ACM.
Hálek, O., Rosa, R., Tamchyna, A., and Bojar, O. (2011). Named entities from Wikipedia for machine translation. In Conference on Theory and Practice of Information Technologies, pages 23-30, Vrátna dolina, Slovak Republic.
Khalid, M. A., Jijkoun, V., and De Rijke, M. (2008). The impact of named entity normalization on information retrieval for question answering. In Advances in Information Retrieval, pages 705-710. Springer.
Kirschenbaum, A. and Wintner, S. (2010). A general method for creating a bilingual transliteration dictionary. In LREC 2010, pages 273-276, Valletta, Malta.
Koehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico, M., Bertoldi, N., Cowan, B., Shen, W., Moran, C., Zens, R., Dyer, C., Bojar, O., Constantin, A., and Herbst, E. (2007). Moses: Open source toolkit for statistical machine translation. In ACL 2007, Prague, Czech Republic.
Kumaran, A., Khapra, M. M., and Li, H. (2010). Whitepaper on NEWS 2010 shared task on transliteration mining. In Proceedings of the 2010 Named Entities Workshop: Shared Task on Transliteration Mining. ACL.
Luis, F.-L., Randi, K., and Jason, B. (2011). Review of extracting information from the social web for health personalization. Journal of Medical Internet Research, 13(1).
Lukasik, M., Cohn, T., and Bontcheva, K. (2015). Estimating collective judgement of rumours in social media. arXiv preprint arXiv:1506.00468.
Manoochehr, G., James, S., and David, Z. (2013). Twitter brand sentiment analysis: A hybrid system using n-gram analysis and dynamic artificial neural network. Expert Systems with Applications, 40(16):6266-6282.
Mubarak, H. and Darwish, K. (2014). Using Twitter to collect a multidialectal corpus of Arabic. In Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language Processing (ANLP), pages 1-7, Doha, Qatar.
Mubarak, H. and Darwish, K. (2015). Classifying Arab names geographically. In Proceedings of the ACL 2015 Workshop on Arabic Natural Language Processing (ANLP), pages 1-8, Beijing, China.
Paul, M. J. and Mark, D. (2011). You are what you tweet: Analyzing Twitter for public health. In ICWSM, pages 265-272.
Sajjad, H., Fraser, A., and Schmid, H. (2011). An algorithm for unsupervised transliteration mining with an application to word alignment. In ACL-HLT 2011, Portland, OR, USA.
Sajjad, H., Fraser, A., and Schmid, H. (2012). A statistical model for unsupervised and semi-supervised transliteration mining. In ACL 2012, Jeju, Korea.
Stolcke, A. (2002). SRILM - an extensible language modeling toolkit. In Proceedings of the International Speech Communication Association (INTERSPEECH 2002), Denver, CO, USA.
Tao, T., Yoon, S.-Y., Fister, A., Sproat, R., and Zhai, C. (2006). Unsupervised named entity transliteration using temporal and phonetic correlation. In EMNLP, pages 250-257. Association for Computational Linguistics.
UNGEGN, Working Group on Romanization Systems (2003). Report on the current status of United Nations romanization systems for geographical names. Version 2.2, January.
Wentland, W., Knopp, J., Silberer, C., and Hartung, M. (2008). Building a multilingual lexical resource for named entity disambiguation, translation and transliteration. In Proceedings of the 6th LREC, Marrakech, Morocco.
Yoon, S.-Y., Kim, K.-Y., and Sproat, R. (2007). Multilingual transliteration using feature based phonetic method. In ACL 2007, pages 112-119, Prague, Czech Republic.
5,317,323
WORD EXPERT PARSING 1
This paper describes an approach to conceptual analysis and understanding of natural language in which linguistic knowledge centers on individual words, and the analysis mechanisms consist of interactions among distributed procedural experts representing that knowledge. Each word expert models the process of diagnosing the intended usage of a particular word in context. The Word Expert Parser performs conceptual analysis through the interactions of the individual experts, which ask questions and exchange information in converging on a single mutually acceptable sentence meaning. The Word Expert theory is advanced as a better cognitive model of natural language understanding than the traditional rule-based approaches. The Word Expert Parser models parts of the theory, and the important issues of control and representation that arise in developing such a model form the basis of the technical discussion. An example from the prototype LISP implementation helps explain the theoretical results presented.
[]
WORD EXPERT PARSING 1 Steven L Small Department of Computer Science University of Maryland College Park 20742 Maryland WORD EXPERT PARSING 1 This paper describes an approach to conceptual analysis and understanding of natural language in which linguistic knowledge centers on individual words, and the analysis mechanisms consist of interactions among distributed procedural experts representing that knowledge. Each word expert models the process of diagnosing the intended usage of a particular word in context. The Word Expert Parser performs conceptual analysis through the interactions of the individual experts, which ask questions and exchange information in converging on a single mutually acceptable sentence meaning. The Word Expert theory is advanced as a better cognitive model of natural language understanding than the traditional rule-based approaches. The Word Expert Parser models parts of the theory, and the important issues of control and representation that arise in developing such a model form the basis of the technical discussion. An example from the prototype LISP implementation helps explain the theoretical results presented. 1. Introduction Computational understanding of natural language requires complex interactions among a variety of distinct yet redundant mechanisms. The construction of a computer program to perform such a task begins with the development of an organizational framework which inherently incorporates certain assumptions about the nature of these processes and the environment in which they take place. Such cognitive premises affect profoundly the scope and substance of computational analysis for comprehension as found in the program. This paper describes a theory of conceptual parsing which considers knowledge about language to be distributed across a collection of procedural experts centered on individual words. Natural language parsing with word experts entails several new hypotheses about the organization and representation of linguistic and pragmatic knowledge for computational language comprehension. The Word Expert Parser [1] demonstrates how the word expert hypothesis, combined with certain other choices based on previous work, affects structure and process in a cognitive model of parsing. The Word Expert Parser is a cognitive model of conceptual language analysis in which the unit of linguistic knowledge is the word and the focus of research is the set of processes underlying comprehension. The model is aimed directly at problems of word sense ambiguity and idiomatic expressions, and in greatly generalizing the notion of word sense, promotes these issues to a central place in the study of language parsing. Parsing models typically cope unsatisfactorily with the wide heterogeneity of usages of particular words. If a sentence contains a standard form of a word, it can usually be parsed; if it involves a less prevalent form which has a different part of speech, perhaps it too can be parsed. Distinguishing among the many senses of a common verb, adjective, or pronoun, for example, or correctly translating idioms are rarely possible. At the source of this difficulty is the reliance on rule-based formalisms, whether syntactic or semantic (e.g., cases), which attempt to capture the linguistic contributions inherent in constituent chunks of sentences that consist of more than single words. A crucial assumption underlying work on the Word Expert Parser is that the fundamental unit of linguistic knowledge is the word,
and that understanding its sense or role in a particular context is the central parsing process. In the parser to be described, the word expert constitutes the kernel of linguistic knowledge and its representation the elemental data structure. It is procedural in nature and executes directly as a process, cooperating with the other experts for a given sentence to arrive at a mutually acceptable sentence meaning. Certain principles behind the parser do not follow directly from the view of word primacy, but from other recent theories of parsing. The cognitive processes involved in language comprehension comprise the focus of linguistic study of the word expert approach. Parsing is viewed as an inferential process where linguistic knowledge of syntax and semantics and general pragmatic knowledge are applied in a uniform manner during interpretation. 1 This methodological position closely follows that of Riesbeck (see [2] and [3]) and Schank [4]. The central concern with word usage and word sense ambiguity follows similar motivations of Wilks [5]. The control structure of the Word Expert Parser results from agreement with the hypothesis of Marcus that parsing can be done deterministically and in a way in which information gained through interpretation is permanent [6]. Rieger's view of inference as intelligent selection among a number of competing plausible alternatives [7] of course forms the cornerstone of the new theory. His ideas on word sense selection for language analysis ([8] and [9]) and strategy selection for general problem solving [10] constitute a consistent cognitive perspective. Any natural language understanding system must incorporate mechanisms to perform word sense disambiguation in the context of open-ended world knowledge. The importance of these mechanisms for word usage diagnosis derives from the ubiquity of local ambiguities, and brought about the notion that they be made the central processes of computational analysis and understanding. Consideration of almost any English content word leads to a realization of the scope of the problem -- with a little time and perhaps help from the dictionary, many distinct usages can be identified. As a simple illustration, several usages each for the words "heavy" and "ice" appear in Figure 1. Each of these seemingly benign words exhibits a rich depth of contextual use. An earlier paper contains a list of almost sixty verbal usages for the word "take" [11]. The representation of all contextual word usages in an active way that insures their utility for linguistic diagnosis led to the notion of word experts. Each word expert is a procedural entity representing all possible contextual interpretations of the word it represents. When placed in a context formed by experts for the other words in a sentence, each expert should be capable of sufficient context-probing and self-examination to determine successfully its functional or semantic role, and further, to realize the nature of that function or the precise meaning of the word. The representation and control issues involved in basing a parser on word experts are discussed below, following presentation of an example execution of the existing Word Expert Parser.
Some word senses of "heavy":
1. An overweight person is politely called "heavy": "He has become quite heavy."
2. Emotional music is referred to as "heavy": "Mahler writes heavy music."
3. An intensity of precipitation is "heavy": "A heavy snow is expected today."
Some word senses of "ice":
1. The solid state of water is called "ice": "Ice melts at 0°C."
2. "Ice" participates in an idiomatic nominal describing a favorite delight: "Homemade ice cream is delicious."
3. "Dry ice" is the solid state of carbon dioxide: "Dry ice will keep that cool all day."
4. "Ice" or "iced" describes things that have been cooled (sometimes with ice): "One iced tea to go please."
5. "Ice" also describes things made of ice: "The ice sculptures are beautiful!"
6, 7. "Ice hockey" is the name of a popular sport which has a rule penalizing an action called "icing": "He iced the puck causing a face-off."
8. The term "ice box" refers to both a box containing ice used for cooling foods and a refrigerator: "This ice box isn't plugged in!"
Figure 1: Example contextual word usages
1 The research described in this report is funded by the National Aeronautics and Space Administration under grant number NSC-7255. Their support is gratefully acknowledged.
Model Overview The Word Expert Parser successfully parses the sentence "The deep philosopher throws the peach pit into the deep pit," through cooperation among the appropriate word experts. Initialization of the parser consists of retrieving the experts for "the", "deep", "philosopher", "throw", "s", "over", and so forth, from a disk file and organizing them, along with data repositories called word bins, in a left to right order in the sentence level workspace. Note that three copies of the expert for "the" and two copies of each expert for "deep" and "pit" appear in the workspace. Since each expert executes as a process, 4 each process instantiation in the workspace must be put into an executable state. At this point, the parse is ready to begin. The word expert for "the" runs first, and is able to terminate immediately, creating a new concept designator (called a concept bin and participating in the concept level workspace) which will eventually hold the data about the intellectual philosopher described in the input. Next the "deep" expert runs, and since "deep" has a number of word senses, 5 it is unable to terminate (i.e., complete its discrimination task). Instead, it suspends its execution, stating the conditions upon which it should be resumed. These conditions take the form of associative trigger patterns, and are referred to as restart demons.
2 An important assumption of the word expert viewpoint is that the set of such contextual word usages is not only finite, but fairly small as well.
3 The perspective of viewing language through lexical contributions to structure and meaning has naturally led to the development of word experts for common morphemes that are not words (and even, experimentally, for punctuation). Especially important is the word expert for "-ing", which aids significantly in helping to disambiguate expressions involving gerunds or participles such as "the man eating tiger". A full discussion of this will appear in [12].
4 Although I call them "processes", word experts are actually coroutines resembling CONNIVER's generators [13], and even more so, the stack groups of the MIT LISP Machine [14].
5 It should be clear that the notion of "word sense" as used here encompasses what might more traditionally be described as "contextual word usage". Aspects of a word token's linguistic environment constitute its broadened "sense".
The "deep" expert creates .a restart demon co wake l'C up when the sense ot the nominal to its right ( l .e., "~hllosopher") becomes knoWn. The exper~ f.or "philosopher now runs, observes the co.ntrol state ot the parser, ant contributes the tact Chat One new concept refers to a person e.ngaged in the study of philosophy. As this expert terminates, the expert tot "=eep" resumes spontaneously, and, constrained by the fact chat "deep" must describe an entity that can be viewed as a person, it finally terminates successfully, contributing the fact that the person is intellectual. The "throw" expert runs next and successfully prunes away several usages of "throw" for contextua, reasons. A major reason for the semantic richness of verbs such as "throw", "cake", and "Jump", is that In context, each interacts strongly with a number of succeedin8 pre~ositions and adverbs to form distinct meaninBs, The woro expert approach easily handles this grouping together or words to torn larger word-like entities. In the particular case of verbs, the expert for a word like ."throw" simply exam.ines.i~.s rSght lex ical n.eighbor, an~ oases its oWn sense alscrtmlnet2on on the co(Rolnetlon or ~ at it .expects co find there, what It actually finds ere, an~ what this neighbor tells it (if It Soas so rat as to ask). No interesting p.article follows throw" in the current exampze, out It snoulo oe easy to conceive or th.e basic expert probes to discriminate the sense of "throw" wnen ;ol-owed by "away", "up", "out" ~ "in the towel", or other woras or wore groups, when no such word rollows "throw". as Is the case nere, its expert slmp-y waits for the existence of an entire concept to Its right, to determine if it meets any of the requirements .~hat would make the correct contextual interpretation of ' throw" different trom the expected "propel by moving ones arm" (e.g., "throw a party'.'). Before any such substantive conceptual activity takes place~ however, .t~ "S" expert ~uns arm ~ontri~uCes Its stannaro morphological information to throw "s data bin. This execution of the "s" expert does not, of course, affect "throw"' s suspended status. The "the" expert for the second "the" in the sentence runs next, and as in the previous case, creates a new con.cep~ bin to represent the da.~a about the no nina~ and des crlptlo.n, to come. Lne "peecn" expert realizes that It coulo oe either a noun or an adjective, and thus attempts what ~ call a "pairing" operation with its right neighbor. It essentially asks the expert for "pit" if the two ot them form a noun-noun pair. To determine the answer, ooth "pit" and "peach" have access to the entire model of linguistic and pragmatic knowledBe. Durtn~ this time. ~peach" is in a st.a~e called "attempting pairing" which Is nlzrerent trom the "suspended" state of the "throw" ex.~.ert. "Pit" answers back that it does pair up with "peach' (since "pit" is aware of its run-time context) and enters the "rea.dy" state. "Peach".now ned:ermines its c.orre~t sense and t;erm~netee: An.d ~nc~ only one mean%ngrul sense ~or'plt remains, the pit expert executes quickly, . t.ermlnattng with the contextually a~pro~riace "trulC pit" sense. As ic terminates, the piC. expert closes off the concept b.in In which It part~cipaces, spontaneously resumins the "throw" expert. An examination of the nature of fruit pit.a reveals that they are pergect.ly suited to propelling with ones. arm, ar~ thus, the "th.row" expert terminates successzul~y, contributing its wore| sense to its event concept bin. 
.The "lnto~ expert, runs next, opens a concept bin ~of t~pe 'setting") rot the time, location, or situation about to be described, and suspends itself. On suspension, "lnto"'s expert posts an associative restart condition that will e.nable .its re.sumptlon when a new p~cture concept ~s opened to the right. This initial action CaKes p~ace rot most prepositions. In certain cases, if the end of a sentence is reached before an appropriate expected concept is opened, an expert will take alternative action. For example, one of the "in" experts restart trigger patterns consists of control state data of Just this kind --if the end of a sentence is rear.had .and no. conceptuql object, for the sect.ing creaceo oy "In" has oeen round, the "in" expert wxl~ resume nonetheless, and create a default concept t or perform some kind of intelligent reference aeterminatlon. The sentence "The doctor is In." illustrates this point. In the current example~ the. "the" expert that executes lm.med~ately alter t_.nto"'s suspension creates the exporter.picture concept. The wor.d ex~er~..for."deep" then rune ano, as oe~ore, cannot Immedlately olscrlmlnate among Its several se.nses. ."Deep" chug suspend.s, waiting tor the expert rot the word to Its right to neap. At h.ls point, there are two experts suspended, although ~.ne control flow remalns ralrly simple, other examples exist in whlch a complex set or conceptual dependencies cause a number or exper.~s to De suspendedslmultaneously. These situations usuaA.~y resolve themes+yes wl~_h a ca §qadlns o~ expert res,-,ptlons and terminations. In our seep ~c example, "deep" ~oets expectations on the central tableau of global control state Knowledge, and waits rot "pit" to terminate • "PIt"' s expert now runs, and since thls bulletin board contains "deep"'s expectations of a ~. oI~, or printed matter, "pit" maps immediately onto a large hole in the ground. This in turn, causes both the resumption and termination of the "deep" expert as well as the closure of the concept bin to whlch the~ oelong. At the closing of the concept bin, the "into expert resumes, marks its concept as a location, and terminates. With all the word experts completed and all concept bins closed, the expert for ".'" runs and completes the parse. The concept level workspace now contains five concepts: a picture concept designating an intellectual philosopher, an event concept representing the throwing action, another picture concept describing a fruit pit which came from a peach, a setting concept representing a location, and the picture concept which describes precisely the nature of this location. Work on the mechanism to determine the schematic roles of the concepts has just begun, and is described briefl~ later. A program trace that shows the actions ot the Nora Expert Parser on the example just presented is available on request. Structure of the Model The organization of the parser centers around data repositories on two levels --the sentence level workspace contains a word bin for each word (and sub-lexical morpheme) of the input and the concept level workspace contains a concept bin (described above) for each concept referred to in the input sentence. A third level of processing, the schema level workspaee, while not yet implemented, will contain a schema for each conceptual action of the input sentence. All actions affecting the contents of these data bins are carried out by the word expert processes, one of which is associated with each word bin in the wo rkspace. 
In addition to this first order information about lexical and conceptual objects, the parser contains a central tableau of control state descriptions available to any expert that can make use of self-referential knowledge about its own processing or the states of processing of other model components. The availability of such control state information improves considerably both the performance and the psychological appeal of the model -- each word expert attempting to disambiguate its contextual usage knows precisely the progress of its neighbors and the state of convergence (or the lack thereof) of the entire parsing process. Word Experts The principal knowledge structure of the model is the word sense discrimination expert. A word expert represents the linguistic knowledge required to disambiguate the meaning of a single word in any context. Although represented computationally as coroutines, these experts differ considerably from ad hoc LISP programs and have approximately the same relation to LISP as an augmented transition network [15] grammar. 6 Just as the graphic representation of an augmented transition network demonstrates the basic control paradigm of the ATN parsing approach, a graphic representation for word experts exists which embodies their functional framework. Each word expert derives from a branching discrimination structure called a word sense discrimination network or sense net. A sense net consists of an ordered set of questions (the nodes of the network), and for each one, the set of possible answers to that question (the branches emanating from each node). Traversal of a sense network represents the process of converging on a single contextual usage of a word. The terminal nodes of a sense net represent distinct word senses of the word modeled by the network. A sense net for the word "heavy" appears in part (a) of Figure 2. Examination of this network reveals that four senses are represented -- the three adjective usages shown in Figure 1 plus the nominal sense of "thug" as in "Joe's heavy told me to beat it." Expert Representation The network representation of a word expert leaves out certain computational necessities of actually using it for parsing. A word expert has two fundamental activities. (1) An expert asks questions about the lexical and conceptual data being amassed by its neighbors, the control states of various model components, and more general issues requiring common sense or knowledge of the physical world. 7 (2) In addition, at each node an expert performs actions to affect the lexical and conceptual contents of the workspaces, the control states of itself, concept bins, and the parser as a whole, and the model's expectations.
6 An ATN without arbitrarily complex LISP computations on each arc and at each node, that is.
7 In addition to common sense knowledge of the physical world, this could include information about the plot, characters, or focus of a children's story, or, in a specialized domain such as medical diagnosis [17], highly domain specific knowledge.
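Returning to the sense-net idea just defined, a toy rendering may help: the network can be stored as a table of question nodes whose branches lead either to further nodes or to terminal senses. The node names n1 and n12 echo the paper's Figure 2, but the questions, branches, and answer source here are invented for illustration.

```python
# toy sense net for "heavy": nodes ask questions, branches lead to senses
SENSE_NET = {
    "n1": ("part-of-speech?", {"noun": "SENSE:thug", "adjective": "n12"}),
    "n12": ("right-object-viewed-as?",
            {"artistic-object": "SENSE:serious-or-emotional",
             "precipitation": "SENSE:intense-amount",
             "physical-object": "SENSE:large-physical-mass"}),
}

def discriminate(net, start, answer):
    """Traverse the net; answer(question) supplies each answer from context."""
    node = start
    while not node.startswith("SENSE:"):
        question, branches = net[node]
        node = branches[answer(question)]
    return node[len("SENSE:"):]

# usage: in "a heavy snow", the right neighbor is viewed as precipitation
context = {"part-of-speech?": "adjective",
           "right-object-viewed-as?": "precipitation"}
print(discriminate(SENSE_NET, "n1", context.__getitem__))  # intense-amount
```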
If sense discrimination by a word expert results in the knowledge that a word to its right, either not yet executed or suspended, must map to a specific sense or conceptual category, then it should constrain it to do so, thus helping it avoid unnecessary processing or fallacious reasoning. Since word experts are represented as processes, constraining an expert consists of altering the pointer to the address at which it expects to continue execution. Through its descriptive header, an expert conditions this activity and insures that it takes place without disastrous consequences. Each node in the body of the expert has a type deslgnated by a letter following the node name. either Q (question), A (action), S (suspend), or T (terminal). By tracing through the question nodes (treating the others as vacuous except for their gore pointers), a sense network for each word expert process can be derived. The graphical framework of a word expert (and thus the questions it asks) represents its principal linguistic task of word sense disamblguatlon. Each question node has a type, shown following the Q in the.node --MC tmultiple choice), C (conditional), YN (yes/no/, and PI (posslble/Imposslble). In the example expert for "heavy", node nl represents a conditional query into the state of the entire parsing process, and n?de n[2 a multiple choice question involving the conceptual nature of the word to "heavy"s right in the input sentence. b Multiple choice questions typically delve into the aslc relations among ob3ects ann actions zn the world. For example, the question asked at node n12 of the "heavy" expert is typical: "Is the object to my right better described as an artistic object a a form of precipitation, or a physical object? Action nodes in the "heavy" expert perform such tasks as determining the concept bin to which it contributes, and pqstin 8 expectations for the word to its right. In terms ot its side effects, the "heavy" expert is fairly simple. A full account of the word expert representation language will be available next year [12]. Expert Questions The basic structure of the Word Expert Parser depends principally on the role of individual word experts in affectlug.(1) each other:s actions and ~2) the neclaratlve result or computatlonal analysis. ~xperts affect each other by posting expectations on the central bulletin board, constraining each other, changing control states of model components (most notably themselves), and augmenting data. structures in. the workspeces. ° .They contribute to the conceptua£ ans ecnematlc result ot toe parse by contrlbuting object names, descrlptions~ schemata, ane other useful data to the concept level workspace. To determine exactly what contributions .to make, i.e.j the accurate ones In the particular run-tlme context at handj the experts as~ questions ot various kinds about the processe sot the model and the world at large. Four types of questions may be asked by an expert, and whereas some queries can be made in more than one way, the several question types solicit different kinds of information. Some questions requlre fairly involved inference to be answered adequately, and others demand no more than simple register lookup. This variety corresponds well, in my opinion, with human processing involved in conceptual analysis. Certain contextual clues to meaning are structural; taking advantage of them requires solel~ knowledge of the state of the parsing process (e.g., 'building a noun prase"). 
Other clues subtly present themselves through more global evidence, usually having to do with linking together high order information about the specific domain at hand. In story comprehension, this involves the plot, characters, focus of attention, and general social psychology as well as common sense knowledge about the world. Understanding texts dealing with specialized subject matter requires knowledge about that particular subject, other subjects related to it, and of course, common sense. The questions asked by a word expert in arriving at the correct contextual interpretation of a word probe sources of both kinds of information, and take different forms. Questions about the plot of a story or its characters, or common sense questions requiring spatial or temporal simulations, are best phrased as possible/impossible (or yes/no/maybe) questions. Sometimes during sense discrimination, the plausibility of some general fact leads to the pursuit of different information than its implausibility. Such situations occur with enough frequency to justify a special type of question to deal with them. The Importance of Multiple Choice Multiple choice questions comprise the central inferential component of word experts. They derive from Rieger's notion that intelligent selection among competing alternatives by relative differencing represents an important aspect of human problem solving [7]. The Word Expert Parser, unlike certain standardized tests, prohibits multiple choice questions from containing a "none of the above" choice. Thus, they demand the most "reasonable" or "consistent" choice among potentially unappealing answers. What does a child (or adult) do when faced with a sentence that seems to state an implausible proposition or reference implausible objects? He surely does his best to make sense of the sentence, no matter what it says. Depending on the context, certain intelligent and literate people create metaphorical interpretations for such sentences. The word expert approach interprets metaphor, idiom, and "normal" text with the same mechanism. When an expert temporarily suspends execution (the "heavy" expert does so at node n11), its "suspended" control state description appears on the central tableau. Control state descriptions such as "suspended", "terminated", "attempting pairing" (see above), and "ready" are posted on this bulletin board, which contains a state designation for each expert and concept in the workspace, as well as a description of the parser state as a whole. Under restricted conditions, an expert may affect the state descriptions on this tableau; an expert that has determined its nominal role may, for example, change the state of its concept (the one to which it contributes) to "bounded" or "closed", depending on whether or not all other experts participating in that concept have terminated. Word experts may post expectations on the bulletin board to facilitate handshaking between themselves and subsequently executing neighbors. In the example parse, the "deep" expert expects an entity that it can describe; by saying so in detail, it enables the "pit" expert to terminate successfully on first running, something it would not be able to do otherwise. The initial execution of a word expert must accomplish certain goals of a structural nature. If the word participates in a noun-noun pair, this must be determined; in either case, the expert must determine the concept bin to which it contributes all of its descriptive data throughout the parse.
This concept 9 could either be one that already exists in the workspace or a new one created by the expert at the time of its decision. After deciding on a concept, the principal role of a (content) word expert is to discriminate among the possibly many remaining senses of the word. Note that a good deal of this disambiguation may take place during the initial phase of concept determination. After asking enough questions to discover some piece of conceptual data, this data augments what already exists in the word's concept bin, including declarative structures put there both by itself and by the other lexical participants in that concept. The parse completes when each word expert in the workspace has terminated. At this point, the concept level workspace contains a complete conceptual interpretation of the input text. Conceptual Case Resolution Adequate conceptual parsing of input text requires a stage missing from this discussion and constituting the current phase of research -- the attachment of each picture and setting concept (bin) to the appropriate conceptual case of an event concept. Such a mechanism can be viewed in an entirely analogous fashion to the mechanisms just described for performing local disambiguation of word senses. Rather than word experts, however, the experts on this level are conceptual in nature. The concept level thus becomes the main level of activity, and a new level, call it the schema level workspace, turns into the main repository for inferred information. When a concept bin has closed, a concept expert is retrieved from a disk file and initialized. If it is an event concept, its function is to fill its conceptual cases with settings and pictures; if it is a setting or picture, it must determine its schematic role. The activity on this level, therefore, involves higher order processing than sense discrimination, but occurs in just about the same way. The ambiguities involved in mapping known concepts into conceptual case schemata appear identical to those having to do with mapping words into concepts. Discovering that the word "pit" maps in a certain context to the notion of a "fruit pit" requires the same abilities and knowledge as realizing that "the red house" maps in some context to the notion of "a location for smoking pot and listening to records". The implementation of the mechanisms to carry out this next level of inferential disambiguation has already begun. It should be quite clear that this schematic level is by no means the end of the line -- active expert-based plot following and general text understanding fit nicely into the word expert framework and constitute its logical extension. Summary and Conclusions The Word Expert Parser is a theory of organization and control for a conceptual language analyzer. The control environment is characterized by a collection of generator-like coroutines, called word experts, which cooperatively arrive at a conceptual interpretation of an input sentence. Many forms of linguistic and non-linguistic knowledge are available to these experts in performing their task, including control state knowledge and knowledge of the world, and by eliminating all but the most persistent forms of ambiguity, the parser models human processing.
9 An exception arises when an expert creates a default concept bin to represent a conceptual notion referenced in the text but to which no words in the text contribute. The automobile in "Joanie parked." is an example.
This new model of parsing claims a number of theoretical advantages: (1) Its representations of linguistic knowledge reflect the enormous redundancy in natural languages -- without this redundancy in the model, the inter-expert handshaking (seen in many forms in the example parse) would not be possible. (2) The model suggests some interesting approaches to language acquisition. Since much of a word expert's knowledge is encoded in a branching discrimination structure, adding new information about a word involves the addition of a new branch. This branch would be placed in the expert at the point where the contextual clues for disambiguating the new usage differ from those present for a known usage. (3) Idiosyncratic uses of language are easily encoded, since the word expert provides a clear way to do so. These uses are indistinguishable from other uses in their encodings in the model. (4) The parser represents a cognitively plausible model of sequential coroutine-like processing in human language understanding. The organization of linguistic knowledge around the word, rather than the rewrite rule, motivates interesting conjectures about the flow of control in a human language understander.
8 The blackboard of the Hearsay speech understanding system [16] is analogous to the entire workspace of the parser, including the word bins, concept bins, and bulletin board.
[category (PA . n1)]
[sense <descriptors (LARGE-PHYSICAL-MASS . nt1) (INTENSE-QUANTITY . nt3) (SERIOUS-OR-EMOTIONAL . nt2)>]>
<start n0>
<expert [n0:A (REFUSE) (NEXT n1)]
[n1:Q C parser-state t (open-picture . n2) ...]
[n5:A (CONCEPT new PICTURE) ... (NEXT n10)]
[n10:A (EXPECT (rw) view/PP PRECIPITATION) (EXPECT (rw) view/PP ART) (EXPECT (rw) view/PP PHYSOBJ) (NEXT n11)]
[n11:S wait-for-right-word (RESUME (trigger 'expert-state (rw) 'terminated)) (RESUME first) (NEXT n12)]
[n12:Q MC view/PP (rw) (art . nt2) (precipitation . nt3) (physobj . nt1)]
[nt1:T PA LARGE-PHYSICAL-MASS]
[nt2:T PA SERIOUS-OR-EMOTIONAL]
[nt3:T PA INTENSE-AMOUNT]>
(b) Process representation of the "heavy" expert. Figure 2: Word expert representation
The explicit representation of control state and structural information facilitates its use in parsing. Conditional and yes/no questions require simple lookup operations in the PLANNER-like associative data base [18] that stores the workspace data. Semantic networks, such as KRL [19], with multiple perspectives, procedural attachment, and intelligent description matching, must be used to represent in a uniform way both general world knowledge and knowledge acquired through textual interpretation. In KRL terms, a multiple choice question such as "Is the object RAIN more like ARTISTIC-OBJECT, PHYSICAL-OBJECT, or PRECIPITATION?" must be answered by appeal to the units representing the four notions involved. Clearly, RAIN can be viewed as a PHYSICAL-OBJECT; much less so as an ARTISTIC-OBJECT. However, in almost all contexts, RAIN is closest conceptually to PRECIPITATION. Thus, this should be the answer. This multiple choice mechanism has many uses in conceptual parsing and full-scale language comprehension as well as in general problem solving [20].
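The relative-differencing idea behind such multiple choice questions can be illustrated with a toy sketch (mine, not from the paper): candidate units are compared by feature overlap over a small hand-made "semantic network", and the closest choice wins, with "none of the above" never an option. The feature sets below are invented purely for illustration.

```python
# toy "semantic network": each unit lists the features it is known for
UNITS = {
    "RAIN": {"natural", "weather", "physical", "wet"},
    "ARTISTIC-OBJECT": {"man-made", "aesthetic"},
    "PHYSICAL-OBJECT": {"physical", "tangible"},
    "PRECIPITATION": {"natural", "weather", "physical", "wet"},
}

def multiple_choice(unit, choices):
    """Pick the choice most similar to unit; no 'none of the above' allowed."""
    target = UNITS[unit]
    def overlap(choice):
        other = UNITS[choice]
        return len(target & other) / len(target | other)  # Jaccard similarity
    return max(choices, key=overlap)

print(multiple_choice("RAIN",
                      ["ARTISTIC-OBJECT", "PHYSICAL-OBJECT", "PRECIPITATION"]))
# -> PRECIPITATION
```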
That any fragment of text (or other human sensual input) has some interpretation from the point of view of a particular reader constitutes a fundamental underlying idea of the word expert approach. Expert Side Effects Word experts take two kinds of actions -- actions explicitly intended to affect sense discrimination by other experts, and actions to augment the conceptual information that constitutes the result of a parse. Each path through a sense network represents a distinct usage of the modeled word, and at each step of the way, the word expert must update the model to reflect the state of its processing and the extent of its knowledge. The "heavy" expert of Figure 2(b) exhibits several of these actions. Nodes n2 and n5 of this word expert process represent "heavy"'s decision about the concept bin (i.e., conceptual notion) in which it participates. In the first case, it decides to contribute to the same bin as its left neighbor; in the second, it creates a new one, eventually to contain the conceptual data provided by itself and perhaps other experts to its right. At node n10, "heavy" posts its expectations regarding the word to its right on the central bulletin board.
[8] Rieger, C., Viewing Parsing as Word Sense Discrimination, A Survey of Linguistic Science, Dingwall (ed.), Greylock Pub.
[9] Rieger, C., Five Aspects of a Full Scale Story Comprehension Model, Associative Networks -- The Representation and Use of Knowledge in Computers, Findler (ed.), Academic Press, 1979.
[10] Rieger, C., An Organization of Knowledge for Problem Solving and Language Comprehension, Artificial Intelligence, vol. 7, no. 2, 1976.
[11] Small, S., Conceptual Language Analysis for Story Comprehension, Technical Report 663, University of Maryland, 1978.
[12] Small, S., Word Experts for Conceptual Language Analysis, Ph.D. Thesis (forthcoming), University of Maryland, 1980.
[13] McDermott, D. and G. Sussman, The Conniver Reference Manual, AI-Memo 259a, Massachusetts Institute of Technology, 1974.
[14] Lisp Machine Group, LISP Machine Progress Report, AI-Memo 444, Massachusetts Institute of Technology, 1977.
[15] Woods, W., Transition Network Grammars for Natural Language Analysis, Communications of the ACM, vol. 13, no. 10, 1970.
[16] Erman, L. and V. Lesser, A Multi-Level Organization for Problem Solving using Many, Diverse, Cooperating Sources of Knowledge, Proceedings of the 4th International Joint Conference on Artificial Intelligence, 1975.
[17] Reggia, J., Representing and Using Medical Knowledge for the Neurological Localization Problem (First Report of the NEUREX Project), Technical Report 695, University of Maryland, 1978.
[18] Sussman, G., T. Winograd, and E. Charniak, Micro-Planner Reference Manual, AI-Memo 205a, Massachusetts Institute of Technology, 1971.
[19] Bobrow, D. and T. Winograd, An Overview of KRL, A Knowledge Representation Language, Cognitive Science, vol. 1, no. 1, 1977.
[20] London, P., Dependency Networks as a Representation for Modeling in General Problem Solvers, Technical Report 698, University of Maryland, 1978.
ACKNOWLEDGEMENTS I would like to thank Chuck Rieger for his insights, encouragement, and general manner. Many of the ideas presented here Chuck has graciously allowed me to steal.
In addition, I thank the following people for helping me with this work through their comments and suggestions: Phil Agre, Milt Grinberg, Phil London, Jim Reggia, Hanan Samet, Randy Trigg, Rich Wood, and Pamela Zave.

REFERENCES

[1] Rieger, C. and S. Small, Word Expert Parsing, Proceedings of the 6th International Joint Conference on Artificial Intelligence, 1979.
[2] Riesbeck, C., Computational Understanding: Analysis of Sentences and Context, AI-Memo 238, Stanford University, 1974.
[3] Riesbeck, C. and R. Schank, Comprehension by Computer: Expectation-based Analysis of Sentences in Context, Research Report 78, Yale University, 1976.
[4] Schank, R., Conceptual Dependency: A Theory of Natural Language Understanding, Cognitive Psychology, vol. 3, no. 4, 1972.
[5] Wilks, Y., Making Preferences More Active, Artificial Intelligence, vol. 11, no. 3, 1978.
[6] Marcus, M., Capturing Linguistic Generalizations in a Parser for English, Proceedings of the 2nd National Conference of the Canadian Society for Computational Studies of Intelligence, 1978.
[7] Rieger, C., The Importance of Multiple Choice, Proceedings of the 2nd Conference on Theoretical Issues in Natural Language Processing, 1978.
[8] Rieger, C., Viewing Parsing as Word Sense Discrimination, A Survey of Linguistic Science, Dingwall (ed.), Greylock Pub.
[9] Rieger, C., Five Aspects of a Full Scale Story Comprehension Model, Associative Networks -- The Representation and Use of Knowledge in Computers, Findler (ed.), Academic Press, 1979.
[10] Rieger, C., An Organization of Knowledge for Problem Solving and Language Comprehension, Artificial Intelligence, vol. 7, no. 2, 1976.
[11] Small, S., Conceptual Language Analysis for Story Comprehension, Technical Report 663, University of Maryland, 1978.
[12] Small, S., Word Experts for Conceptual Language Analysis, Ph.D. Thesis (forthcoming), University of Maryland, 1980.
[13] McDermott, D. and G. Sussman, The Conniver Reference Manual, AI-Memo 259a, Massachusetts Institute of Technology, 1974.
[14] Lisp Machine Group, LISP Machine Progress Report, AI-Memo 444, Massachusetts Institute of Technology, 1977.
[15] Woods, W., Transition Network Grammars for Natural Language Analysis, Communications of the ACM, vol. 13, no. 10, 1970.
[16] Erman, L. and V. Lesser, A Multi-Level Organization for Problem Solving Using Many, Diverse, Cooperating Sources of Knowledge, Proceedings of the 4th International Joint Conference on Artificial Intelligence, 1975.
[17] Reggia, J., Representing and Using Medical Knowledge for the Neurological Localization Problem (First Report of the NEUREX Project), Technical Report 695, University of Maryland, 1978.
[18] Sussman, G., T. Winograd, and E. Charniak, Micro-Planner Reference Manual, AI-Memo 203a, Massachusetts Institute of Technology, 1971.
[19] Bobrow, D. and T. Winograd, An Overview of KRL, a Knowledge Representation Language, Cognitive Science, vol. 1, no. 1, 1977.
[20] London, P., Dependency Networks as a Representation for Modeling in General Problem Solvers, Technical Report 698, University of Maryland, 1978.
21,821,146
Combining Segmenter and Chunker for Chinese Word Segmentation
Our proposed method is to use a Hidden Markov Model-based word segmenter and a Support Vector Machine-based chunker for Chinese word segmentation. Firstly, input sentences are analyzed by the Hidden Markov Model-based word segmenter. The word segmenter produces n-best word candidates together with some class information and confidence measures. Secondly, the extracted words are broken into character units and each character is annotated with the possible word class and the position in the word, which are then used as the features for the chunker. Finally, the Support Vector Machine-based chunker brings character units together into words so as to determine the word boundaries.
[ 3446853, 725590 ]
Combining Segmenter and Chunker for Chinese Word Segmentation

Masayuki Asahara (masayu-a@is.aist-nara.ac.jp), Chooi Ling Goh (ling-g@is.aist-nara.ac.jp), Xiaojie Wang (xiaoji-w@is.aist-nara.ac.jp), and Yuji Matsumoto, Graduate School of Information Science, Nara Institute of Science and Technology, Japan

Our proposed method is to use a Hidden Markov Model-based word segmenter and a Support Vector Machine-based chunker for Chinese word segmentation. Firstly, input sentences are analyzed by the Hidden Markov Model-based word segmenter. The word segmenter produces n-best word candidates together with some class information and confidence measures. Secondly, the extracted words are broken into character units and each character is annotated with the possible word class and the position in the word, which are then used as the features for the chunker. Finally, the Support Vector Machine-based chunker brings character units together into words so as to determine the word boundaries.

Methods

We participate in the closed test for all four data sets of the Chinese Word Segmentation Bakeoff. Our method is based on the following two steps:

1. The input sentence is segmented into a word sequence by the Hidden Markov Model-based word segmenter. The segmenter assigns a word class with a confidence measure to each word at the hidden states. The model is trained with the Baum-Welch algorithm.

2. Each character in the sentence is annotated with a word class tag and its position in the word. The n-best word candidates derived from the word segmenter are also extracted as features. A Support Vector Machine-based chunker corrects the errors made by the segmenter using the extracted features.

We will describe each of these steps in more detail.

Hidden Markov Model-based Word Segmenter

Our word segmenter is based on a Hidden Markov Model (HMM). We first decide the number of hidden states (classes) and assume that each word can belong to all the classes with some probability. The problem is defined as a joint search for the sequence of word classes C = c1, ..., cn and the word sequence W = w1, ..., wn. The target is to find the W and C for a given input S that maximize the following probability:

    arg max_{W,C} P(W|C) P(C)

We assume that the word probability P(W|C) is constrained only by each word's class, and that the class probability P(C) is constrained only by the class of the preceding word. These probabilities are estimated by the Baum-Welch algorithm using the training material (see Manning and Schütze, 1999). The learning process is based on the Baum-Welch algorithm and is the same as the well-known use of HMMs for the part-of-speech tagging problem, except that the number of states is arbitrarily determined and the initial probabilities are randomly assigned in our model.
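To make the decoding step concrete, here is a minimal sketch of the search for the best word/class sequence under P(W|C)P(C). All probabilities and the tiny lexicon below are toy placeholders, not the real parameters, which are estimated with Baum-Welch as described above:

    import math

    # Toy parameters (assumptions for illustration only).
    # emit[c][w] = P(w | class c); trans[cp][c] = P(c | previous class cp),
    # where cp = None stands for the sentence start, i.e. P(c).
    emit = {0: {"ab": 0.2, "a": 0.1}, 1: {"c": 0.5, "bc": 0.3}}
    trans = {None: {0: 0.5, 1: 0.5},
             0: {0: 0.5, 1: 0.5},
             1: {0: 0.4, 1: 0.6}}

    def segment(sent, max_len=4):
        # best[(i, c)]: best log-prob of an analysis of sent[:i] whose last
        # word has class c, plus a backpointer for recovering the words.
        best = {(0, None): (0.0, None)}
        for i in range(1, len(sent) + 1):
            for j in range(max(0, i - max_len), i):
                word = sent[j:i]
                for (k, cp), (score, _) in list(best.items()):
                    if k != j:
                        continue
                    for c, words in emit.items():
                        if word in words:
                            s = score + math.log(trans[cp][c]) + math.log(words[word])
                            if s > best.get((i, c), (-math.inf, None))[0]:
                                best[(i, c)] = (s, (j, cp, word))
        ends = [(s, c) for (i, c), (s, _) in best.items() if i == len(sent)]
        if not ends:
            return None
        _, c = max(ends)
        out, i = [], len(sent)
        while i > 0:  # follow backpointers from the end of the sentence
            _, (j, cp, word) = best[(i, c)]
            out.append(word)
            i, c = j, cp
        return out[::-1]

    print(segment("abc"))   # -> ['ab', 'c']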
Correction by Support Vector Machine-based Chunker

While the HMM-based word segmenter achieves good accuracy for known words, it cannot identify compound words and out-of-vocabulary words. Therefore, we introduce a Support Vector Machine (SVM)-based chunker (Kudo and Matsumoto, 2001) to cover the errors made by the segmenter. The SVM-based chunker re-assigns new word boundaries to the output of the segmenter. An SVM (Vapnik, 1998) is a binary classifier. Suppose we have a set of training data for a binary classification problem: (x1, y1), ..., (xN, yN), where xi ∈ R^n is the feature vector of the i-th sample in the training data and yi ∈ {+1, −1} is the label of the sample. The goal is to find a decision function which accurately predicts y for an unseen x. An SVM classifier gives a decision function f(x) for an input vector x, where

    f(x) = sign( Σ_{z_i ∈ SV} α_i y_i K(x, z_i) + b ).

f(x) = +1 means that x is a positive member, and f(x) = −1 means that x is a negative member. The vectors z_i are called support vectors; they receive a non-zero weight α_i. The support vectors and the parameters are determined by solving a quadratic programming problem. K(x, z) is a kernel function which maps vectors into a higher-dimensional space. We use a polynomial kernel of degree 2, given by K(x, z) = (1 + x · z)^2.

The SVM classifier determines the position tag of each character. We introduce the word class tag, generated by the word segmenter, as a feature. Since we perform chunking over character units, the features used by the classifier are the information attached to each character unit. The training data for our SVM-based chunker are constructed from the output of the HMM-based word segmenter defined in the previous section. In the current setting, the HMM produces all the possible tags (class labels) for each word within a predefined probability bound. All the words in the output are then segmented into characters, and each character is tagged with pairs of a word class and a position tag. For example, in the paired tag "0-B", "0" is the class label of the word which the character belongs to and "B" indicates the character's position in the word. The number of classes is determined in advance of the HMM learning. The position tag is one of the following four tags (S/B/E/I): S means a single-character word; B is the first character in a multi-character word; E is the last character in a multi-character word; I is an intermediate character in a multi-character word longer than 2 characters. As shown in Figure 1, we set the HMM-based word segmenter to produce the classes of the n-best word candidates in order to take into account multiple possibilities for the word boundaries. The correct word boundaries can be defined by assigning one of two kinds of tags to each character; see the rightmost column of Figure 1, named "Chunker Outputs". The label "B" in this column shows that the character is the first character of a correct word, and "I" shows that the character is any other part of a word. This means that the positions preceding "B" tags are the word boundaries. These two tags correspond to the two classes of the SVM chunker. In the training (and test) phase, we use a window of two characters in each direction to learn (and estimate) the class of a character. For example, the shadowed parts in Figure 1 are used as the features to learn (or estimate) the word boundary tag "I" for the target character. (Figure 1: The Extracted Features for the Chunker.)
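The character-level representation just described can be sketched as follows. The toy segmentation, class labels and window size are illustrative only; in the actual system the features come from the n-best outputs of the HMM segmenter:

    def char_tags(words, classes):
        """Per-character (class, position) tags from one segmenter analysis."""
        tags = []
        for word, c in zip(words, classes):
            if len(word) == 1:
                tags.append((word, f"{c}-S"))
            else:
                tags.append((word[0], f"{c}-B"))
                tags.extend((ch, f"{c}-I") for ch in word[1:-1])
                tags.append((word[-1], f"{c}-E"))
        return tags

    def windows(tags, size=2):
        """[-2,+2] windows of (char, tag) pairs, the chunker's features."""
        pad = [("<pad>", "<pad>")] * size
        padded = pad + tags + pad
        return [padded[i:i + 2 * size + 1] for i in range(len(tags))]

    chars = char_tags(["ab", "c"], [0, 1])   # toy words and HMM classes
    for w in windows(chars):
        print(w)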
Model Validation

To find out the best setting for learning, we would like to determine "the number of word classes" and "the depth of n-best word candidates" by using some sort of confidence measure. We perform validation experiments for these two parameters using the training material provided.

Validation Tests for the HMM-based Word Segmenter

The first validation experiment determines "the number of word classes" of the HMM. 80% of the material is used for HMM training, and the other 20% is used as the validation set. We test two settings for the number of classes: 5 and 10. The results are shown in Table 1. In most cases, models perform slightly better as the number of classes increases. When the corpus is large, as with the Academia Sinica data, this tendency becomes more significant. Although the Baum-Welch algorithm is known to be very sensitive to the initialization of the classes, we randomly assigned the initial classes without making much effort. There are two reasons: (1) since the word segmenter outputs are used as clues for the chunker in our method, we only need some consistent class annotations; (2) the initial classes did not affect word segmentation accuracy in our pilot experiments.

Validation Tests for the SVM-based Chunker

The second validation test is for the chunking model, to determine both "the number of word classes" and "the depth of the n-best candidates". 80% of the material is used for HMM training, another 10% for chunking model training, and the last 10% for the validation test. The results are shown in Tables 2, 3 and 4. Since the training of this model is time- and resource-consuming, the Academia Sinica data, being very large, could not get enough time to finish the validation test. The results show that the chunker actually improves word segmentation accuracy compared with the output of the HMM word segmenter for these three data sets. The segmentation errors made by the word segmenter for compound words and unknown words are corrected. The improvement on the Chinese Treebank (CTB) data set is significant, because the data set contains many compound words. There is no significant difference in the results between the different depths of n-best answers. Still, we choose the best model among them for the test materials. If we need a faster analyzer, we should employ only the best answer of the word segmentation. For the HMM, a larger number of classes tends to give better accuracy than a smaller one. However, for the chunking model, the result is the other way round: the model with the smaller number of classes gets slightly better accuracy. So there should be a trade-off between smaller and larger numbers of classes.

Final Models for the Test Material

For the final models, 80% of the training material is used for HMM training and 100% of the material is used for chunking model training. The parameters, namely "the number of word classes" and "the depth of n-best word candidates", are determined by the validation tests described in Section 2. While there is no significant difference between the depths of n-best answers, we choose the best model among them for testing. The parameters are shown in Table 7. We cannot build the model using all of the original Academia Sinica data because of its large size. Therefore, we use 80% of the data for HMM training (5 classes) and only 10% for chunking model training (with only the best candidates).

Throughput Speeds

As described, our system is based on three modules: the HMM-based word segmenter, the feature extractor, and the SVM-based chunker. The word segmenter is built on ChaSen (written in C/C++) (Matsumoto et al., 2003), adapted for GB/Big5 encodings. The feature extractor is written in Perl. The SVM-based chunker is built on YamCha (written in C++) (Kudo and Matsumoto, 2001). Table 5 shows the speeds (see footnote 1) of the three modules individually and of the total system.
"# of words" means the size of the word segmenter lexicon. Note that, if a word belongs to more than one class, we regard them as different words in our definition. "# of SV" means the number of support vectors in the chunker. The total system speed depends highly on that of the chunker. It is known that the speed of SVM classifiers depends on the number of support vectors and the number of features. Conclusion We presented our method for Chinese Word Segmentation Bakeoff in 2nd SIGHAN Workshop. The results for the test materials are shown in Table 6. The proposed method is purely corpus-based statistical/machine learning method. Although we did not incorporate any heuristic rules (e.g. part-of-speeches, functional words and concatenation for numbers) into the model, the method achieved considerable accuracy for the word segmentation task. Table 1 : 1Validation Results for HMM Data # of classes Rec. Prec. F AS 5 0.845 0.768 0.804 AS 10 0.900 0.857 0.878 CTB 5 0.909 0.844 0.875 CTB 10 0.912 0.848 0.879 HK 5 0.867 0.742 0.799 HK 10 0.867 0.741 0.799 PK 5 0.942 0.902 0.921 PK 10 0.944 0.905 0.924 Table 2 : 2Validation Results (CTB) for Chunking # of classes n-bestRec. Prec. F 5 1 0.957 0.930 0.943 5 2 0.957 0.931 0.944 5 3 0.957 0.930 0.943 5 4 0.957 0.930 0.943 10 1 0.956 0.929 0.943 10 2 0.957 0.928 0.942 10 3 0.956 0.929 0.942 10 4 0.955 0.928 0.941 Table 3 : 3Validation Results (HK) for Chunking # of classes n-best Rec. Prec. F 5 1 0.853 0.793 0.822 5 2 0.859 0.799 0.828 5 3 0.859 0.799 0.828 5 4 0.859 0.800 0.828 10 1 0.856 0.793 0.823 10 2 0.858 0.797 0.826 10 3 0.857 0.796 0.826 10 4 0.858 0.797 0.826 Table 4 : 4Validation Results (PK) for Chunking # of classes n-best Rec. Prec. F 5 1 0.960 0.934 0.947 5 2 0.961 0.935 0.948 5 3 0.962 0.936 0.949 5 4 0.962 0.935 0.948 10 1 0.961 0.932 0.946 10 2 0.962 0.935 0.948 10 3 0.961 0.934 0.947 10 4 0.961 0.934 0.947 improvement in Chinese Treebank (CTB) data set is sig- nificant, because the data set contains many compound words. Table 7 : 7The Models for the Test Material -with respect to F-Measure in Our Validation TestData # of classes n-best F AS 5 1 N/A CTB 5 2 0.943 HK 5 4 0.828 PK 5 3 0.948 Table 5 : 5Throughput Speeds (characters per second) Data Word Seg. (# of words) Fea. Ext.(n-best) Chunker (# of SV) Total Speed Table 6 : 6Results for the Test Materials Data T. Rec. T. Prec.F OOV Rec. IV Rec. Ranking AS 0.944 0.945 0.945 0.574 0.952 3rd/6 CTB 0.852 0.807 0.829 0.412 0.949 8th/10 HK 0.940 0.908 0.924 0.415 0.980 5th/6 PK 0.933 0.916 0.924 0.357 0.975 2nd/4 The throughput speeds are measured on a machine: Intel(R) Xeon(TM) CPU 2.80GHz × 2, Memory 4GB, RedHat Linux 9. AcknowledgmentsWe thank Mr. Taku Kudo of NAIST for his development of the SVM-based chunker YamCha. Chunking with Support Vector Machines. T Kudo, Y Matsumoto, Proc. of NAACL. of NAACLT. Kudo and Y. Matsumoto. 2001. Chunking with Sup- port Vector Machines. In Proc. of NAACL 2001, pages 192-199. Foundation of Statistical Natural Language Processing. C D Manning, H Schütze, Markov Models. C. D. Manning and H. Schütze. 1999. Foundation of Statistical Natural Language Processing. Chapter 9. Markov Models, pages 317-340. Morphological Analyzer ChaSen-2.3.0 Users Manual Tech. Y Matsumoto, A Kitauchi, T Yamashita, Y Hirano, K Takaoka, M Asahara, JapanNara Institute of Science and TechnologyReportY. Matsumoto, A. Kitauchi, T. Yamashita, Y. Hirano, K. Takaoka and M. Asahara 2003. Morphological Ana- lyzer ChaSen-2.3.0 Users Manual Tech. Report. 
Ramshaw, L. A. and Marcus, M. P. 1995. Text Chunking Using Transformation-Based Learning. In Proc. of the 3rd Workshop on Very Large Corpora, pages 83-94.
Vapnik, V. N. 1998. Statistical Learning Theory. A Wiley-Interscience Publication.
213,878,997
[]
Where are we in Named Entity Recognition from Speech?

Antoine Caubrière (antoine.caubriere@univ-lemans.fr), LIUM, Le Mans University, France; Sophie Rosset (sophie.rosset@limsi.fr), LIMSI, University of Paris-Saclay, CNRS, France; Yannick Estève (yannick.esteve@univ-avignon.fr), LIA, Avignon University, France; Antoine Laurent (antoine.laurent@univ-lemans.fr), LIUM, Le Mans University, France; Emmanuel Morin (emmanuel.morin@univ-nantes.fr), LS2N, University of Nantes, CNRS, France

Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020), Marseille, May 2020.

Keywords: Named Entity Recognition, Automatic Speech Recognition, Tree-structured Named Entity, End-to-End

Named entity recognition (NER) from speech is usually performed through a pipeline that consists in (i) processing the audio with an automatic speech recognition (ASR) system and (ii) applying a NER system to the ASR outputs. The latest data available for named entity extraction from speech in French were produced during the ETAPE evaluation campaign in 2012. Since the publication of ETAPE's campaign results, major improvements have been made to NER and ASR systems, especially with the development of neural approaches for both of these components. In addition, recent studies have shown the capability of end-to-end (E2E) approaches for NER/SLU tasks. In this paper, we propose a study of the improvements made in speech recognition and named entity recognition for pipeline approaches. For this type of system, we propose an original 3-pass approach. We also explore the capability of an E2E system to do structured NER. Finally, we compare the performance of ETAPE's systems (state-of-the-art systems in 2012) with the performance obtained using current technologies. The results show the interest of the E2E approach, which however remains below an updated pipeline approach.

Introduction

Named entity recognition seeks to locate and classify named entity mentions in unstructured text into pre-defined categories (such as person names, organizations, locations, ...). The Quaero project initiated an extended definition of named entities for French data. This extended version has a multilevel tree structure, where base entities are combined to define more complex ones. With the extended definition, named entity recognition consists in the detection, the classification and the decomposition of the entities. This new definition was used for the French evaluation campaign ETAPE (Galibert et al., 2014). Since the publication of the ETAPE results in 2012, no new work has been published, to the best of our knowledge, on named entity recognition from speech for Quaero-like tree-structured French data. Tree-structured named entities cannot be tackled as a simple sequence labeling task. At the time of the ETAPE campaign, state-of-the-art work relied on multiple processing steps before rebuilding a tree structure. Conditional Random Fields (Lafferty et al., 2001) (CRF) are at the core of these previous sequence labeling approaches. Some approaches (Dinarelli and Rosset, 2012; Dinarelli and Rosset, 2011) used a Probabilistic Context-Free Grammar (Johnson, 1998) (PCFG) in addition to CRFs to implement a cascade model: the CRF was trained on component information and the PCFG was used to predict the whole entity tree. The ETAPE winning NER system (Raymond, 2013) only used CRF models, with one model per base entity.
Most of the typical approaches to named entity recognition from speech follow a two-step pipeline, with first an ASR system and then a NER system applied to the automatic transcriptions produced by the ASR system. In this configuration, the NER component must deal with an imperfect transcription of speech. As a result, the quality of automatic transcriptions has a major impact on NER performance (Ben Jannet et al., 2015). In 2012, HMM-GMM implementations were still the state-of-the-art approach for ASR technologies. Since then, the great contribution of neural approaches to both NER and ASR tasks has been demonstrated. Recent studies (Lample et al., 2016; Ma and Hovy, 2016) improve NER accuracy by using a combination of bidirectional Long Short-Term Memory (bLSTM) and CRF layers. Other studies (Tomashenko et al., 2016) are based on a combination of HMMs and Deep Neural Networks (DNN) to reach state-of-the-art ASR performance. Lately, some E2E approaches for named entity recognition from speech have been proposed in (Ghannay et al., 2018). In that work, the E2E systems learn an alignment between audio and manual transcriptions enriched with NEs, without the tree structure. Other works use an end-to-end approach to map speech directly to intent, instead of mapping speech to words and then words to intent (Lugosch et al., 2019). These works show the growing interest in E2E approaches for this type of task. In this paper, we propose a study of recent improvements for NER in the scope of the ETAPE campaign. We compare classical pipeline approaches with updated components and E2E approaches trained with two kinds of strategies. The first contribution of this paper is a 3-pass implementation to tackle tree-structured named entity recognition. This 3-pass implementation consists in splitting the tree-structured scheme of named entity annotation into 3 parts, to allow classical sequential labeling of each part before rebuilding the complex structure. The second contribution is an application of an E2E approach to tree-structured named entity recognition. It consists in training a system that learns the alignment between audio and textual transcriptions enriched with the structured named entities. After a description of the Quaero tree-structured named entity task (Section 2), we describe our 3-pass implementation, our state-of-the-art NER and ASR components, and our E2E implementation (Sections 3 and 4). Data sets (Section 5) and experimental results and analyses (Section 6) are then presented, followed by a conclusion (Section 7).

Task definition

This study focuses on tree-structured named entities following the Quaero guideline (Rosset et al., 2011). This guideline allows annotation according to 8 main types of named entities: amount, event, func, loc, org, pers, prod and time. The annotation uses sub-types to set up a hierarchy of named entities in order to better describe the concepts. The final annotation is necessarily a leaf of the hierarchical tree, with each annotation node separated by a period, for example loc.add.phys, the physical address of a place. With types and sub-types, there are 39 possible entity types in the Quaero annotation guideline. Also, in order to decompose the concepts, named entities are annotated with components; there are 28 possible components in the Quaero annotation guideline. The component is the smallest annotated element. Each word located inside a named entity needs to be annotated with components, except for some articles and linking words.
Most of the components depend on the named entity types (e.g., "day" and "week", which refer to the type "time"), but some are crosscutting (e.g., "kind" and "qualifier", which can be located inside all named entity types). Finally, annotations have a tree structure: a named entity can be composed of components and other named entities, themselves composed of components and named entities, without nesting limit. For example, the sentence "la mairie de paris" can be annotated as "la <org.adm <kind mairie > de <loc.adm.town <name paris > > >", where org.adm and loc.adm.town are named entity types with sub-types and kind and name are components. With the Quaero definition of named entities, NER consists in entity detection, classification and decomposition. Since this definition was used for the French evaluation campaign ETAPE, the task in this study consists in Quaero named entity extraction from speech.

Pipeline systems

3-pass implementation

Our NER systems use the standard BIO2 format (Sang and Veenstra, 1999). This standard consists of a column file with first a words column and then a labels column. There is one word/label pair per line, and two different sentences are separated by an empty line. The label of a word corresponds to the named entity concept in which the word is located. This label is prefixed by "B-" or "I-" depending on the position of the word in the concept: "B-" (Begin) is used to prefix the label of the first word and "I-" (Inside) those of all the others. "O" (Outside) is the label used for words that are not inside a concept. Due to the structure of the annotation, words are most of the time inside more than one concept. Consequently, multiple labels are often related to a word, and a single sequence labeling system cannot manage more than one prediction per word. Label concatenation can handle this problem by reducing all the labels related to a word to a single one. Figure 1 illustrates an example of this concatenation. (Figure 1: Transformation example of a tree-structured named entity sequence into BIO format. This sentence means in English "the town hall of paris".) The label concatenation induces a dramatic increase in the number of predictable outputs for a classical sequence labeling approach: with this concatenation, the number grows to around 1,690 predictable tags. It also induces a large annotation sparsity. These issues motivated us to split the BIO annotation into different levels. Since named entities are necessarily decomposed into components, three observations can be made. First, the root of the tree structure is necessarily a named entity type. Second, the leaves of the tree structure are mainly components. Third, annotations between the leaves and the root of this structure are a mixture of types and components. Based on these observations, we split the concatenated BIO annotation into three different levels, as sketched below. The first level contains the annotations furthest from the word level. These annotations are the roots of the tree-structured named entities. This level is represented in green in Figure 1 and requires 96 predictable tags. The third level contains the annotations closest to the word level. These annotations are the leaves of the named entities. This level is represented in red in Figure 1 and requires 57 predictable tags. Finally, the second level contains all other annotations. These annotations are named entity types and/or components located between the root and the leaves of the named entities. This level is represented in black in Figure 1 and requires 187 predictable tags.
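The following sketch illustrates the three-way split on the Figure 1 example. The per-word label stacks are a simplified stand-in for the annotation files, and joining middle labels with "+" is our own illustrative convention, not necessarily the exact tag format used by the authors:

    def split_levels(stacks):
        """Split per-word label stacks (root -> leaf) into three tag sequences."""
        level1, level2, level3 = [], [], []
        for stack in stacks:
            if not stack:                      # word outside any entity
                level1.append("O"); level2.append("O"); level3.append("O")
                continue
            level1.append(stack[0])            # root of the tree
            level3.append(stack[-1] if len(stack) > 1 else "O")   # leaf
            middle = stack[1:-1]               # anything between root and leaf
            level2.append("+".join(middle) if middle else "O")
        return level1, level2, level3

    # "la mairie de paris" from Figure 1, as root-to-leaf label stacks:
    stacks = [[],                                          # la
              ["B-org.adm", "B-kind"],                     # mairie
              ["I-org.adm"],                               # de
              ["I-org.adm", "B-loc.adm.town", "B-name"]]   # paris
    for level in split_levels(stacks):
        print(level)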
With the annotation divided into three levels, the tree-structured NER task is tackled by three sequence labeling systems. A sequence labeling model is trained for each level. The final output of our 3-pass implementation is the concatenation of the outputs of each model, from the first level to the third. From this final output, we are able to rebuild the tree-structured annotation and transform the BIO format back into tag sequences. The sub-components of a named entity are dependent on the parent component of this entity; for example, an organisation (parent component) can contain a name (sub-component), and a time can contain an amount. In order to provide this information to our systems, the predictions from the previous levels are added as an additional input to the next levels. So, predictions from the first level are injected into the training data of the second and third levels, and predictions from the second level are injected into the training data of the third level. The 3-pass implementation is represented in Figure 2.

CRF

The NER systems developed for this work are based on CRFs (Conditional Random Fields). The models were trained using the WAPITI software (Lavergne et al., 2010). The models are based on the following set of features (a sketch of their extraction is given after the NeuroNlp2 description below):

• words and bigrams of words located in a [-2,+2] window around the target word;
• prefixes and suffixes of words located in a [-2,+2] window around the target word;
• some yes/no features, such as "Does the word start with a capital letter?" and "Does the word contain non-alphanumeric characters?".

Some models also used morpho-syntactic features extracted from the output of the TreeTagger tool. For the 3-pass models, the hypotheses provided by the previous-level models are also used. For all the models, we used the rprop algorithm during training, with a maximum of 40 iterations.

NeuroNlp2

NeuroNlp2 is an implementation of the NER system proposed in (Ma and Hovy, 2016). This system uses a neural approach to sequence labeling. It benefits from word- and character-level embeddings learned automatically by using a combination of bidirectional Long Short-Term Memory (bLSTM), convolution layers and Conditional Random Fields. A single CNN layer is used to compute the character embeddings. Then, the character embeddings are concatenated to the word embeddings and feed the bLSTM layers. Finally, the output vectors of the bLSTM are fed into the CRF layer to decode the best label sequence. In addition, dropout layers (Srivastava et al., 2014) are applied to the input and output vectors of the bLSTM and to the input vectors of the CNN. For our work, we kept all the default parameters, except that the number of bLSTM hidden layers is set to two and the number of units per hidden layer to 200.
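As a rough illustration of the windowed features listed in the CRF section above (the exact Wapiti feature templates are not reproduced in the paper, so the feature names and details below are assumptions):

    def crf_features(words, i, prev_tags=None):
        """Feature strings for word i; a sketch, not the Wapiti templates."""
        def w(k):
            return words[k] if 0 <= k < len(words) else "<pad>"
        feats = [f"w[{d}]={w(i + d)}" for d in range(-2, 3)]
        feats += [f"bi[{d}]={w(i + d)}|{w(i + d + 1)}" for d in range(-2, 2)]
        for d in range(-2, 3):
            t = w(i + d)
            feats += [f"pre{k}[{d}]={t[:k]}" for k in (1, 2, 3)]
            feats += [f"suf{k}[{d}]={t[-k:]}" for k in (1, 2, 3)]
        feats.append(f"cap={words[i][:1].isupper()}")
        feats.append(f"nonalnum={not words[i].isalnum()}")
        if prev_tags is not None:            # 3-pass: previous level's prediction
            feats.append(f"prev={prev_tags[i]}")
        return feats

    print(crf_features(["la", "mairie", "de", "paris"], 1))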
ASR System

The state-of-the-art speech recognition system for this study was built using Kaldi (Povey et al., 2011). The acoustic model is based on the lattice-free MMI, so-called "chain", model (Povey et al., 2016). We used a time-delay neural network (Peddinti et al., 2015) with discriminative training on top of it using the state-level minimum Bayes risk (sMBR) criterion (Veselý et al., 2013). A regular backoff n-gram model was estimated with SRILM, using the data presented in Section 5.2. A 2-gram decoding is performed, followed by 3-gram and 4-gram rescoring steps. The LM interpolation weights between the different data sources were optimized on the REPERE (Giraudel et al., 2012) development corpus. The vocabulary contains the 160k most frequent words in the manually transcribed corpus.

End-to-End System

In this study, we used an end-to-end (E2E) implementation based on the DeepSpeech 2 ASR system (Amodei et al., 2016). Its architecture consists of a stack of two 2D-invariant convolutional layers (CNN), five bidirectional long short-term memory (bLSTM) layers with sequence-wise batch normalization, and a final softmax layer. This system is trained with the Connectionist Temporal Classification (CTC) loss function, which allows the system to learn an alignment between an audio input and the character sequence to produce (Graves et al., 2006). Input features are sequences of log-spectrograms of power-normalized audio clips, calculated on 20ms windows. As we proposed in (Ghannay et al., 2018), output sequences consist of characters composing both the words and the named entity tags. These tags are represented by starting and ending tags placed before and after the words they cover. The NE tree structure can be represented by a succession of tags; thus, label concatenation is not required. The label sparsity issue of the BIO format is not present in the case of our E2E system, so the 3-pass implementation is not used. This system learns the alignment between audio and character sequences enriched with NE tags. For example, the sentence "la mairie de paris" for speech recognition becomes "la <org.adm <kind mairie > de <loc.adm.town <name paris > > >" for named entity recognition. In this example, "org.adm", "kind", "loc.adm.town" and "name" are four NE starting tags and ">" represents the ending tag. Notice that starting and ending tags are actually represented by single characters within the character sequence produced by the neural network: the previous example becomes "la $ & mairie > de % # paris > > >".
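The tag-to-character encoding can be sketched as follows. The character assignments follow the example above, but the full inventory of tag characters used by the authors is not specified in the text, so the mapping below is an assumption:

    # Hypothetical tag-to-character codes, consistent with "la $ & mairie >
    # de % # paris > > >"; a full system needs one character per Quaero tag.
    OPEN = {"org.adm": "$", "kind": "&", "loc.adm.town": "%", "name": "#"}

    def encode(tokens):
        """Replace opening tags by single characters; words and ">" pass through."""
        return " ".join(OPEN[t[1:]] if t.startswith("<") else t for t in tokens)

    tagged = ["la", "<org.adm", "<kind", "mairie", ">", "de",
              "<loc.adm.town", "<name", "paris", ">", ">", ">"]
    print(encode(tagged))   # la $ & mairie > de % # paris > > >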
Data

Named entity recognition. For our experiments, the data come from the French corpus ETAPE (Gravier et al., 2012). This corpus is composed of data recorded from French radio and TV stations between 2010 and 2011, from four different sources: France Inter, LCP, BFMTV, and TV8. It contains 36 hours of speech divided into three parts: training (22 hours), development (7 hours) and test (7 hours). These data have manual transcriptions and are fully manually annotated with named entity concepts. Our training data were augmented with the Quaero corpus, composed of data recorded from French radio and TV stations between 1998 and 2004: 100 hours of speech, manually transcribed and fully annotated with named entities following the Quaero annotation guideline.

Automatic speech recognition. In this study, we used several corpora (ESTER 1&2 (Galliano et al., 2009), REPERE and VERA (Goryainova et al., 2014)) for a total of around 220 hours of speech. These data are used for the acoustic model training of the Kaldi ASR system of the pipeline approach. The LM of this approach was trained using the speech transcripts augmented with several French newspapers (see section 4.2.3 in (Deléglise et al., 2009)). For the ASR parts, our pipeline system and our E2E system use the same data sets, except for the speech of the ETAPE training set, which is used only with our E2E approach.

Experiments

All our experiments are evaluated on the ETAPE test set with the Slot Error Rate (SER) metric (Makhoul et al., 1999), defined as

    SER = (α1·St + α2·Sb + α3·Sbt + β·D + γ·I) / R    (1)

where:
• St: the number of slot type substitutions
• Sb: the number of slot boundary substitutions
• Sbt: the number of combined slot boundary and type substitutions
• D / I: the number of slot deletions / insertions
• R: the number of reference slots

A slot is defined as an annotated text segment with start/end boundaries and a NE type. α1, α2, α3, β and γ are the weights assigned to each type of error; here, α3, β and γ are set to 1, and α1 and α2 are set to 0.5. The best NER system of the ETAPE campaign (Raymond, 2013) was made of 68 different binary CRF models, one per entity type and component. This system was applied to the output of the best ASR system, and the combination reached 59.3% SER. This constitutes our baseline (System 0). In order to use the automatic transcriptions provided by the different ASR systems, the manual references of the named entities are projected onto the automatic transcriptions. Also, as the E2E system produces both words and NE concepts, we keep only the words to obtain its automatic transcriptions. To be fully comparable, we use the ETAPE evaluation and projection scripts for all our experiments.

Pipeline Experiments

In this study, the pipeline experiments were carried out on automatic transcriptions coming from two different ASR systems. We compare the results of the best ASR system of the evaluation campaign (Galibert et al., 2014) with the results of our state-of-the-art ASR system. The performance of these ASR systems is presented in Table 1; the evaluation metric used is Word Error Rate (WER). The best ASR system of the evaluation campaign is denoted ASR2012, while our state-of-the-art ASR is denoted ASR2019. The system ASR2019 is trained with all our audio data described in Section 5.2.

Table 1: Automatic speech recognition performance
ASR System   WER
ASR2012      21.8
ASR2019      16.5

The system ASR2012 reached 21.8% WER; our state-of-the-art ASR reaches 16.5% WER on the ETAPE test set. This represents a relative improvement of 24.3% in terms of WER. The NER systems were trained on manual transcriptions and then applied to the automatic transcriptions. Part-of-speech (POS) tags were used for all of these experiments. System A corresponds to a 1-pass implementation of a classical CRF approach applied to ASR2012. System B corresponds to a 3-pass implementation of the same CRF approach as System A, applied to the same automatic transcriptions. System C corresponds to the same 3-pass CRF implementation as System B, applied to the automatic transcriptions of our state-of-the-art ASR system. System D corresponds to a 3-pass implementation of our state-of-the-art NER system applied to ASR2012. Finally, System E corresponds to the combination of our state-of-the-art ASR and NER systems, with a 3-pass implementation of the NER component. The results of these systems are shown in Table 2. Our simplest system, A, reached 69.4% SER. Using the 3-pass approach in the same configuration, System B reached 59.5% SER; the 3-pass approach thus brings a 14.3% relative gain, showing its interest. The results obtained with B are close to the baseline system (+0.2%), with only 3 CRF models instead of 68. As expected, improving the quality of the automatic speech transcription has a positive impact on the SER results. This can be seen by comparing systems B and C, and also systems D and E. For a CRF NER system, results start at 59.5% SER and decrease to 55.0% (7.6% relative gain).
For a bLSTM-CRF NER system, results start at 56.1% SER and decrease to 51.1% (8.9% relative gain). The use of our state-of-the-art NER system brings another significant improvement. This can be seen by comparing systems B and D, and also C and E. With the HMM-GMM ASR system, results decrease from 59.5% SER to 55.0% (7.6% relative gain); with the HMM-DNN ASR system, results decrease from 55.0% to 51.1% SER (7.1% relative gain). Finally, the combination of our 3-pass approach with the state-of-the-art ASR and NER systems reaches the best result for tree-structured named entity recognition from speech on these data, at 51.1% SER with a pipeline approach.

End-to-End Experiments

For the E2E system training, we apply the same strategy as in our previous work to compensate for the lack of audio data with manual NE annotation (Ghannay et al., 2018). It consists of multi-task learning, with first an ASR system and then, by transfer learning, a NER system (ASR -> NER_struct). The output labels change between the ASR and NER tasks through the addition of labels for the NE tags. For the transfer learning, we keep all the model's parameters except the top (softmax) layer, which is fully reset. To train the ASR task, we use all our audio data described in Section 5.2; for the NER task, we use the data described in Section 5.1. Our previous work showed the interest of a curriculum-based transfer learning (CTL) approach for the E2E system (Caubrière et al., 2019). It consists in training the same model several times on different tasks, ordered from the most generic to the most specific. In our targeted task, a NE is composed of types and components, and components are used to decompose NE types (see Section 2). With the CTL approach, we propose to train the NER task as two different tasks: first with the NE types only, and second with the full annotation. Since the components are directly dependent on the NE types, we assume that a task with types only is more generic than a task with types and components. We thus train the learning chain ASR -> NER_types -> NER_full: first the speech recognition system, then the NER system trained with only the NE type annotations, and finally the NER system with the full annotation for the targeted task. The results of both E2E systems are reported in Table 3; the metrics and data sets used are the same as in our pipeline experiments. The results show the interest of the CTL approach for our task: by splitting the training into two different tasks, we are able to reduce the SER from 62.9% to 61.9%. With the DeepSpeech 2 implementation, it is possible to compute a beam search on the neural network outputs. We use two different word-level language models (3-gram and 4-gram) trained on the ETAPE and Quaero training sets. The results are presented in Table 4. As expected, all results are improved by the use of a language model. Applying the 3-gram LM significantly reduces the SER, from 62.9% to 57.9%; applying a 4-gram LM reduces it further, to 57.3%. Notice that the CTL approach is still useful and sets our best E2E result at 56.9% SER.

Global comparison

We report in Table 5 the results of the best pipeline system, the best E2E system, and the best system of the ETAPE campaign, our baseline.

Table 5: Reported results of the ETAPE baseline and of the best pipeline and end-to-end systems.
System                                         SER
(Sys 0) Baseline ETAPE 2012                    59.3
(E2E) ASR -> NER_types -> NER_full (4-gram)    56.9
(PIP) 3-pass - bLSTM-CRF - ASR2019             51.1

With our E2E approach, we reach a relative improvement of 4% since the publication of the ETAPE results. However, the results also show that a pipeline approach with each component updated, using our 3-pass implementation, is still better and sets the new state of the art. A comparison between the baseline and our best pipeline system shows a significant relative improvement of 13.8%. Comparing our best E2E approach with our best pipeline approach, the results show a relative improvement of 10.2% in favor of the pipeline approach.

Conclusion

This study gives an update on the NER results that can be achieved on the French ETAPE evaluation campaign. Our experiments have been carried out on pipeline and end-to-end systems. In this paper, an original 3-pass implementation is proposed for the NER component in the context of pipeline systems. By splitting the tree-structured named entity annotations into three parts, we are able to handle this task as three simple sequence labeling tasks. This approach reaches results similar to those of the best NER system of the ETAPE campaign, with only 3 CRF models instead of 68 binary models. Based on our previous work on flat named entity recognition with an E2E approach, we also propose an E2E system for structured named entity recognition. We reach the best E2E results by using our CTL approach. Comparing the best result of the ETAPE evaluation campaign with our best E2E system, the results show a relative improvement of 4%. However, this approach does not set the new state of the art, which is set by the fully updated pipeline system with our original 3-pass implementation. The experimental results show an interesting global relative improvement of 13.8% between the ETAPE results and the new state of the art.

Acknowledgements

This work is partially supported by the French National Research Agency under grant ANR-15-CE23-0025-01 (ContentCheck project) and by the RFI Atlanstic2020 RAPACE project.

Figure 2: 3-pass implementation overview.

Table 2: Pipeline experimental results
System                                   SER
Sys 0. Baseline ETAPE 2012               59.3
Sys A. 1-pass - CRF - ASR2012            69.4
Sys B. 3-pass - CRF - ASR2012            59.5
Sys C. 3-pass - CRF - ASR2019            55.0
Sys D. 3-pass - bLSTM-CRF - ASR2012      56.1
Sys E. 3-pass - bLSTM-CRF - ASR2019      51.1

Table 3: End-to-End experimental results with greedy decoding
System                          SER
ASR -> NER_struct               62.9
ASR -> NER_types -> NER_full    61.9

Table 4: End-to-End experimental results with beam search decoding
System                          LM       SER
ASR -> NER_struct               3-gram   57.9
ASR -> NER_types -> NER_full    3-gram   57.5
ASR -> NER_struct               4-gram   57.3
ASR -> NER_types -> NER_full    4-gram   56.9

Bibliographical References

Amodei, D., Ananthanarayanan, S., Anubhai, R., Bai, J., Battenberg, E., Case, C., Casper, J., Catanzaro, B., Cheng, Q., Chen, G., et al. (2016). Deep Speech 2: End-to-end speech recognition in English and Mandarin. In International Conference on Machine Learning, pages 173-182.
Ben Jannet, M. A., Galibert, O., Adda-Decker, M., and Rosset, S. (2015). How to evaluate ASR output for named entity recognition? In Interspeech, Dresden, Germany, September.
Caubrière, A., Tomashenko, N., Laurent, A., Morin, E., Camelin, N., and Estève, Y. (2019). Curriculum-based transfer learning for an effective end-to-end spoken language understanding and domain portability. In Interspeech.
Deléglise, P., Estève, Y., Meignier, S., and Merlin, T. (2009). Improvements to the LIUM French ASR system based on CMU Sphinx: what helps to significantly reduce the word error rate? In Tenth Annual Conference of the International Speech Communication Association.
Dinarelli, M. and Rosset, S. (2011). Models cascade for tree-structured named entity detection. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 1269-1278, Chiang Mai, Thailand, November. Asian Federation of Natural Language Processing.
Dinarelli, M. and Rosset, S. (2012). Tree representations in probabilistic models for extended named entity detection. In European Chapter of the Association for Computational Linguistics (EACL), pages 174-184, Avignon, France, April.
Galibert, O., Leixa, J., Adda, G., Choukri, K., and Gravier, G. (2014). The ETAPE speech processing evaluation. In Proc. of LREC, Reykjavik, Iceland. ELRA.
Galliano, S., Gravier, G., and Chaubard, L. (2009). The ESTER 2 evaluation campaign for the rich transcription of French radio broadcasts. In Tenth Annual Conference of the International Speech Communication Association.
Ghannay, S., Caubrière, A., Estève, Y., Camelin, N., Simonnet, E., Laurent, A., and Morin, E. (2018). End-to-end named entity and semantic concept extraction from speech. In 2018 IEEE Spoken Language Technology Workshop (SLT), pages 692-699. IEEE.
Giraudel, A., Carré, M., Mapelli, V., Kahn, J., Galibert, O., and Quintard, L. (2012). The REPERE corpus: a multimodal corpus for person recognition. In LREC, pages 1102-1107.
Goryainova, M., Grouin, C., Rosset, S., and Vasilescu, I. (2014). Morpho-syntactic study of errors from speech recognition system. In LREC, volume 14, pages 3050-3056.
Graves, A., Fernández, S., Gomez, F., and Schmidhuber, J. (2006). Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd International Conference on Machine Learning, pages 369-376. ACM.
Gravier, G., Adda, G., Paulsson, N., Carré, M., Giraudel, A., and Galibert, O. (2012). The ETAPE corpus for the evaluation of speech-based TV content processing in the French language. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), Istanbul, Turkey, May. European Language Resources Association (ELRA).
Grouin, C., Rosset, S., Zweigenbaum, P., Fort, K., Galibert, O., and Quintard, L. (2011). Proposal for an extension of traditional named entities: From guidelines to evaluation, an overview. In Proceedings of the Fifth Linguistic Annotation Workshop (LAW-V), pages 92-100, Portland, OR, June. Association for Computational Linguistics.
Johnson, M. (1998). PCFG models of linguistic tree representations. Computational Linguistics, 24(4):613-632.
Lafferty, J., McCallum, A., and Pereira, F. C. (2001). Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML.
Lample, G., Ballesteros, M., Subramanian, S., Kawakami, K., and Dyer, C. (2016). Neural architectures for named entity recognition. arXiv preprint arXiv:1603.01360.
Lavergne, T., Cappé, O., and Yvon, F. (2010). Practical very large scale CRFs. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 504-513.
Lugosch, L., Ravanelli, M., Ignoto, P., Tomar, V. S., and Bengio, Y. (2019). Speech model pre-training for end-to-end spoken language understanding. In Interspeech.
Ma, X. and Hovy, E. (2016). End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. arXiv preprint arXiv:1603.01354.
Makhoul, J., Kubala, F., Schwartz, R., and Weischedel, R. (1999). Performance measures for information extraction. In Proc. of DARPA Broadcast News Workshop, pages 249-252.
Peddinti, V., Povey, D., and Khudanpur, S. (2015). A time delay neural network architecture for efficient modeling of long temporal contexts. In Sixteenth Annual Conference of the International Speech Communication Association.
Povey, D., Ghoshal, A., Boulianne, G., Burget, L., Glembek, O., Goel, N., Hannemann, M., Motlicek, P., Qian, Y., Schwarz, P., et al. (2011). The Kaldi speech recognition toolkit. Technical report, IEEE Signal Processing Society.
Povey, D., Peddinti, V., Galvez, D., Ghahremani, P., Manohar, V., Na, X., Wang, Y., and Khudanpur, S. (2016). Purely sequence-trained neural networks for ASR based on lattice-free MMI. In Interspeech, pages 2751-2755.
Raymond, C. (2013). Robust tree-structured named entities recognition from speech. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing, Vancouver, Canada, May.
Rosset, S., Grouin, C., and Zweigenbaum, P. (2011). Entités nommées structurées : guide d'annotation Quaero. LIMSI-CNRS, Orsay, France.
Sang, E. F. and Veenstra, J. (1999). Representing text chunks. In Proceedings of the Ninth Conference of the European Chapter of the Association for Computational Linguistics, pages 173-179. Association for Computational Linguistics.
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2014). Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.
Tomashenko, N., Vythelingum, K., Rousseau, A., and Estève, Y. (2016). LIUM ASR systems for the 2016 Multi-Genre Broadcast Arabic challenge. In 2016 IEEE Spoken Language Technology Workshop (SLT), pages 285-291. IEEE.
Veselý, K., Ghoshal, A., Burget, L., and Povey, D. (2013). Sequence-discriminative training of deep neural networks. In Interspeech, volume 2013, pages 2345-2349.
53,083,290
Recovering Missing Characters in Old Hawaiian Writing
In contrast to the older writing system of the 19th century, modern Hawaiian orthography employs characters for long vowels and glottal stops. These extra characters account for about one-third of the phonemes in Hawaiian, so including them makes a big difference to reading comprehension and pronunciation. However, transliterating between older and newer texts is a laborious task when performed manually. We introduce two related methods to help solve this transliteration problem automatically. One approach is implemented, end-to-end, using finite state transducers (FSTs). The other is a hybrid deep learning approach, which approximately composes an FST with a recurrent neural network language model.
[ 7439240, 8355580, 7045397 ]
Recovering Missing Characters in Old Hawaiian Writing
Brendan Shillingford (brendan.shillingford@cs.ox.ac.uk, University of Oxford / DeepMind) and Oiwi Parker Jones (oiwi.parkerjones@wolfson.ox.ac.uk, University of Oxford / DeepMind). * Authors contributed equally.
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018.
Introduction
From 1834 to 1948, more than 125,000 newspaper pages were published in the Hawaiian language (Nogelmeier, 2010). Yet by 1981, many expected this once flourishing language to die (Benton, 1981). Hawaiian has since defied expectations and experienced the beginnings of a remarkable recovery (Warner, 2001; Wilson and Kamanā, 2001). However, much of the literary inheritance that is contained in the newspapers has become difficult for modern Hawaiians to read, since the newspapers were written in an orthography that failed to represent about one-third of the language's phonemes. This orthography, which we will refer to as the missionary orthography, excluded Hawaiian phonemes that did not have equivalents in American English (see Schütz, 1994), including Hawaiian's long vowels /iː eː aː oː uː/ and glottal stop /ʔ/. By contrast, the modern Hawaiian orthography, an innovation of Pukui and Elbert's Hawaiian dictionary (Pukui and Elbert, 1957), presents a nearly perfect, one-to-one mapping between graphemes and phonemes (see Appendix A.1).
The process of manual transliteration from missionary to modern Hawaiian orthography is extremely labor-intensive. Yet the cultural benefits are so great that hundreds of pages of newspaper serials have already been transliterated by hand, such as Nogelmeier's new edition of the epic tale of Hi'iakaikapoliopele, the volcano goddess's sister (Ho'oulumāhiehie, 2007). Critically important as such efforts are to the continued revitalization of this endangered language, they are still only an introduction to the material that could be translated for a modern Hawaiian audience.
In this paper, we propose to automate, or semi-automate, the transliteration of old Hawaiian texts into the modern orthography. Following a brief review of related work (Section 2), we begin by describing a dataset of modern Hawaiian (Section 3). In Section 4, we present two methods for recovering missing graphemes (and hence phonemes) from the missionary orthography. The first composes a series of weighted FSTs; the second approximately composes an FST with a recurrent neural network language model (RNNLM) using a beam search procedure. Both approaches require only modern Hawaiian texts for training, which are much more plentiful than parallel corpora.
Section 5 reports the results of our transliteration experiments using a simulated parallel corpus, as well as two 19th century newspaper articles for which we also have modern Hawaiian transcriptions. Being based on FSTs, both approaches are modular and extensible. We observe useful and promising results for both of our methods, with the best results obtained by the hybrid FST-RNNLM. These results showcase the strength of combining established hand-engineering methods with deep learning in a smaller data regime, with practical applications for an endangered language.
Related work
Many of the themes that we address relate to existing literature. For example, Hajič et al. (2000) and Scannell (2014) have written on machine translation (MT) for closely related languages and on multilingual text normalization. Though language-relatedness makes MT easier (Kolovratník et al., 2010), state-of-the-art techniques such as neural machine translation (NMT) have not performed well for languages with little data (Östling and Tiedemann, 2017). So while the Hawaiian transliteration problem could be cast as an instance of MT or of NMT, we chose to sidestep the scarcity of parallel data by not considering such approaches.
Hybrid approaches that combine expert knowledge for well-understood structures with deep learning for data-plentiful subproblems offer rich opportunities for data-efficient modelling. Prior work has combined FSTs with RNNs, although not using the approximate FST-to-RNN composition algorithm that we introduce here (in Appendix A.4). For example, Sproat and Jaitly (2016) used an FST to restrict the search space when decoding from an RNN, and Rastogi et al. (2016) incorporated RNN information into an FST.
Data
Missionary & modern orthography
The primary difference between the missionary and modern Hawaiian orthographies is that the missionary orthography does not encode long vowels or the glottal stop (see Appendix A.1). For example, the following Hawaiian phrases were recorded by a 19th-century German traveller in the missionary orthography: Ua oia au, E ue ae oe ia Ii, E ao ae oe ia ia (Chamisso, 1837, p. 7). In the modern orthography these become: Ua 'ō 'ia au 'I am speared', E uē a'e 'oe iā 'Ī'ī 'You must weep for 'Ī'ī (a person)', and E a'o a'e 'oe iā ia 'You teach him' (Elbert and Pukui, 1979, p. 3).
We can convert text in the modern Hawaiian orthography backward chronologically to an approximate missionary orthography by mapping each glottal stop ' to the empty string ε, and each long vowel (ā, ē, ī, ō, ū) to its corresponding short vowel (a, e, i, o, u). As a first approximation, we may treat mappings from the modern to the missionary orthography as unambiguously many-to-one; thus there is information loss. We will return to secondary differences between the orthographies in Section 6. To illustrate, the following four words in the modern orthography all map to the same missionary string aa: a'a (root), 'a'a (brave), 'a'ā (crumbly lava rock), and 'ā'ā (stutter).
The forward mapping from missionary to modern orthography is one-to-many. Thus the missionary string aa could map to a'a, 'a'a, 'a'ā, or 'ā'ā. The transliteration problem we address here seeks to discover how we can use context to recover the information not present in the missionary orthography that modern Hawaiian orthography retains.
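To make the backward mapping concrete, the following is a minimal Python sketch of the modern-to-missionary reduction just described; the function name and the ASCII apostrophe standing in for the 'okina are our own illustrative choices, not the authors' code:

```python
# Minimal sketch of the backward (modern -> missionary) mapping:
# shorten long vowels and delete glottal stops. The 'okina is written
# here as an ASCII apostrophe, as in the paper's examples.
LONG_TO_SHORT = str.maketrans("āēīōūĀĒĪŌŪ", "aeiouAEIOU")

def to_missionary(modern_text: str) -> str:
    # Map each long vowel to its short counterpart ...
    reduced = modern_text.translate(LONG_TO_SHORT)
    # ... and delete every glottal stop, losing information.
    return reduced.replace("'", "")

# All four modern words collapse onto the missionary string "aa":
assert {to_missionary(w) for w in ["a'a", "'a'a", "'a'ā", "'ā'ā"]} == {"aa"}
```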
Data sources
We draw on three sources for modern Hawaiian text: the main text of Hi'iakaikapoliopele (Ho'oulumāhiehie, 2007), 160 short texts from Ulukau: The Hawaiian Electronic Library, and the full Hawaiian Wikipedia (see Figure 1). 1 For evaluation, we simulate a missionary-era version of the modern texts using the backward mapping described above. In addition, we evaluated our models on a couple of 19th century newspaper samples for which we have parallel missionary-era and modern text. Both simulated and real parallel corpora will be described in Section 5.
Models
We can frame the task of transliterating from missionary to modern Hawaiian orthography as a sequence transduction problem. Many deep learning approaches (e.g. Sutskever et al., 2014; Graves, 2012) are not easily applicable to this task, since we do not have a sufficiently large dataset of parallel texts. Instead, we focus on approaches that mix hand-designed finite state transducers with trained language models, including deep learning approaches like RNNLMs (Mikolov et al., 2010).
Finite state transducers
Our initial approach represents the mapping from missionary to modern orthography using a composition of (weighted) FSTs. For a thorough review of FSTs, see Mohri (1997).
First, we construct a finite state acceptor, I, from the input text. Here we construct a trivial chain-shaped acceptor that accepts only the input text. Each symbol in the input text is represented by a state which emits this symbol on a single transition that moves to the next state. The transition emitting the final symbol in the string leads to the sole accepting state.
Second, we construct a missionary-to-modern orthography conversion FST, which we call C, which models potential orthography changes that can occur when transliterating from the missionary to the modern Hawaiian orthography. For example, two non-deterministic transitions introduce an optional long-vowel map (a : ā) and (a : a). Another transition inserts glottal stops: (ε : '). By capturing the orthographic changes we know to occur, the composition I • C produces a large set of candidates to be narrowed using the language model.
Third, we use the modern Hawaiian text from Section 3.2 to construct and evaluate a number of character-level n-gram language models, with various combinations of order and smoothing, using Katz backoff and Kneser-Ney (KN) smoothing (Katz, 1987; Kneser and Ney, 1995); see Appendix A.5 for details. N-gram language models can be expressed as weighted FSTs. We denote the n-gram or weighted FST language model as G. Character-level models are used because we wanted to generalize to out-of-vocabulary words, which we expected to occur frequently in a small corpus like the one we have for Hawaiian.
Finally, we use this model to infer modern orthography given a piece of text in missionary orthography as input: we compose the FSTs to form the search graph FST, S = I • C • G. The minimum cost path through S gives the predicted modern orthography. Of these n-gram-based approaches, we found the Kneser-Ney-based models to perform best; these approaches are called FST-C-NGRAM-KN and FST-C_wb-NGRAM-KN.
We circumvent the lack of a large, non-simulated parallel corpus by training the language model exclusively on text in the modern Hawaiian orthography. In turn, the orthographic transliteration FST C produces candidates which are disambiguated by the language model. The result is finally evaluated against the ground-truth modern text. Although the orthographic transliteration model is an approximation, and thus not exhaustive, it embodies an explicit and interpretable representation that can be easily extended independently of the rest of the model. To illustrate how the approach can be extended, we constructed a variant C_wb (where wb stands for word boundary). C_wb optionally inserts a space after each vowel using an additional arc that maps (ε : space), as diagrammed in Appendix A.2.
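The paper does not name the FST toolkit used; as an illustration only, here is a minimal sketch of the S = I • C • G pipeline using the open-source Pynini library, with a toy conversion transducer C restricted to the two arc types described above:

```python
import pynini

# Toy symbol inventory (our assumption): the letters in the paper's
# examples plus space; byte-mode FSTs throughout, including G.
PLAIN = "aeiouhklmnpw "

# C: per-symbol edits, closed under repetition. Copy any input symbol,
# optionally lengthen a vowel (a : ā), or insert a glottal stop (ε : ').
copy = pynini.union(*PLAIN)
lengthen = pynini.union(*(pynini.cross(s, l)
                          for s, l in zip("aeiou", "āēīōū")))
insert_okina = pynini.cross("", "'")
C = (copy | lengthen | insert_okina).closure().optimize()

def modernize(missionary_text: str, G: pynini.Fst) -> str:
    """Predict modern orthography: minimum-cost path through S = I.C.G,
    where G is a weighted character-level LM over modern text."""
    I = pynini.accep(missionary_text)  # trivial chain-shaped acceptor
    S = I @ C @ G                      # FST composition
    return pynini.shortestpath(S).string()
```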
This variant is able to model some changes in Hawaiian's word-boundary conventions (Wilson, 1976), such as alaila becoming a laila, which demarcates the preposition a 'until' and the noun laila 'then'. We employ this variant to predict modern equivalents from 19th century newspaper samples in Section 5. Pseudocode summarizing this method is shown in Appendix A.3. Example predictions can be found in Appendix A.6.
FSTs with LSTM language models
As an alternative approach, we combined the FST C in the previous section with an RNNLM (Mikolov et al., 2010). RNNLMs often generalize better than n-gram models. An RNN is a neural network that models temporal or sequential data by iterating a function mapping a state and input to a new state and output. These can be stacked to form a deep RNN. For language modelling, each step of the final RNN layer models a word or character sequence via $p(w_1, \ldots, w_n) = \prod_{i=1}^{n} p(w_i \mid w_{1:i-1})$ and can be trained by maximum likelihood. Recent language modeling work has typically used the long short-term memory (LSTM) unit (Hochreiter and Schmidhuber, 1997) for its favorable gradient propagation properties. All RNNs in this paper are LSTMs.
Our goal is to replace the n-gram language model in the end-to-end FST approach with an RNNLM. While the minimum cost path through an FST can be computed exactly, as done in the previous section, it is not straightforward to compose the relation defined by an FST with an arbitrary one like that defined by an RNNLM. A minimum cost path through the composition of the FST and the RNNLM can be defined as a path (i.e. label sequence) that minimizes the sum of the FST path cost and the RNNLM cost. We can approximately find a minimum cost path of the composition of the two models by a breadth-first search over the FST graph, or using a beam search, as follows.
At any particular iteration, consider a single beam element. The beam element holds the current FST and RNN states, and the path taken through the FST so far. We follow each possible arc from the current FST state, each producing a new child beam element, and feed the output symbol into the RNN (unless it is ε). There may be duplicate beam elements due to the nondeterminism of the FST; in this case, the lower cost edge wins. We sort by the sum of the FST and RNN costs, keep the lowest-cost K, and then proceed to the next iteration. If a beam element is on an accepting state of the FST, it is kept as-is between iterations. Detailed pseudocode is provided in Appendix A.4; a sketch of the procedure is given below.
In the following we will refer to the hybrid models as FST-RNNLM, or as FST-RNNLM-C and FST-RNNLM-C_wb if we want to distinguish which FST we used. Similarly, the FST-only models will be referred to as FST-C and FST-C_wb, with suffixes denoting what kind of n-gram and smoothing were used. For example, FST-C-7GRAM-KN denotes an FST-only model with a 7-gram language model and Kneser-Ney smoothing. Details of the language models trained can be found in Appendix A.5.
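The authors' detailed pseudocode lives in Appendix A.4 (not reproduced here); the following is our own reconstruction of the beam search just described, where the FST and RNNLM interfaces are hypothetical stand-ins rather than an actual library API:

```python
import heapq
from dataclasses import dataclass, field
from typing import Any

@dataclass(order=True)
class Beam:
    cost: float                              # FST path cost + RNNLM cost
    fst_state: Any = field(compare=False)
    rnn_state: Any = field(compare=False)
    path: str = field(default="", compare=False)

def beam_search(fst, rnnlm, K: int = 32, max_steps: int = 500) -> str:
    """Approximate min-cost path through the composition of an FST
    (e.g. I . C) with an RNN language model.
    Assumed (hypothetical) interfaces:
      fst.start(); fst.is_final(state);
      fst.arcs(state) -> iterable of (olabel, weight, next_state),
        with olabel == "" meaning epsilon;
      rnnlm.start(); rnnlm.step(state, ch) -> (neg_log_prob, new_state).
    """
    beams = [Beam(0.0, fst.start(), rnnlm.start())]
    for _ in range(max_steps):
        if all(fst.is_final(b.fst_state) for b in beams):
            break
        children = []
        for b in beams:
            if fst.is_final(b.fst_state):
                children.append(b)           # accepting states carry over
                continue
            for olabel, weight, nxt in fst.arcs(b.fst_state):
                if olabel == "":             # epsilon: RNN state unchanged
                    children.append(Beam(b.cost + weight, nxt,
                                         b.rnn_state, b.path))
                else:
                    nlp, rnn = rnnlm.step(b.rnn_state, olabel)
                    children.append(Beam(b.cost + weight + nlp, nxt,
                                         rnn, b.path + olabel))
        if not children:
            break
        # Duplicates (same FST and RNN state) could be merged here,
        # keeping the lower-cost one; omitted for brevity.
        beams = heapq.nsmallest(K, children)  # keep the K lowest-cost
    finals = [b for b in beams if fst.is_final(b.fst_state)] or beams
    return min(finals).path
```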
Results
Evaluation. Since we were unable to find a sufficiently large corpus of parallel texts in the missionary and modern Hawaiian orthographies, we instead used a corpus of modern Hawaiian texts (ground-truth), as summarized in Section 3.2 and Figure 1. Note that training the n-gram and RNN language models required only this modern corpus. To evaluate the accuracy of our approaches, we derived a synthetic parallel corpus from these modern Hawaiian texts. We also used a small but real parallel corpus, based on two 19th century newspaper texts and their hand-edited modern equivalents.
Simulated parallel corpus. To produce a simulated parallel corpus (input-missionary), we systematically reduced the orthography in the modern texts using the backward mapping described in Section 3.1. We then applied the two approaches described in Section 4, with the aim of recovering the information lost. We evaluated the predicted modern text (predictions) by computing
$$\mathrm{CERR} = \frac{d(\text{prediction}, \text{ground-truth})}{d(\text{input-missionary}, \text{ground-truth})},$$
where $d$ denotes character-level edit distance. This is a modification of the character error rate, normalized by the distance between the input and the target rather than by the length of the target. We note that CERR may be high even when the predictions are very accurate, as d(input-missionary, ground-truth) is small when the text is similar in both orthographies.
Table 1 reports the results of the approaches we described in Section 4. Out of the Kneser-Ney n-gram models, we found the FST-C-9GRAM-KN model and the version modelling word boundaries (FST-C_wb-9GRAM-KN) to perform best on the synthetic parallel corpus and the newspapers, respectively. C_wb was not applied to the synthetic parallel corpus, as we did not model word splitting there. The hybrid models (FST-RNNLM) outperformed all FST-only approaches.
Real parallel corpus (newspaper texts). Not content to evaluate the model on simulated missionary orthography, we also evaluated it on two newspaper texts, using selections originally published in 1867 and 1894 for which we had 19th century and manually-edited modern equivalents. The newspaper selections discuss Kahahana, one of the last kings of O'ahu (Kamakau and Perreira, 2002), and Uluhaimalama, a garden party and secret political gathering held after the deposition of Hawai'i's last queen (Pukui et al., 2006). Unlike the synthetic missionary corpus evaluation, where we did not model word splitting, we found that replacing C with C_wb on the newspaper texts significantly improved the output, especially for the FST-RNNLM model. Thus, we found the word-splitting hybrid model (FST-RNNLM-C_wb) to be the best performing model overall (Table 1).
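For reference, the CERR metric defined above can be implemented directly; unit edit costs are our assumption, as the paper only specifies "character-level edit distance":

```python
def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance with unit costs."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def cerr(prediction: str, missionary_input: str, ground_truth: str) -> float:
    """CERR: the prediction's distance to the ground truth, normalized by
    how far the missionary input already is from the ground truth."""
    return (edit_distance(prediction, ground_truth)
            / edit_distance(missionary_input, ground_truth))
```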
Conclusions and future work
With this paper we introduced a new transliteration problem to the field, that of mapping between old and new Hawaiian orthographies, where the modern Hawaiian orthography represents linguistic information that is missing from older missionary-era texts. One difficulty of this problem is that there is a limited amount of Hawaiian data, making data-hungry solutions like end-to-end deep learning unlikely to work. To solve the transliteration problem, we therefore proposed two models: the first was implemented end-to-end using weighted FSTs; the second was a hybrid deep learning approach that combined an FST and an RNNLM. Both models gave promising results, but the hybrid approach, which allowed us to use a more powerful recurrent neural network-based language model despite our dataset's small size, performed best.
Factoring a problem like ours into one part that can be modelled exactly using expert domain knowledge and into another part that can be learned directly from data using deep learning is not novel; however, it is a promising research direction for data-efficient modelling. To our knowledge, this paper is the first to describe a procedure to compose an FST with an RNN by approximately performing beam search over the FST. While the role of the RNNLM part of the hybrid approach may be obvious, the FST component plays an important role too. For example, the hand-designed FST component can be replaced without needing to retrain the RNNLM. We tried to showcase this modularity by constructing two FSTs, which we referred to as C and C_wb, where only the latter allowed the insertion of spaces. Future work could extend the FST to model orthographic changes suggested by an error analysis of the current model's predictions (see Appendix A.6). These errors motivate new mappings for consonant substitutions like (r : l) and (s : k) observed in loanword adaptations (e.g. rose ⇒ loke). The error analysis also motivates mappings to delete spaces (space : ε) and to handle contractions, like na'lii ⇒ nā ali'i. We could further incorporate linguistic knowledge of Hawaiian into the FST, which tells us, for example, that a consonant is typically followed by a vowel (Parker Jones, 2010). Additional improvements to the hybrid model might be obtained by increasing the amount of modern Hawaiian text used to train the RNNLM. One way to do this would be to accelerate the rate at which missionary-era Hawaiian texts are modernized. To this end, we hope that the present models will be used within the Hawaiian community to semi-automate, and thereby accelerate, the modernization of old Hawaiian texts.
Figure 1: Modern data sources and their sizes.
Figure 2: An example of (missionary input, predicted modern text, ground-truth), from each newspaper. Note the correctly split word in the second example. Incorrect characters, which are quite rare, are shown as red and underlined. More sample predictions can be found in Appendix A.6.
Input:        Ua lawe ola ia o Keawehano imua o Kahekili, a ua hai aku o Kapohu...
Prediction:   Ua lawe ola 'ia 'o Keawehano i mua o Kahekili, a ua ha'i aku 'o Kapohu...
Ground-truth: Ua lawe ola 'ia 'o Keawehano i mua o Kahekili, a ua ha'i aku 'o Kapohū...
Table 1: Performance (%CERR). Slash-separated pairs denote FSTs incapable/capable of inserting word boundaries, respectively; see Section 4. The -KN suffix denotes Kneser-Ney smoothing. The data from Section 3.2 is used for evaluating the modern-orthography language model perplexity, and "Corpus" evaluates test-set transliteration performance from the synthetic missionary text back to the original modern text.
                          LM perplexity     Transliteration performance (%CERR)
Transliteration model     Valid.   Test     Corpus   Newspaper 1      Newspaper 2
FST-(C/C_wb)-7GRAM-KN     3.07     3.13     27.3%    50.1% / 38.7%    52.0% / 47.5%
FST-(C/C_wb)-9GRAM-KN     2.95     3.02     26.6%    50.7% / 39.3%    52.5% / 47.2%
FST-(C/C_wb)-11GRAM-KN    2.94     3.02     27.8%    53.9% / 41.3%    54.1% / 48.7%
FST-RNNLM-(C/C_wb)        2.65     2.69     16.3%    47.2% / 34.3%    49.8% / 41.2%
1 Ulukau: The Hawaiian Electronic Library: http://ulukau.org/, Hawaiian Wikipedia: https://haw.wikipedia.org/. Both accessed 19 May 2018.
Acknowledgments
We are grateful to M. Puakea Nogelmeier for providing an electronic copy of Hi'iakaikapoliopele (Ho'oulumāhiehie, 2007).
References
Richard A. Benton. 1981. The flight of the Amokura: Oceanic languages and formal education in the South Pacific. New Zealand Council for Educational Research, Wellington.
Adelbert von Chamisso. 1837. Über die Hawaiische Sprache, Vorgelegt der Königlichen Academie der Wissenschaften zu Berlin am 12. Januar, 1837. Weidmann, Leipzig.
Samuel H. Elbert and Mary Kawena Pukui. 1979. Hawaiian Grammar. University of Hawai'i Press, Honolulu.
Alex Graves. 2012. Sequence transduction with recurrent neural networks. arXiv preprint arXiv:1211.3711.
Jan Hajič, Jan Hric, and Vladislav Kuboň. 2000. Machine translation of very close languages. In ANLC '00: Proceedings of the Sixth Conference on Applied Natural Language Processing, pages 7-12.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
Ho'oulumāhiehie. 2007. Ka Mo'olelo o Hi'iakaikapoliopele. Awaiaulu Press, Honolulu. Edited by M. Puakea Nogelmeier.
Samuel Manaiakalani Kamakau and Hiapo Perreira. 2002. Ka mo'olelo o Kahahana, māhele 1. Ka Ho'oilina, 1(1):102-121.
Slava Katz. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech, and Signal Processing, 35(3):400-401.
Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In 1995 International Conference on Acoustics, Speech, and Signal Processing (ICASSP-95), volume 1, pages 181-184. IEEE.
David Kolovratník, Natalia Klyueva, and Ondřej Bojar. 2010. Statistical machine translation between related and unrelated languages. In Proceedings of the Conference on Theory and Practice of Information Technologies (ITAT-09), pages 31-36.
Tomas Mikolov, Martin Karafiát, Lukás Burget, Jan Černocký, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proceedings of Interspeech, pages 1045-1048.
Mehryar Mohri. 1997. Finite-state transducers in language and speech processing. Computational Linguistics, 23(2):269-311.
M. Puakea Nogelmeier. 2010. Mai Pa'a i ka Leo: Historical Voice in Hawaiian Primary Materials: Looking Forward and Listening Back. Bishop Museum Press, Honolulu.
Robert Östling and Jörg Tiedemann. 2017. Neural machine translation for low-resource languages. Computing Research Repository, arXiv:1708.05729. Version 1.
Oiwi Parker Jones. 2010. A computational phonology and morphology of Hawaiian. Ph.D. thesis, University of Oxford.
Oiwi Parker Jones. 2018. Illustrations of the IPA: Hawaiian. Journal of the International Phonetic Association, 48:103-115.
Mary Kawena Pukui and Samuel H. Elbert. 1957. Hawaiian-English Dictionary. University of Hawai'i Press, Honolulu.
Mary Kawena Pukui, Holo Ho'opai, Oiwi Parker Jones, and Keao NeSmith. 2006. No ka mahi'ai 'ana, māhele 6. Ka Ho'oilina, 5(1):2-23.
Pushpendre Rastogi, Ryan Cotterell, and Jason Eisner. 2016. Weighted finite-state transductions with neural context. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 623-633.
Kevin Scannell. 2014. Statistical models for text normalization and machine translation. In Proceedings of the First Celtic Language Technology Workshop, pages 33-40.
Albert J. Schütz. 1994. The Voices of Eden: A History of Hawaiian Language Studies. University of Hawai'i Press, Honolulu.
Richard Sproat and Navdeep Jaitly. 2016. RNN approaches to text normalization: A challenge. Computing Research Repository, arXiv:1611.00068. Version 2.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112.
Sam L. No'eau Warner. 2001. The movement to revitalize Hawaiian language and culture. In Leanne Hinton and Kenneth Hale, editors, The Green Book of Language Revitalization in Practice, pages -144. Academic Press, San Diego, CA.
William H. Wilson. 1976. Standardized Hawaiian orthography. Manuscript, University of Hawai'i.
William H. Wilson and Kauanoe Kamanā. 2001. "Mai loko mai o ka 'i'ini: Proceeding from a dream": The 'Aha Pūnana Leo connection in Hawaiian language revitalization. In Leanne Hinton and Kenneth Hale, editors, The Green Book of Language Revitalization in Practice, pages 147-176. Academic Press, San Diego, CA.
13,861,243
Principled Hidden Tagset Design for Tiered Tagging of Hungarian
For highly inflectional languages, the number of morpho-syntactic descriptions (MSD) required to descriptionally cover the content of a word-form lexicon tends to rise quite rapidly, approaching a thousand or even more distinct codes. For the purpose of automatic disambiguation of arbitrary written texts, using such large tagsets would raise many problems, ranging from implementation issues of a tagger working with such a large tagset to the more theory-based difficulty of sparseness of training data. Tiered tagging is one way to alleviate this problem by reformulating it in the following way: starting from a large set of MSDs, design a reduced tagset, the Ctag-set, manageable for current tagging technology. We describe the details of the reduced tagset design for Hungarian, where the MSD-set cardinality is several thousand. This means that designing a manageable Ctag-set calls for a severe reduction in the number of the MSD features, a process that requires careful evaluation of the features.
[]
Principled Hidden Tagset Design for Tiered Tagging of Hungarian
Dan Tufiş (tufis@valhalla.racai.ro, Romanian Academy (RACAI), 13 '13 Septembrie', 74311 Bucharest 5, Romania); Péter Dienes (dienes@nytud.hu), Csaba Oravecz (oravecz@nytud.hu), and Tamás Váradi (varadi@nytud.hu) (Research Institute for Linguistics, Hungarian Academy of Sciences, Budapest).
Introduction
The combinatorial possibilities of inflection and derivation in Hungarian morphology (for an estimate see (Tihanyi, 1996)) pose a challenge for corpus annotation, in that it is difficult to establish a set of morpho-syntactic descriptions that does justice to the rich morpho-syntactic information encoded within the words and at the same time remains computationally tractable. Tiered tagging (Tufiş, 1998) is one way to alleviate this problem by reformulating it in the following way: starting from a large set of MSDs, design a reduced tagset, the Ctag-set, manageable for current tagging technology. The Ctag-set is used as a hidden tagset for the proper tagging of a text. This text, tagged in terms of the Ctag-set, is subject to a procedure aiming at recovering all (or most of) the information left out from the Ctag-set with respect to the MSD-set. In other words, each Ctag assigned to an item in the tagged text is replaced with an appropriate and more informative descriptor, namely an MSD.
In Section 2 we give an overview of the general principles one can follow in the design process. Section 3 presents the data analysis, mostly along the lines described in (Váradi and Oravecz, 1999), but with much larger data sets and further investigations than those presented there. Section 4 describes the process of reducing the MSD set into a Ctag set of manageable size. In Section 5 we show some preliminary results on tagging accuracy and error analysis, comparing the performance of the tagging process with a verbose tagset and that of tiered tagging with a more constrained tagset. Conclusions and suggestions for further work follow in Section 6.
(* The author was supported by the Research Support Scheme of the Open Society Support Foundation, grant No. 320/1998.)
General requirements for tiered tagging
The design process of a reduced tagset has to consider two fundamental requirements: to identify and leave out the features/values in the MSDs which do not provide relevant clues for contextual disambiguation, and to make it possible to recover, as accurately and as quickly as possible, the information eliminated in the previous phase. Fortunately, these two objectives, although not very simple to reach, are feasible and rewarding. The process is a trial-and-error one and relies both on human introspection and on evidence provided by the data analysis.
One possible approach would be to use an information-lossless algorithm to convert the MSD-set into a Ctag-set. Such an algorithm might reduce the size of the tagset by 10-20%, which is too little for a large initial tagset. However, modifying such an algorithm to allow for limited ambiguity (that is, losing a limited amount of information) could result in a drastic reduction of the Ctag-set, down to a cardinality which is within the restrictions imposed by the available training data and computing power. The remaining problem is deciding what kind of ambiguities to accept in the output of such a generalization algorithm, so that a subsequent process will be able to resolve them.
In our approach, the reduced tagset is designed as one subsuming the MSD-set, and as such, once a Ctag has been assigned to a lexical item in the tagged text, the recovery process has to identify the relevant MSD out of the set of MSDs that are subsumed by the Ctag in question. The recovery process could be lexicon-driven (the lexicon would be encoded in terms of the large MSD-set) and can be conceived of as the intersection between the set of MSDs subsumed by a Ctag assigned to a word form w and the set of MSDs for w as provided by the lexicon (Tufiş, 2000); a minimal sketch of this lookup is given below. This model can be compiled as a database, so that the recovery process could be a simple look-up in this database.
Actually, for Hungarian the construction of such a system is a bit cumbersome. The huge number of possible word forms in Hungarian rules out the possibility of lexical lookup from precompiled tables for unconstrained corpora and makes the use of a morphological analyzer necessary, at least in the preparation phase of the corpus for the tagging process. The output of the morphological analysis is then converted into the MSD encoding, and, in principle, a specific lexicon could be constructed containing the lexical items with their corresponding MSDs for the corpus to be tagged. This lexicon can then be used in the recovery process for the lexical items in the tagged corpus, but obviously will not suffice for other corpora. Thus, one can either construct specific lexicons for each chunk of corpora to be tagged and use them in the MSD lookup, or resort to the morphological analyzer in the recovery process as well, to provide the set of possible MSDs for the lexical items "on-line". This whole issue basically boils down to an efficiency problem and needs further investigation (besides a fast morphological analyzer). The items that this recovery process leaves ambiguous are more often than not the difficult cases for statistical disambiguation methods. Therefore, the tiered-tagging approach might also use a rule-based disambiguation phase for such cases.
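To make the recovery step concrete, here is a minimal Python sketch of the lexicon-driven intersection described above; the Ctag names, MSD codes and the two-word toy lexicon are our own illustrative assumptions, not the paper's actual inventories:

```python
# Hypothetical toy data: which MSDs each Ctag subsumes, and which MSDs
# the lexicon lists for each word form (Multext-East-style codes,
# chosen only for illustration).
CTAG_TO_MSDS = {
    "N-SG": {"Ncms-n", "Ncfs-n"},      # a Ctag subsuming several MSDs
    "V-3S": {"Vmip3s", "Vmis3s"},
}
LEXICON = {
    "ház": {"Ncms-n"},                 # unambiguous after intersection
    "vár": {"Ncms-n", "Vmip3s"},       # noun 'castle' vs. verb 'waits'
}

def recover_msds(word: str, ctag: str) -> set:
    """MSDs consistent with both the assigned Ctag and the lexicon.
    A singleton means full recovery; a larger set means residual
    ambiguity left for a later (e.g. rule-based) phase."""
    return CTAG_TO_MSDS.get(ctag, set()) & LEXICON.get(word, set())

print(recover_msds("vár", "N-SG"))     # {'Ncms-n'}: noun reading recovered
```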
Data analysis
The morphological analysis and morpho-syntactic descriptions (MSD)
The language resource of our analysis consisted of the whole current stock of the Hungarian National Corpus (approximating 80m words), compiled into a word frequency list as input to the morphological analysis. Table 1 presents some basic statistics on the range of word form variation found in the corpus.
Table 1: The distribution of word forms.
Entries      Word forms   Lemmas
74,063,211   1,728,771    429,612 (see footnote 1)
(Footnote 1: The number of lemmas was calculated on the assumption that alternatives in ambiguous cases were evenly distributed. This is obviously false, but the correct figure could only be arrived at after the corpus has been completely disambiguated.)
The word form list was processed with HUMOR, the morphological analyzer developed originally for Hungarian (Prószéky and Tihanyi, 1996). The main statistical figures of the results are displayed in Table 2. Provided that the morphological analysis is correct, the remaining ambiguity amounts to 27.7% of the tokens and 13.2% of the word forms, indicating that ambiguous items tend to appear in the upper regions of the word frequency list.
The output notation of the morphological analyzer was not suitable to be applied directly as an MSD set for two reasons: a) it was not designed to return a POS tag and a lemma for each analysis of a given word form, and b) it returns several analyses at varying levels of specificity.
For illustration purposes, an example is repeated here from (Váradi and Oravecz, 1999): Figure 1 shows the analysis of lehetőségekben 'within possibilities'. As regards point a), note that the leftmost item in each line is tagged with a POS label, but this POS may change as derivational suffixes are added to the stem. In the first line we find that the noun stem lehetőség features in the lexicon as a unit, and in this particular case the two inflectional suffixes PL and INE obviously did not modify the POS status of the resulting word form. However, in the following line the derivational suffix COL does turn the adjective stem into a noun, but this fact remains implicit in the analysis. Point b) is illustrated by lines 2-4, which unfold a derivational tree at successively finer levels. The multitude of analyses does not in itself create any ambiguity, as in this particular example they all amount to the same reading as a noun. They are mentioned here merely to illustrate the need to interpret the analyzer's output to make the data tractable.
To construct an initial MSD notation, we eliminated all derivational details about the internal structure of the rightmost POS category (see footnote 2). Only the lemma, the POS category and the inflectional structure are preserved. So the above example is transformed into the following form:
lehetőségekben lehetőség n[N][PL][INE]
This format represents roughly the same information as, and can in principle be mapped into, the EAGLES-compliant encoding scheme developed in Multext-East (Erjavec and Monachini, 1997). However, the presence or lack of some of the distinctions in one representation with respect to the other does not make a fully automatic mapping from one format to the other possible, so for the time being the above format is used as an internal MSD notation as output from the morphological analysis.
Still, to establish the possibility of referring to positional attributes and their values in MSD representations, which facilitates the identification of reducible features for the corpus tagset, the MSD scheme, as an initial step in tagset creation, is converted into an attribute/value single-string representation. The intent at this stage is merely to preserve, in a concise and consistent notation, all the information provided by the MSD that is relevant for tagging. Table 3 displays the features encoded in this initial Ctag scheme (F set) for the major POS categories. One of the major aspects in which the current scheme differs from the one used in the Multext-East project lies in the inclusion of the feature "stem category". This is devised to preserve the derivational history of the lemma as well as to indicate the syntactic behaviour of the word as a head category. This scheme allows one to treat, for instance, various kinds of pronouns according to the major POS category they may fulfill, so that a nominal pronoun like rajta 'on it' is encoded as an N with stem category P. The example above is accordingly recoded as
lehetőségekben n[NP3N2]
(i.e. a third-person plural noun of noun stem class in case '2' (= inessive)).
Analysis of ambiguity at the MSD level
A ranked ordering of the cumulative frequency of the ambiguous word forms reveals a fairly even pattern of the coverage of top ambiguity classes in terms of tokens: it takes a little over the hundred most frequent ambiguous cases to cover half of the total ambiguity. As an overall measure of the distribution of ambiguity over tokens, Table 4 displays the corresponding figures for our corpus of the indices proposed in (Tufiş, 1998).
Table 4: Different measures of text ambiguity at the MSD level (TW = number of tags / number of word tokens; KW = number of tags / number of word tokens, excluding unknown cases; AW = number of tags assigned to ambiguous cases / number of ambiguous tokens):
MSD: TW = 1.334, KW = 1.349, AW = 2.230.
The comparison of these values to those of the different tagset schemes in Section 4.2 will provide some insight into the recoverability of information from, and the coverage of, tagsets over ambiguity classes.
The design of Ctag sets
The reduction of the initial tagset
As the cardinality of the full initial tagset is too high to be handled by current tagging methods, especially by statistical taggers, different levels of granularity in the tagset have been explored. This section addresses the problem of possible reductions of the initial tagset. There are three important principles we have to consider during this process: (i) when merging MSDs into the same Ctag, we have to retain the recoverability of the original MSD of each word; (ii) we should not lose any information giving contextual clues for the disambiguation of other words; (iii) ambiguity classes should be merged when contextual information is not enough to disambiguate. Merging MSD tags in the light of these principles is an empirical issue: checking the fulfillment of the principles involves either the investigation of the ambiguity classes occurring in the corpus (principle (i)) or the comparison of tagging results obtained by using the merged/non-merged tagsets (principles (ii) and (iii)).
As an initial attempt to design the reduced corpus tagset, we made use of the algorithm proposed by (Tufiş, 2000) to remove features from the full tags that can be recovered from the intersection of the set of MSDs for a lexical item with the set of MSDs the proposed Ctag for this item subsumes. Basically, the algorithm removes an attribute from a tag if this attribute is recoverable, i.e. the deletion of the attribute does not merge two tags in an ambiguity class; a minimal sketch of this check follows.
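The following sketch is our own illustration of that recoverability check, not the authors' implementation; it assumes tags are fixed-length attribute strings and that ambiguity classes are given as sets of such tags:

```python
from typing import Iterable, Set

def attribute_recoverable(ambiguity_classes: Iterable[Set[str]],
                          position: int) -> bool:
    """An attribute (a character position in the tag string) may be
    deleted only if doing so never merges two distinct tags within
    the same ambiguity class."""
    for amb_class in ambiguity_classes:
        reduced = {tag[:position] + tag[position + 1:] for tag in amb_class}
        if len(reduced) < len(amb_class):
            return False          # two tags collapsed: not recoverable
    return True

# Toy example: case (position 4) distinguishes NS3NN vs. NS3NA within
# one ambiguity class, so deleting it would merge the two tags.
classes = [{"NS3NN", "NS3NA"}]
print(attribute_recoverable(classes, 4))   # False
print(attribute_recoverable(classes, 1))   # True (number identical here)
```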
Applied to a 74 million word lexicon, the algorithm yielded the results in Table 5. Although the deletion of the proposed features results in a recoverable tagset, the resulting reduction in the size of the tagset is significant but not satisfactory. The first three items in Table 5 do not involve any reduction in the tagset, since they are the same in all tags of the given category (S, _ and _, respectively). Deleting the verbal root in adverbs brings about a minimal decrease (1). In the final two rows, the deletion of the number of the verb or the person of a noun is very problematic, since we lose an important clue for contextual disambiguation (in Hungarian the verb and the subject must agree in number and person). Thus, the only relevant feature that can be deleted while retaining recoverability is the number of the possessor of a noun. After such a deletion, the cardinality of the tagset is still too high to be convenient for current tagging methods: 1265 different tags would remain.
These results might be attributed to two main reasons. First, each feature within a tag seems to be relevant; that is, the tagset is very compact. However, this insight is not supported by the fact that within the 74 million word corpus only 1105 tags occur out of 2148 theoretical possibilities (see Table 6 for details on tag statistics). The second reason for such unsatisfactory results lies within the algorithm itself. This algorithm can only remove certain attributes from the full description, instead of applying a merger of some of the distinctions in attribute values. This difference is crucial for the interpretation of the results. Consider, for example, case marking on nouns. Hungarian has 21 cases, which are represented by the fifth attribute of the noun tag. The algorithm does not remove case marking, since ambiguity due to case (e.g. NS3NN vs. NS3NA) actually occurs (e.g. párt 'party+NOM' or 'couple+ACC'). However, there is no ambiguity class where, all other attributes being equal, the word can be analyzed as being either in superessive or in dative case. This means that the dative and superessive cases can be merged. This option is not available to the algorithm proposed by (Tufiş, 2000). Hence, we have also had recourse to non-algorithmic methods, relying on linguistic intuition. Starting from the F tagset, we made experiments with three reduced tagsets, to be described presently; a toy sketch of this kind of value merger follows below.
The medium (M) tagset has been derived by merging the non-nominative and non-accusative cases of nouns (represented by the letter O, standing for "other"). Hence, this tagset distinguishes 3 cases only: N, A and O, which yields a tagset cardinality of 384 tags, out of which 309 actually occur in the corpus. Further reductions were made in order to arrive at the so-called optimal (O) tagset. The merged attributes are the following:
Possessor on Nouns: The distinction between S1, S2, P1, P2 and P3 possessors is recoverable. Furthermore, these distinctions do not bear any information giving a clue to the disambiguation of other words.
Objects on Verbs: The full tagset marks three kinds of objects in the case of verbal tags: I (no object or indefinite object), D (definite object) and 2 (incorporated second-person object). The classes I and 2, however, can be merged, since this distinction is recoverable and they behave in the same syntactic way.
These reductions result in a tagset of 240 tags. Finally, we blurred stem-category distinctions in nouns and adjectives (i.e. the fourth attribute was removed), which gave a reduced (R) tagset with 119 tags.
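To contrast value merging with attribute deletion, here is a toy Python sketch; the tag layout and the merge table are our own simplified assumptions, modeled loosely on the M tagset description above:

```python
# Toy merger: collapse attribute values instead of deleting attributes.
# We assume noun tags of the form N<Num><Pers><Stem><Case>, with the
# case attribute at position 4, as in NS3NN / NS3NA.
CASE_MERGE_M = {"N": "N", "A": "A"}    # every other case collapses to "O"

def to_medium_tagset(tag: str) -> str:
    """Map a full (F) noun tag to the medium (M) tagset by keeping
    only the N/A/O case distinction."""
    if tag.startswith("N") and len(tag) >= 5:
        case = tag[4]
        return tag[:4] + CASE_MERGE_M.get(case, "O")
    return tag

print(to_medium_tagset("NS3NN"))   # 'NS3NN' (nominative kept)
print(to_medium_tagset("NS3N2"))   # 'NS3NO' ('2' = inessive merged to O)
```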
However, in this step we strongly violated the recoverability principle. The rationale behind this move was principle (iii): it was thought that this information could not be extracted from the context, and that the disambiguation of other tokens could not make use of this information either. The findings from the tagging tests in Section 5, however, seem to question this assumption. Note that the decrease in the saturation value of the reduced tagset indicates non-recoverability: this move basically affected tags that were used frequently within the lexicon.
Ambiguity classes and tagset size
Applying the four tagsets, lists of ambiguity classes were drawn up using all ambiguous word forms from the corpus. Table 7 presents the measures of text ambiguity at the four levels; the MSD values are repeated there for convenience of comparison. An important finding that emerges from the identical value of the ambiguity measure AW between the F and O tagsets is that the finer-resolution tagset does not significantly increase the average number of alternatives to ambiguous words. In other words, the same amount of ambiguity can be tackled with the O tagset, which is almost one tenth the size of the other. Table 8 also gives some supporting evidence that the O tagset complies with the requirement in principle (i), inasmuch as, while the size of the tagset is significantly reduced, there is no corresponding drop in the coverage of the tokens involved. The R tagset brings about a sizeable drop in the ambiguity types, but there is some decrease in the tokens as well, compared to the difference found between the F, M and O tagsets. The huge drop in the number of classes between the MSD and F notation seems to justify the need for the latter as the initial notation for further reductions: many of the spurious ambiguities present between MSDs can be resolved by preserving only features relevant for the tagging process, if only limited or no irrecoverable lexical information is lost. The distribution of intracategorial ambiguities across the different tagsets, presented in Table 9, underlines the importance of the evaluation of each of the features distinguishing such classes, and suggests that, if losing a limited amount of information is allowed, a significant decrease can be achieved in the number of ambiguity classes, again without a corresponding loss in token coverage.
Evaluation of tagsets
The three reduced Ctag sets proposed have been subject to practical evaluation in actual tagging experiments. We have made use of two HMM taggers: Thorsten Brants' trigram TnT tagger (Brants, 1998) and the MULTEXT-ISSCO bigram tagger (Gilbert and Amstrong, 1995) used in the Multext-East project (Erjavec and Ide, 1998). The training corpus consisted of two register-diverse corpora: the first three quarters of Orwell's 1984 and newspaper text, adding up to 87,969 tokens altogether. The test corpus included the rest of the Orwell and newspaper texts, 21,267 tokens in total. The MULTEXT-ISSCO tagger was trained with the Baum-Welch algorithm. The TnT tagger has the problem of learning possible ambiguity classes and words from the training corpus only. To remedy this situation, after the training phase, we enriched the generated lexicon file with further ambiguities and added words from the test corpus with their ambiguity classes. The results of the tagging are summarized in Table 10.
To some extent, the tagset cardinality correlates with the test results, except for the R tagset with the MULTEXT-ISSCO tagger. The increase of error rate in this case might be attributed to the lack of contextual information which could have been provided by features already missing from the R tagset. That there is practically no information loss in switching from the M to the O set, only the latter being more compact, is nicely justified by the improved results with both taggers. Obviously, only much more extensive testing could provide reliable justification; however, these preliminary experiments can also indicate whether the feature reductions/mergers applied so far are on the right track.
Conclusions
The method of studying the extent and types of ambiguity on word form lists derived from an extensive corpus has provided useful orientation about the rough strategy to follow in tagset design. The pattern of distribution of ambiguity is fairly even and widespread. It is not to be expected that tackling a handful of ubiquitous cases will spectacularly reduce the total ambiguity. The finding that a large percentage of ambiguous tokens belong to intracategorial ambiguity classes stresses the importance of the finer distinctions within the inflectional endings, which play an equal role in ambiguity resolution. This suggests that the overall merging or deletion of features might prove too crude a tactic and that each feature should be evaluated on its merit. In varying the size of the tagset, one can gain important insight by looking at the distribution of ambiguity classes and their coverage over tokens in the corpus, independent of any contextual information. However, it is the actual evaluation in tagging experiments that plays the decisive role if the resulting tagset is to comply with principle (ii).
Figure 1: A sample output of the morphological analyzer.
Table 2: Summary figures of the morphological analysis.
Table 5: Recoverable automatic reductions.
Head  Pos.  Meaning
A     2     number of the Adjective (always S)
A     3     not in use
R     3     not in use
R     4     marking verbal root in Adverbs
N     6     number of the possessor of the Noun
N     3     person of the Noun
V     2     number of the Verb
Table 3: The initial Ctag scheme (F set).
POS  Num     Pers     Stem [NAR]  Mood/Tense [V]  Case [N]  Def [V]  Owner's Num  Owner's Pers  Total*
N    2 [PS]  3 [123]  5 [QAVNP]   -               21        -        2 [PS]       3 [123]       2058
A    -       -        2 [AV]      -               -         -        -            -             2
R    -       -        2 [RV]      -               -         -        -            -             2
V    2 [PS]  3 [123]  -           5 [PRCSI]       -         3 [ID2]  -            -             79
Invariant minor categories: Q, D, PRE, RP, C, Int, Y                                            7
Total                                                                                           2148
N = Noun, A = Adjective, R = Adverb, V = Verb, Q = Numeral, D = Article, PRE = Verbal prefix, RP = Postposition, C = Conjunction, Y = Abbreviation, Int = Interjection; Def = agreement in definiteness with object (def, indef, 2nd person); Owner's Num = singular or plural owner; Owner's Pers = person marker of owner; [NAR], [V], [N] = POS categories to which the attribute applies; * = not all combinations are possible, so the total is not a simple product.
Table 6: Corpus tagsets.
Table 7: Measures of text ambiguity.
      TW     KW      AW
MSD   1.334  1.349   2.230
F     1.330  1.345   2.224
M     1.330  1.3445  2.223
O     1.330  1.3445  2.223
R     1.291  1.303   2.164
Table 10: Error rate with the tagsets.
Table 8: Number of ambiguity classes and their coverage across the different tag sets.
              MSD         F           M           O           R
amb. classes  7205        3123        1542        1370        590
tokens        20155486    19998444    19994060    19993592    18526417
Table 9: Intracategorial ambiguities and their coverage.
              F                M                O                R
amb. classes  2218 (71%)       759 (49.2%)      602 (44%)        120 (20%)
tokens        6355199 (31.7%)  6336289 (31.7%)  6335331 (31.7%)  3770062 (20.3%)
Footnote 2: The possibility of preserving this information in a concise way is currently under investigation. However, this needs a reformulation of the notation of the morphological analyzer into a labelled bracketing-like representation.
Footnote: The number of MSDs actually occurring is 5261 out of a possible value of around 10,000.
References
Brants, Thorsten, 1998. TnT - A Statistical Part-of-Speech Tagger, Installation and User Guide. University of Saarland.
Erjavec, Tomaž and Nancy Ide, 1998. The MULTEXT-EAST corpus. In Antonio Rubio, Natividad Gallardo, Rosa Castro, and Antonio Tejada (eds.), First International Conference on Language Resources and Evaluation, LREC'98. Granada: ELRA.
Erjavec, Tomaž and M. Monachini, 1997. Specifications and notation for lexicon encoding. COP Project 106 Multext-East, Deliverable D1.1 F (Final Report).
Gilbert, R. and S. Amstrong, 1995. Tagging tool. MULTEXT Deliverable 2.4.1.
Prószéky, Gábor and László Tihanyi, 1996. Humor - a Morphological System for Corpus Analysis. In Proceedings of the First TELRI Seminar in Tihany. Budapest.
Tihanyi, László, 1996. MULTEXT-EAST Deliverable D1.2, Application to Hungarian. Appendix 2, chapter Number of Hungarian Word Forms.
Tufiş, Dan, 1998. Tiered tagging. Technical Report 32, RACAI.
Tufiş, Dan, 1998. Tagging Romanian Texts: a Case Study for qtag, a Language Independent Probabilistic Tagger. In Antonio Rubio, Natividad Gallardo, Rosa Castro, and Antonio Tejada (eds.), First International Conference on Language Resources and Evaluation. Granada, Spain: ELRA.
Tufiş, Dan, 2000. Using a large set of EAGLES-compliant morpho-syntactic descriptors as a tagset for probabilistic tagging. In Proceedings of the Second International Conference on Language Resources and Evaluation. Athens. This volume.
Váradi, Tamás and Csaba Oravecz, 1999. Morphosyntactic ambiguity and tagset design for Hungarian. In Proceedings of the EACL LINC Workshop on Annotated Corpora. Bergen, Norway.
14,569,368
Bridging the Inflection Morphology Gap for Arabic Statistical Machine Translation
Statistical machine translation (SMT) is based on the ability to effectively learn word and phrase relationships from parallel corpora, a process which is considerably more difficult when the extent of morphological expression differs significantly across the source and target languages. We present techniques that select appropriate word segmentations in the morphologically rich source language based on contextual relationships in the target language. Our results take advantage of existing word level morphological analysis components to improve translation quality above state-of-the-art on a limited-data Arabic to English speech translation task.
[ 7124227, 10779203, 29938854, 7375882, 10170829, 15424398, 1559412 ]
Bridging the Inflection Morphology Gap for Arabic Statistical Machine Translation. Andreas Zollmann zollmann@cs.cmu.edu, Ashish Venugopal ashishv@cs.cmu.edu, Stephan Vogel stephan.vogel@cs.cmu.edu, School of Computer Science, Carnegie Mellon University. In Proceedings of the Human Language Technology Conference of the North American Chapter of the ACL, New York, June 2006. Association for Computational Linguistics.

Statistical machine translation (SMT) is based on the ability to effectively learn word and phrase relationships from parallel corpora, a process which is considerably more difficult when the extent of morphological expression differs significantly across the source and target languages. We present techniques that select appropriate word segmentations in the morphologically rich source language based on contextual relationships in the target language. Our results take advantage of existing word level morphological analysis components to improve translation quality above state-of-the-art on a limited-data Arabic to English speech translation task.

Introduction

The problem of translating from a language exhibiting rich inflectional morphology to a language exhibiting relatively poor inflectional morphology presents several challenges to the existing components of the statistical machine translation (SMT) process. This inflection gap causes an abundance of surface word forms 1 in the source language compared with relatively few forms in the target language. This mismatch aggravates several issues found in natural language processing: more unknown word forms in unseen data, more words occurring only once, more distinct words and lower token-to-type ratios (mean number of occurrences over all distinct words) in the source language than in the target language. Lexical relationships under the standard IBM models (Brown et al., 1993) do not account for many-to-many mappings, and phrase extraction relies heavily on the accuracy of the IBM word-to-word alignment.

In this work, we propose an approach to bridge the inflectional gap that addresses the issues described above through a series of preprocessing steps based on the Buckwalter Arabic Morphological Analyzer (BAMA) tool (Buckwalter, 2004). While (Lee et al., 2003) develop accurate segmentation models of Arabic surface word forms using manually segmented data, we rely instead on the translated context in the target language, leveraging the manually constructed lexical gloss from BAMA to select the appropriate segmented sense for each Arabic source word. Our technique, applied as preprocessing to the source corpus, splits and normalizes surface words based on the target sentence context. In contrast to (Popovic and Ney, 2004) and (Nießen and Ney, 2004), we do not modify the IBM models, and we leave reordering effects to the decoder. Statistically significant improvements (Zhang and Vogel, 2004) in BLEU and NIST translation score over a lightly stemmed baseline are reported on the available and well known BTEC IWSLT'05 Arabic-English corpus (Eck and Hori, 2005).
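The inflection-gap symptoms listed above are simple corpus statistics. A minimal sketch for measuring them on one side of a parallel corpus; the whitespace tokenization and the file names are illustrative assumptions, not details from the paper:

```python
from collections import Counter

def corpus_stats(path):
    """Count tokens, types, hapax legomena and the token-to-type ratio
    (mean number of occurrences over all distinct words)."""
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            counts.update(line.split())   # whitespace tokenization (assumption)
    tokens = sum(counts.values())
    types = len(counts)
    hapaxes = sum(1 for c in counts.values() if c == 1)
    return {"tokens": tokens, "types": types,
            "hapaxes": hapaxes, "tokens_per_type": tokens / types}

# A morphologically rich source side should show more types, more
# hapaxes and a lower tokens_per_type than its English target side.
src = corpus_stats("btec.ar")   # hypothetical file names
tgt = corpus_stats("btec.en")
```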
Arabic Morphology in Recent Work

Arabic-to-English machine translation exemplifies some of the issues caused by the inflection gap. Refer to (Buckwalter, 2005) and (Larkey et al., 2002) for examples that highlight morphological inflection for a simple Modern Standard Arabic (MSA) word and basic stemming operations that we use as our baseline system.

(Nießen and Ney, 2000) tackle the inflection gap for German-to-English word alignment by performing a series of morphological operations on the German text. They fragment words based on a full morphological analysis of the sentence, but need to use domain-specific and hand-written rules to deal with ambiguous fragmentation. (Nießen and Ney, 2004) also extend the corpus by annotating each source word with morphological information and building a hierarchical lexicon. The experimental results show dramatic improvements from sentence-level restructuring (question inversion, separated verb prefixes and merging phrases), but limited improvement from the hierarchical lexicon, especially as the size of the training data increases.

We conduct our morphological analysis at the word level, using the Buckwalter Arabic Morphological Analyzer (BAMA) version 2.0 (Buckwalter, 2004). BAMA analyzes a given surface word, returning a set of potential segmentations (on the order of a dozen) for the source word into prefixes, stems, and suffixes. Our techniques select the appropriate splitting from that set by taking into account the target sides (full sentences) of that word's occurrences in the training corpus. We now describe each splitting technique that we apply.

BAMA: Simple fragment splitting

We begin by simply replacing each Arabic word with the fragments representing the first of the possible splittings returned by the BAMA tool. BAMA uses simple word-based heuristics to rank the splitting alternatives.

CONTEXT: Single Sense selection

In the step CONTEXT, we take advantage of the gloss information provided in BAMA's lexicon. Each potential splitting corresponds to a particular choice of prefix, stem and suffix, all of which exist in the manually constructed lexicon, along with a set of possible translations (glosses) for each fragment. We select a fragmentation (choice of splitting for the source word) whose corresponding glosses have the most target side matches in the parallel translation (of the full sentence). The choice of fragmentation is saved and used for all occurrences of the surface form word in training and testing, introducing context sensitivity without parsing solutions. In the case of unseen words during testing, we simply segment them using the first alternative from the BAMA tool. This allows us to still translate an unseen test word correctly even if the surface form was never seen during training.

CORRMATCH: Correspondence matching

The Arabic language often encodes linguistic information within the surface word form that is not present in English. Word fragments that represent this missing information are misleading in the translation process unless explicitly aligned to the NULL word on the target side. In this step we explicitly remove fragments that correspond to lexical information that is not represented in English. While (Lee, 2004) builds part of speech models to recognize such elements, we use the fact that their corresponding English translations in the BAMA lexicon are empty. Examples of such fragments are case and gender markers.
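A minimal sketch of the CONTEXT selection and CORRMATCH filtering steps just described, assuming each BAMA analysis is represented as a list of (fragment, gloss-word set) pairs in BAMA's ranked order. The data structures, the scoring by the number of fragments with at least one gloss match, and the tie-breaking toward BAMA's first alternative are illustrative assumptions rather than the authors' exact implementation:

```python
def select_splitting(candidates, english_sentence):
    """candidates: list of splittings, each a list of
    (fragment, set_of_gloss_words) pairs, in BAMA's ranked order.
    Returns the splitting whose glosses best match the target sentence."""
    target = set(english_sentence.lower().split())

    def score(splitting):
        # count fragments with at least one gloss word on the target side
        return sum(1 for _, glosses in splitting if glosses & target)

    # max() keeps the earlier (BAMA-preferred) splitting on ties
    return max(candidates, key=score)

def corrmatch_filter(splitting):
    """CORRMATCH: drop fragments whose gloss set is empty, i.e. fragments
    (such as case and gender markers) with no English correspondence."""
    return [(frag, glosses) for frag, glosses in splitting if glosses]

# Toy usage (fragments and glosses are illustrative, not real BAMA output):
candidates = [
    [("w", {"and"}), ("qbl", {"accept", "before"})],
    [("wqbl", {"kiss"})],
]
print(select_splitting(candidates, "and he will accept it"))
# [('w', {'and'}), ('qbl', {'accept', 'before'})]
```

The winning splitting would then be cached for every occurrence of the surface form, as described above, so that training and testing segment a given word consistently.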
As an example of CORRMATCH removal, we present the Arabic sentence "h'*A lA ya zAl u gayor naZiyf" (after BAMA only), which becomes "h'*A lA ya zAl gayor naZiyf" after the CORRMATCH stage. The "u" has been removed.

Experimental Framework

We evaluate the impact of inflectional splitting on the BTEC (Takezawa et al., 2002) IWSLT05 Arabic language data track. The "Supplied" data track includes a 20K Arabic/English sentence pair training set, as well as a development ("DevSet") and test ("Test05") set of 500 Arabic sentences each and 16 reference translations per Arabic sentence. Details regarding the IWSLT evaluation criteria and data topic and collection methods are available in (Eck and Hori, 2005). We also evaluate on test and development data randomly sampled from the complete supplied dev and test data, due to considerations noted by (Josep M. Crego, 2005) regarding the similarity of the development and test data sets.

System description

Translation experiments were conducted using the (Vogel et al., 2003) system with reordering and future cost estimation. We trained translation parameters for 10 scores (language model, word and phrase count, and 6 translation model scores from (Vogel, 2005)) with Minimum Error Rate training on the development set. We optimized separately for both the NIST (Doddington, 2002) and the BLEU metrics (Papineni et al., 2002).

Translation Results

Tables 1 and 2 show the results of each stage of inflectional splitting on the BLEU and NIST metrics. Basic orthographic normalization serves as a baseline (merging all Alif, ta marbuta, ee forms to the base form). The test set NIST scores show steady improvements of up to 5 percent relative as more sophisticated splitting techniques are used, i.e. BAMA+CONTEXT+CORRMATCH. These improvements are statistically significant over the baseline in both metrics as measured by the techniques in (Zhang and Vogel, 2004). Our NIST results for all the final stages of inflectional splitting would place us above the top NIST scores from the IWSLT evaluation on the supplied test set. 2

On both DevSet/Test05 and the randomly split data, we see more dramatic improvements in the NIST scores than in BLEU. This might be due to the NIST metric's sensitivity to correctly translating certain high-gain words in the test corpus. Inflectional splitting techniques that cause previously unknown surface form words to be translated correctly after splitting can significantly impact the overall score.

Conclusion and Future Work

This work shows the potential for significant improvements in machine translation quality by directly bridging the inflectional gap across language pairs. Our method takes advantage of source and target language context when conducting morphological analysis of each surface word form, while avoiding complex parsing engines or refinements to the alignment training process. Our results are presented on moderately sized corpora rather than the scarce resource domain that has been traditionally employed to highlight the impact of detailed morphological analysis. By showing the impact of simple processing steps we encourage the creation of simple word and gloss level analysis tools for new languages and show that small investments in this direction (compared to high octane context sensitive parsing tools) can yield dramatic improvements, especially when rapid development of machine translation tools becomes increasingly relevant to the research community.
While our work focused on processing the morphologically rich language and then translating "down" into the morphologically poor language, we plan to use the analysis tools developed here to model the reverse translation process as well, the harder task of translating "up" into a highly inflected space.

Table 1: Translation results for each stage of inflectional splitting for the merged, sampled dev. and test data, highest scores in bold, relative improvements in brackets

Inflection system                       NIST - Dev.  NIST - Test    BLEU - Dev.  BLEU - Test
No preprocessing                        9.46         9.38           51.1         49.6
Orthographic normalization (baseline)   9.58         9.35           52.1         49.8
BAMA                                    10.10        9.60 (+2.7%)   53.8         48.8 (-2%)
BAMA+CONTEXT+CORRMATCH                  10.08        9.79 (+4.7%)   53.7         50.6 (+1.6%)

Table 2: Translation results for each stage of inflectional splitting for the BTEC Supplied DevSet/Test05 data, highest scores in bold, relative improvements in brackets

1 We use the term surface form to refer to a series of characters separated by whitespace.
2 The IWSLT evaluation did not allow systems to train separately for evaluation on BLEU or NIST, but results from the proceedings indicate that top performers in each metric optimized towards the respective metric.

Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, 19(2):263-311.

Tim Buckwalter. 2004. Buckwalter Arabic Morphological Analyzer Version 2.0. LDC Catalog No. LDC2004L02, Linguistic Data Consortium, www.ldc.upenn.edu/Catalog.

Tim Buckwalter. 2005. www.qamus.org/morphology.htm.

George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In Proc. ARPA Workshop on Human Language Technology.

Matthias Eck and Chiori Hori. 2005. Overview of the IWSLT 2005 evaluation campaign. In Proceedings of International Workshop on Spoken Language Translation, pages 11-17.

Josep M. Crego, Jose B. Mariño, and Adria de Gispert. 2005. The TALP ngram-based SMT system for IWSLT'05. In Proceedings of International Workshop on Spoken Language Translation, pages 191-198.

Leah Larkey, Lisa Ballesteros, and Margaret Connell. 2002.
Improving stemming for Arabic information retrieval: Light stemming and co-occurrence analysis. In Proc. of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.

Young-Suk Lee, Kishore Papineni, Salim Roukos, Ossama Emam, and Hany Hassan. 2003. Language model based Arabic word segmentation. In ACL, Sapporo, Japan, July 6-7.

Young-Suk Lee. 2004. Morphological analysis for statistical machine translation. In Proceedings of the Human Language Technology and North American Association for Computational Linguistics Conference (HLT/NAACL), Boston, MA, May 27-June 1.

Sonja Nießen and Hermann Ney. 2000. Improving SMT quality with morpho-syntactic analysis. In The 18th International Conference on Computational Linguistics.

Sonja Nießen and Hermann Ney. 2004. Statistical machine translation with scarce resources using morpho-syntactic information. Computational Linguistics, 30(2):181-204.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the Association of Computational Linguistics, pages 311-318.

H. Popovic and Hermann Ney. 2004. Improving word alignment quality using morpho-syntactic information. In 20th International Conference on Computational Linguistics (CoLing), Geneva, Switzerland.

Toshiyuki Takezawa, Eiichiro Sumita, Fumiaki Sugaya, Hirofumi Yamamoto, and Seiichi Yamamoto. 2002. Toward a broad-coverage bilingual corpus for speech translation of travel conversations in the real world. In Proc. of LREC 2002, pages 147-152, Las Palmas, Canary Islands, Spain, May.

Stephan Vogel, Ying Zhang, Fei Huang, Alicia Tribble, Ashish Venugopal, Bing Zhao, and Alex Waibel. 2003. The CMU statistical translation system. In Proceedings of MT Summit IX, New Orleans, LA, September.
Stephan Vogel. 2005. PESA: Phrase pair extraction as sentence splitting. In Proceedings of MT Summit X, Phuket, Thailand, September.

Ying Zhang and Stephan Vogel. 2004. Measuring confidence intervals for the machine translation evaluation metrics. In Proceedings of the 10th International Conference on Theoretical and Methodological Issues in Machine Translation (TMI), Baltimore, MD, October.
7,371,587
Swedish Language Processing in the Spoken Language Translator
The paper describes the Swedish language components used in the Spoken Language Translator (SLT) system. SLT is a multi-component system for translation of spoken English into spoken Swedish. The language processing parts of the system are the English Core Language Engine (CLE) and its Swedish counterpart, the S-CLE. The S-CLE is a general purpose natural language processing system for Swedish which in the SLT project was tuned towards the register of the air travel information (ATIS) domain. The peculiarities and the coverage of the resulting Swedish grammar are the main topics of the paper, even though the overall SLT system also is briefly described.
[]
Swedish Language Processing in the Spoken Language Translator. Björn Gambäck, Natural Language Processing Group, Swedish Institute of Computer Science, Box 1263, S-164 28 KISTA, Stockholm, Sweden.

The paper describes the Swedish language components used in the Spoken Language Translator (SLT) system. SLT is a multi-component system for translation of spoken English into spoken Swedish. The language processing parts of the system are the English Core Language Engine (CLE) and its Swedish counterpart, the S-CLE. The S-CLE is a general purpose natural language processing system for Swedish which in the SLT project was tuned towards the register of the air travel information (ATIS) domain. The peculiarities and the coverage of the resulting Swedish grammar are the main topics of the paper, even though the overall SLT system also is briefly described.

Introduction

The Swedish Core Language Engine (or S-CLE for short) (Gambäck and Rayner, 1992) is a general purpose natural language processing system for Swedish developed by the Swedish Institute of Computer Science from its English counterpart, the SRI Core Language Engine (CLE) (Alshawi, 1992). The key idea behind the system is indicated by the word "core": the S-CLE was intended to be used as a building block in a broad range of applications and has already been tested as part of a database query system (Gambäck and Ljung, 1993) and as a text-to-speech front-end (Gambäck and Eineborg, 1995). The two copies of the CLE have also been used together to form a machine translation system for a car-hire domain (Alshawi et al., 1991). In the Spoken Language Translator, described in the next section, the English CLE performed as a back-end to a speech recognition system, the S-CLE as a front-end to a speech synthesis system, and the two CLEs together formed a (text) translation system in the air travel information domain. In the course of the project, the previous Swedish system was completely redesigned and the general-purpose grammar expanded, but also tuned to cover the peculiarities of the register (sublanguage) of a particular domain during the first year of the project. The final section of the paper looks into the future, describes the ongoing work on making the system completely bidirectional, and sums up the previous discussion.

The SLT system

The Spoken Language Translator (SLT) is a system prototype which can translate spoken English into spoken Swedish; it was constructed from previously existing pieces of software, which were adapted for use in the speech translation task with as few changes as possible. The overall architecture of the current version of the SLT system is described shortly in this section; for a complete description see (Rayner et al., 1993) or (Agnäs et al., 1994).

Figure 1: Top-level architecture of the Spoken Language Translator

The components are coupled in sequence as shown in Figure 1. The input signal is processed by SRI Menlo Park's DECIPHER(TM) (Murveit et al., 1991), a speaker-independent continuous speech recognition system based on Hidden Markov Model technology. It produces a set of speech hypotheses which is passed to the English-language processor, the SRI Cambridge Core Language Engine (Alshawi, 1992). The CLE grammar associates each speech hypothesis with a set of possible quasi-logical forms, QLFs (Alshawi and van Eijck, 1989), typically producing 5 to 50 QLFs per hypothesis. In order to allow fast processing of a large number of hypotheses, a scaled-down version of the grammar induced with the machine learning technique "Explanation-Based Learning" (Samuelsson and Rayner, 1991) is first invoked and parsed with an LR-parser (Samuelsson, 1994). Only if this restricted-coverage grammar fails is the general-purpose grammar tried on the (by the speech recognizer) most preferred hypothesis. A preference component is then used to give each QLF a numerical score reflecting its linguistic plausibility (Alshawi and Carter, 1994). When the preference component has made its choice, the highest-scoring logical form is passed to the transfer component, which uses a set of simple non-deterministic recursive pattern matching rules to rewrite it into a set of possible corresponding Swedish representations (Alshawi et al., 1991; Gambäck and Bretan, 1994). The preference component is now invoked again, to select the most plausible transferred logical form. The result is fed to a second copy of the CLE, which uses a Swedish-language grammar and lexicon developed at SICS (Gambäck and Rayner, 1992) to convert the form into a Swedish string and an associated syntax tree. Finally, the string and tree are passed to the Telia Prophon speech synthesizer.

The performance figures (on the December 1993 ATIS corpus) are: 78.8% of all utterances are such that the top-scoring speech hypothesis is an acceptable one. If the speech hypothesis is correct, then an acceptable translation is produced in 68.3% of the cases, and the overall performance of the system is 53.8%. Limiting the test corpus to sentences of 10 words or less (688 utterances), these figures move up to 83.9% for speech recognition and 74.2% for language processing, with a 62.2% overall performance. For about 10% of the correctly recognized utterances, an unacceptable translation is produced. Nearly all of these are incorrect due to their containing errors in grammar or naturalness of expression, with errors due to divergence in meaning between the source and target sentences accounting for less than 1% of all translations. SLT performance is discussed at length in (Rayner et al., 1994).

Swedish Language Processing

As noted above, the S-CLE is a general purpose natural language processing system for Swedish. The main object of the system is to map certain natural language expressions into appropriate predicates in quasi-logical form. The system is based completely on unification and has a fairly large bidirectional phrase-structure type grammar (i.e., the grammar can be used both for analysis and generation), covering ... morphology, covering all main inflectional classes of nouns, verbs and adjectives. The S-CLE has been developed from the original English CLE by replacing English-specific modules (grammar, morphology, lexicon and lexicon acquisition) with corresponding Swedish-language versions, exploiting the large overlap between the structures of the two languages. Most of the Swedish grammar is thus completely equivalent to the English one; this section will concentrate on the parts that differ: ... the intricacies having to do with case, gender and number variation on nouns, as well as the noun phrase part of the grammar; ... some of the problems associated with a "free" word order; and for Korean. (Other differences between the two grammars reflect different tastes on the side of the grammarians rather than real grammatical differences and will thus be left out from the discussion here.)

A previous version of the Swedish grammar and how it was developed was described in (Gambäck and Rayner, 1992). There we also went into some detail on the (at least for a translation task) most vital differences between English and Swedish, both at the morphology and syntax levels. The present paper will thus refrain from recapitulating that discussion and only give an overview of the most important phenomena and their present treatment in the system. The rest of this section will in turn go through the different processing steps used when forming a QLF in the S-CLE and describe the rule sets used in each of them: first the morphological processing, where the rule base is divided into morphophonological "spelling" rules and morphosyntactic "production" rules; then the grammatical processing, which in turn is divided into two steps, syntactic parsing and semantic analysis. The rules of the grammar proper are thus divided into two different rule sets, one with the syntax and another with the (compositional) semantics. The main processing chain is as shown in Figure 2.

Figure 2: The analysis steps of the S-CLE (NL sentence -> morphological analysis [spelling rules, production rules] -> syntactic parsing [syntax rules] -> semantic analysis [semantic rules] -> QLF)

Morphology

Given that Swedish is an inflectional language, the treatment of the inflectional morphology by simple affix-stripping used in the original English CLE was far from sufficient. A "lazy" version of the two-level morphology (Koskenniemi, 1983) was thus implemented (Carter, 1995). This version is "lazy" in that it does not account for general changes of the stems of words. A typical spelling rule is the following, which shows what happens when the affix er is ... The first part of the rule is simply a rule name, mainly used for debugging. The main parts of the rule appear on the different sides of the arrow (=>): these are the surface and lexical forms, respectively. The vertical bars (|) indicate which letters may be changed in the rule. If the arrow is bidirectional (<=>), the rule must apply; here it may optionally apply. The final two lists put restrictions on the "variables" 1 and 2 in the rule, and on possible feature settings on the stem. In the current version of the Swedish morphology (which is still under development), 58 such spelling rules appear and are complemented by another set of 4 interword rules used in the derivational morphology, which in Swedish is also quite complex; however, since the current version of the system cannot handle derivation ...

While compounds in English are formed simply as groups of words, Swedish compounds are formed by actually compounding the words together. In general, this can be done in a wide variety of fashions, but in present-day Swedish mainly in two ways only: either by just "gluing" the words together, or by inserting an -s- between the words in the compound, as described in for example (Kiefer, 1970). ... the other words in the compound can be of other classes (e.g., adjectives or adverbs), but are normally nouns as well. As a rule-of-thumb, noun compounds are formed first without inserting an infix s, but if the compound consists of more than two words, an s will be inserted for every second word added to the compound, so for example the following sequence would give the words for "father", "grand-father" (father's father), "great grand-father", etc.: far, farfar, farfarsfar, ... Whether a particular noun will form compounds by inserting an s or not depends on the word in question and is thus lexicalized; a noun that takes the -s- infix ... in turn can produce an N that takes the null-infix. Production rules like the one above currently number 27 in the system, only 4 of which are used for forming compounds. These production rules are actually used by the syntactic morphological processing and are more or less paralleled by 33 semantic morphological derivation rules.
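The optional -s- infix and the lexicalized compounding behaviour described above can be illustrated with a toy compound splitter. This is only a sketch of the general mechanism, not the S-CLE's production-rule notation; the lexicon and the first-match recursion strategy are illustrative assumptions:

```python
# Toy lexicon of noun stems (illustrative; not the S-CLE's actual lexicon).
LEXICON = {"far", "mor", "bil"}

def split_compound(word):
    """Return one segmentation of a Swedish-style compound, trying plain
    concatenation ("gluing") first and then an -s- infix between parts."""
    if word in LEXICON:
        return [word]
    for i in range(1, len(word)):
        head, rest = word[:i], word[i:]
        if head not in LEXICON:
            continue
        tail = split_compound(rest)               # plain "gluing"
        if tail:
            return [head] + tail
        if rest.startswith("s"):                  # optional -s- infix
            tail = split_compound(rest[1:])
            if tail:
                return [head, "-s-"] + tail
    return None

print(split_compound("farfar"))      # ['far', 'far']                "grand-father"
print(split_compound("farfarsfar"))  # ['far', 'far', '-s-', 'far']  "great grand-father"
```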
Syntax

On the syntactic side, the English and Swedish grammars differ on many accounts. Firstly, several extra rules appear in the Swedish system, mainly to capture different kinds of movements, in particular the fact that Swedish allows for topicalization of just about any type of constituent. Space considerations prevent a full account of these rules from being included in this paper; they will be ...

An example is the three-valued definiteness feature, which ranges over values for "indefinite", "definite" and "possessive", the last one being used on genitive NPs. These are treated as forming complex determiners, so that 'en mans fru' (a man's wife), 'mannens fru' (the man's wife), and 'Kalles fru' (Kalle's wife) are all interpreted as having the structure [NP [DET N]], as exemplified in Figure 3.

Figure 3: The tree structure for the noun phrase 'en mans fru'

This is obtained by using the following two rules (here quite simplified, with most features removed). The first rule specifically forms determiners from genitive NPs (with the feature setting gen=y) regardless of the NP's definiteness (def=_), giving the newly formed determiner a possessive definiteness. The second rule forms NPs from determiners and nouns as long as the definiteness values on the daughters unify. This rule may be used on a wide range of determiner and noun types, including genitives.

Most of the differences between English and Swedish syntax are only mirrored at the (QLF, i.e., compositional) semantic level without any interesting additions. The most notable exception is the verb-phrases. Already at the syntax level, most word-order differences stem from the strongly verb-second nature of Swedish: formation of both YN- and WH-questions is by simple inversion of the subject and verb, without the introduction of an auxiliary. This is illustrated in the following examples: ...

Here we should note that the main trick used is lexicalization: information regarding for example verb subcategorization schemes (i.e., the number and type of verbal complements, such as objects, particles, etc.) is removed from the grammar and put in the lexicon instead. Syntactically, this enables us to treat both English and Swedish verb-phrases of different kinds with a rule where the value of the subcat feature of the verb has to unify with the rest of the verb-phrase. The value of subcat is specified for a particular verb in its lexical entry and can of course be empty (for intransitives, etc.). Our current Swedish grammar treats 48 different main verb complement patterns plus copulas and auxiliaries. Without claiming this to be the absolute number of Swedish verb types in any sense, it is easily understandable that without the strategy outlined above, we would have been forced to state specific instances of the verb-phrase formation rule for a vast number of cases.

Semantics

In the CLE, each syntactic rule is paralleled by (at least) one semantic rule. Note that each constituent in the semantic rules is a pair, with the first part holding the semantic logical-form fragment and the second part holding the (basically) syntactic information. For all English verbs and for Swedish main verbs, the verb-phrase rule above has a simple counterpart, but even for Swedish auxiliaries the treatment causes no problems, even though an extra case of the semantic rule had to be added in order to pass tense and aspect information properly: for main verbs, the tense information of the verb-phrase is the same as that of the daughter verb and is simply unified up together with the other semantic information, while in the auxiliary case, the semantic interpretation of the mother verb-phrase still is the one of the daughter verb-phrase, but the tense is to be taken from the auxiliary. Thus we get the following two (indeed very simplified!) semantic rules:

vp_v_comp_Normal, aux,
[(V, vp:[tense_and_aspect=TA]),
 (Aux, v:[aux=y, tense_aspect=TA, subcat=(V, vp:[])]),
 (V, vp:[])] ...
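The lexicalization of subcategorization described above can be illustrated outside the unification formalism: one generic VP-formation step that succeeds only when the complements supplied match the subcat list in the verb's lexical entry. The Python representation below (dictionaries for categories, a plain list for subcat) is an illustrative assumption, not the S-CLE's actual rule format:

```python
# Toy lexical entries: verb -> subcat list of required complement categories.
# (Illustrative; the S-CLE states these in unification-based lexical entries.)
SUBCAT = {
    "snore": [],             # intransitive
    "like":  ["np"],         # transitive
    "give":  ["np", "np"],   # ditransitive
}

def build_vp(verb, complements):
    """One generic rule, VP -> V Comp*, which succeeds only when the
    complement categories match the verb's lexicalized subcat list."""
    required = SUBCAT.get(verb)
    if required is None or [c["cat"] for c in complements] != required:
        return None                       # subcat fails to "unify"
    return {"cat": "vp", "head": verb, "comps": complements}

print(build_vp("like", [{"cat": "np", "head": "Maria"}]))
# {'cat': 'vp', 'head': 'like', 'comps': [{'cat': 'np', 'head': 'Maria'}]}
print(build_vp("snore", []))
# {'cat': 'vp', 'head': 'snore', 'comps': []}
```

With the complement requirements lexicalized this way, a single VP rule covers all 48 complement patterns instead of one rule instance per pattern.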
Negation

A specific case where the English and Swedish grammars differ significantly is in the treatment of negation. Negation in Swedish is expressed with the particle 'inte' (not), which is placed after the main verb in a main clause, but before it in a subordinate clause, thus:

Han snarkade inte. (He did not snore.)
...att han inte snarkade. (...that he did not snore.)

Three rules for verbs are needed, the first treating main clause negation, the second treating subordinate clause negation and the third treating a special case of main clause negation with a pronoun as object:

mannen [gillade inte] Maria (the man did not like Mary)
att mannen [inte gillade] Maria/mig (that the man did not like Mary/me)

..., including 'ofta' (often), 'alltid' (always) and 'troligen' (probably). ... formalism as such. The design choice in the English CLE was to treat negation semantically as an operator on the sentence structure which at the syntactic level pre-modifies a verb-phrase, forming a new verb-phrase, the rule thus ... For Swedish such a treatment does not suffice; negation is still viewed as an operator at the semantic level, but instead of modifying verb-phrases, it is taken as modifying the verb itself in the syntax. Since whether the modification is pre- or post- depends on the type of clause, this has been treated by adding a subordinate feature to S, VP and V:

v:[subordinate=y, ...] -> neg:[] + v:[vform=(\(att)), ...]

... the VP, Swedish modifiers are internal, which can be taken as an argument against having a VP node at all in Swedish, or as a basis for introducing a V node. The above treatment goes a bit along the way of the second alternative.

4 Swedish grammar coverage

Without going into more details of the Swedish grammar, we should note that its coverage on the ATIS task was increased substantially during the project (as shown by the figure for mid-June). Note that the figures in the graph refer to sentences that obtained a translation, any translation. For a discussion of the translation quality, see (Agnäs et al., 1994).

Figure 4: Transfer and generation coverage increase

Future Work and Conclusions

In the paper, the Swedish language ... processing parts have also been described. The overall SLT system prototype and its coverage after the first year of the project has only been briefly discussed, while the paper has focused on the different modules of the Swedish processing component. These have been described mainly on a pro-example type level, showing the various rule formalisms at work.

At the date of writing, work has just begun on a second phase of the SLT project. We intend to reverse the system, so that translation of spoken Swedish into spoken English will be possible. Even though the main part of the work needed for that will be on producing a Swedish speech recognition system, the language processing components will be extended quite a lot at the same time; partly because the Swedish part of the system has not been extensively tried for language processing (as opposed to just generation) for a while, partly because the new version of SLT also will include extended processing in a new spoken language database query task, as well as allowing for some translations in a computer-mediated person-to-person dialogue setup.

In parallel, work will be undertaken on systematically testing how the grammar coverage of the Swedish system can be tuned towards a new domain (Berglund and Gambäck, 1995) and whether the system is robust enough to be used as the basis for building a tree-bank of Swedish analyses (Santamarta et al., 1995). Both these tests will use the representative Swedish "Stockholm-Umeå corpus" (SUC) (Ejerhed et al., 1992).

Acknowledgements

The work reported here was funded by the Swedish Institute of Computer Science and Telia Networks. I would like to thank Ivan Bretan, Jussi Karlgren and ... everybody involved in the SLT project at SICS, my other colleagues in the SICS NLP group, and everybody else working on SLT related issues at the various sites, in particular Manny Rayner and Malgorzata Stys for some useful suggestions. Charlotta Berglund, Nikolaj Lindberg and Lena Santamarta are worthy of many thanks for (unwillingly) having had to help out in debugging the grammar while doing their BA and MA Thesis work.

References

Agnäs, M.-S., Alshawi, H., Bretan, I., Carter, D., Ceder, K., Collins, M., Crouch, R., Digalakis, V., Ekholm, B., Gambäck, B., Kaja, J., Karlgren, J., Lyberg, B., Price, P., Pulman, S., Rayner, M., Samuelsson, C., and Svensson, T. 1994. Spoken Language Translator: First-Year Report. Joint Research Report R94:03 and CRC-043, SICS and SRI International, Stockholm, Sweden and Cambridge, England.

Alshawi, H., editor. 1992. The Core Language Engine. The MIT Press, Cambridge, Massachusetts.

Alshawi, H. and Carter, D. 1994. Training and Scaling Preference Functions for Disambiguation. Computational Linguistics, 20:635-648.

Alshawi, H., Carter, D. M., Gambäck, B., and Rayner, M. 1991. Translation by Quasi Logical Form Transfer. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, Berkeley, California.

Alshawi, H. and van Eijck, J. 1989. Logical Forms in the Core Language Engine. In Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics, pages 25-32, Vancouver, British Columbia.

Gambäck, B. and Ljung, S. 1993. Question Answering in the Swedish Core Language Engine. In Proceedings of the 4th Scandinavian Conference on Artificial Intelligence, pages 212-225, Stockholm, Sweden. Also available as SICS Research Report R92014, Stockholm, Sweden.

Gambäck, B. and Rayner, M. 1992. The Swedish Core Language Engine. In Papers from the 3rd Nordic Conference on Text Comprehension in Man and Machine.

Kiefer, F. 1970. Swedish Morphology. Skriptor, Stockholm, Sweden.

Koskenniemi, K. 1983. Two-Level Morphology: A General Computational Model for Word-Form Recognition and Production.

Murveit, H., Butzberger, J., and Weintraub, M. 1991. Speech Recognition in SRI's Resource Management and ATIS Systems. In Proceedings of the 4th Speech and Natural Language Workshop. DARPA, Morgan Kaufmann.

Parkinson, S. 1992. A Computational Grammar for Use in Machine Translation. Master of Philosophy Thesis, Cambridge University, Cambridge, England.

Rayner, M. and Bouillon, P. 1995. Hybrid Transfer in an English-French Spoken Language Translator. (In manuscript.)

Rayner, M., Alshawi, H., Bretan, I., Carter, D. M., Digalakis, V., Gambäck, B., Kaja, J., Karlgren, J., Lyberg, B., Pulman, S. G., Price, P., and Samuelsson, C. 1993. A Speech to Speech Translation System Built from Standard Components. In Proceedings of the Workshop on Human Language Technology, Princeton, New Jersey. ARPA, Morgan Kaufmann.

Rayner, M., Carter, D. M., Price, P., and Lyberg, B. 1994. Estimating Performance of Pipelined Spoken Language Translation Systems. In Machine Translation in the Americas, pages 89-96, Columbia, Maryland.

Samuelsson, C. 1994. Notes on LR Parser Design. In Proceedings of the 15th International Conference on Computational Linguistics, Kyoto, Japan.

Samuelsson, C. and Rayner, M. 1991. Quantitative Evaluation of Explanation-Based Learning as an Optimization Tool for a Large-Scale Natural Language System. In Proceedings of the 12th International Joint Conference on Artificial Intelligence, pages 609-615, Sydney, Australia.

Santamarta, L., Lindberg, N., and Gambäck, B. 1995. Towards Building a Swedish Treebank. In Proceedings of the 10th Scandinavian Conference on Computational Linguistics, Helsinki University, Helsinki, Finland. (Presentation.)

Stys, M. 1995. Incorporating Discourse Aspects in English-Polish MT: Towards Robust Implementation. (In manuscript.)
21,708,183
KTH Tangrams: A Dataset for Research on Alignment and Conceptual Pacts in Task-Oriented Dialogue
There is a growing body of research focused on task-oriented instructor-manipulator dialogue, whereby one dialogue participant initiates a reference to an entity in a common environment while the other participant must resolve this reference in order to manipulate said entity. Many of these works are based on disparate if nevertheless similar datasets. This paper describes an English corpus of referring expressions in relatively free, unrestricted dialogue with physical features generated in a simulation, which facilitate analysis of dialogic linguistic phenomena regarding alignment in the formation of referring expressions known as conceptual pacts.
[ 28722153 ]
KTH Tangrams: A Dataset for Research on Alignment and Conceptual Pacts in Task-Oriented Dialogue. Todd Shore tcshore@kth.se, Theofronia Androulakaki, Gabriel Skantze gabriel@speech.kth.se, KTH Speech, Music and Hearing, Stockholm, Sweden. reference, conceptual pacts, task-oriented dialogue

There is a growing body of research focused on task-oriented instructor-manipulator dialogue, whereby one dialogue participant initiates a reference to an entity in a common environment while the other participant must resolve this reference in order to manipulate said entity. Many of these works are based on disparate if nevertheless similar datasets. This paper describes an English corpus of referring expressions in relatively free, unrestricted dialogue with physical features generated in a simulation, which facilitate analysis of dialogic linguistic phenomena regarding alignment in the formation of referring expressions known as conceptual pacts.

Introduction

There is recent interest in the role of referring expressions (REs) in situated dialogue and the alignment of referring language (RL) between dialogue participants (Barr and Keysar, 2002; Foster et al., 2006; Zarrieß et al., 2016; Aina et al., 2017). The datasets used in these works are useful for studying general patterns of alignment but are not specifically tailored to studying the effects of conceptual pacts (CPs) on RL in dialogue: CPs are patterns of RL which are mutually accepted (either explicitly or implicitly) and used by all dialogue participants throughout the course of a dialogue (Brennan and Clark, 1996). In order to study this phenomenon, we introduce a collection of recorded spoken English dialogues situated in a task called KTH Tangrams, wherein two participants collaborate in order to correctly select a predetermined abstract image on a procedurally-generated game board: participants take turns assuming the role of either instructor, who can see which piece must be selected, or manipulator, who can select a piece but cannot see which one must be selected. This experiment design is similar to that used for many other works regarding RL, the most similar of these being PentoRef's PentoCV and RDG-Pento (Zarrieß et al., 2016). PentoCV and RDG-Pento consist of one participant instructing the other which pentomino piece (Golomb, 1994) is to be manipulated, but both participants are allowed to speak in a free fashion, a design originally defined by Kousidis et al. (2012). KTH Tangrams, however, is especially well-suited to observing CPs because the experiment design entails participants deterministically referring to abstract entities multiple times in a dynamic environment, without the entities themselves playing a role in a larger, culminating goal as done by e.g. Foster et al. (2006).

Related Work

While there are many different works concerned with task-oriented dialogue, there are a number of differences in experiment design among them.

Static Versus Dynamic Environments

The roles of instructor and manipulator seen in many tasks used for dialogue research are analogous to the roles of director and matcher in traditional reference communication tasks, with the terms defined by Schober and Clark (1989) but the task itself originating from Krauss and Weinheimer (1964).
These tasks involve simple reference resolution, whereby the state of the environment shared by the director and matcher (e.g. a set of figures on a sheet of paper) does not change during the task. Static reference communication tasks often differ from instructor-manipulator tasks in that, in the latter, the state of the participants' shared environment changes during the task, entailing that CPs be robust throughout these changes, as observed by e.g. Ibarra and Tanenhaus (2016). Since the referent of a(n effective) CP should remain unambiguous throughout the dialogue for all members of the CP, a dynamic environment more easily shows how CPs differ from mere alignment of RL.

Repeating Versus Culminating Tasks Certain tasks are repetitive in that a similar sub-task is repeated with parametric variations, as done by Krauss and Weinheimer (1964). However, a number of works involve tasks which culminate in a predefined goal (cf. Foster et al., 2006). This means that participants are aware of a sub-task's relation to a larger process, which has an effect on the RL used and thus also on CPs (Ibarra and Tanenhaus, 2016). While these effects are interesting, we are interested in CPs based on properties of the CPs' referents in themselves rather than on the referents' purpose in a larger pattern of interaction: Resolving CPs based on "object-oriented names" such as the leg [of the lion being assembled] (Ibarra and Tanenhaus, 2016, p. 564) is a context-sensitive task which is dependent not only on the previous language used but also on the history of the culminating task as well as future actions, and thus entails action awareness, such as by incorporating intent prediction and decision planning (cf. Bard et al., 2008). Thus, we want to limit participants' accumulation of task-related knowledge over time.

Referential Aspects In many tasks, such as that of Krauss and Weinheimer (1964), participants can freely choose referents, e.g. which entity to describe. This complicates both manual and automatic annotation of referents and RL, so an ideal experiment should restrict possible referents as much as possible without hindering free dialogue. Likewise, we are interested in CP formation between humans, so the experiment should avoid machine-directed speech, which can differ greatly from human-directed speech (Kriz et al., 2010). Lastly, referent entities should have distinguishing features (Westerbeek et al., 2015) but not show extreme typicality, whereby referent features are strongly correlated: For example, a purple cow is highly atypical (Mitchell et al., 2013, p. 3062).

Experimental Paradigms There exist multiple experimental paradigms for task-oriented dialogue, each incorporating different combinations of environmental, task and referential aspects.

Map Tasks One form of instructor-manipulator task is the "map task", whereby one participant has information about a spatial area which the other does not. The former must then instruct the latter on how to navigate the map to accomplish a defined goal, e.g. reaching a particular landmark (Thompson et al., 1993; MacMahon et al., 2006). A variation of this is the case where the navigator is in fact situated within the map being navigated (Shimizu and Haas, 2009; Vogel and Jurafsky, 2010; Götze and Boye, 2016). In both cases, the state of the environment is static. However, the task culminates in a predefined goal, leading to confounds.
Joint Construction Tasks One experiment design involving dynamic environments is that of "joint construction tasks" (Fong et al., 2006; Foster et al., 2006; Spanger et al., 2012; Yan et al., 2016), where agents (human or otherwise) collaboratively assemble a predefined structure from component pieces. This dynamism makes such tasks well suited for studying the formation of CPs: Because certain physical features are static (e.g. a piece's shape or color) while others are dynamic and change throughout the course of the dialogue (e.g. location), the dynamic nature of RL can be better studied, similarly to how Ibarra and Tanenhaus (2016) observed changes in referring strategy when contrastive features previously used to disambiguate entities are no longer effective after new entities with similar features are introduced. However, these tasks culminate in an end goal, again leading to e.g. "object-oriented names" such as the leg [of the lion being assembled] (Ibarra and Tanenhaus, 2016, p. 564).

KTH Tangrams: Dynamic, Repeating Fixed-Referent Tasks We have argued that a corpus ideal for researching CPs involves a repeating, non-culminating task in a dynamic environment while lacking free choice of referent. Moreover, the referents themselves should be abstract enough to elicit descriptive RL. However, in order to capture the full variation of CP formation, the language used should still be relatively unrestricted human-human dialogue. Unlike the datasets reviewed above, our corpus KTH Tangrams fulfills all of these criteria (see Table 1).

Experiment Design Each experiment session involves two healthy adults with normal or corrected-to-normal vision and English either as a native language or as a common language used in a professional context. Each participant has their own PC on a LAN, a head-mounted microphone and speakers in a room separate from the other's, similarly to the setup of Manuvinakurike et al. (2015): They communicate freely via speech but cannot interact in any other way. Once both participants log into the game, they are simultaneously presented with an identical view of a simulated game board occupied by 20 tangram-like pieces (Gardner, 1974).

Reproducible Pseudo-Random Environments The board configuration is determined procedurally: The pieces' initial placements are chosen pseudo-randomly with a seed, as positions on an invisible 20 × 20 grid. 1 Likewise, the pieces' visual attributes are chosen pseudo-randomly using the same method, as is each piece's subsequent move. 2 Each piece is represented by the following features (a code sketch of this generation scheme follows the list):

• POSITIONX and POSITIONY are the position of the entity's center as a proportion of the total board area.
• HUE is derived from the individual sRGB color features RED, GREEN and BLUE (International Electrotechnical Commission, 1999).
• EDGECOUNT values are manually annotated for each unique SHAPE value; for the shapes currently present in the corpus, the values range from 6 to 16.
• SHAPE is a nominal feature enumerating 17 unique images which can be drawn to visualize an entity. The images, which are shown in Figure 1, were hand-chosen to have a roughly even distribution of typicality (cf. Mitchell et al., 2013).
• SIZE values are derived from the possible entity dimensions 2×2 (small), 3×3 (medium) or 4×4 (large) and are normalized by the total area of the board; since the board area is always 20 × 20, the effective feature values are 0.01, 0.0225 and 0.04.
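The following Python sketch illustrates how such a reproducible board could be generated from the features above. It is a hypothetical reconstruction, not the original implementation: the actual system uses Java's 48-bit linear congruential generator (footnote 2), whereas this sketch substitutes Python's random.Random, and the per-shape edge counts are drawn at random here instead of being manually annotated.

```python
import colorsys
import random
from dataclasses import dataclass

BOARD = 20  # the board is an invisible 20 x 20 grid

@dataclass
class Piece:
    position_x: float  # center position as a proportion of the board
    position_y: float
    hue: float         # derived from the sRGB RED, GREEN, BLUE features
    shape: int         # index into the 17 hand-chosen shape images
    edge_count: int    # annotated per shape; 6..16 in the corpus
    size: float        # piece area normalized by total board area

def generate_board(seed, n_pieces=20, n_shapes=17):
    """Reproducible pseudo-random board: the same seed always yields the
    same configuration, so any experiment can be replayed at will."""
    rng = random.Random(seed)
    # hypothetical stand-in for the manually annotated edge counts
    edge_counts = [rng.randint(6, 16) for _ in range(n_shapes)]
    pieces = []
    for _ in range(n_pieces):
        dim = rng.choice([2, 3, 4])          # small / medium / large
        col = rng.randint(0, BOARD - dim)    # top-left cell of the piece
        row = rng.randint(0, BOARD - dim)
        red, green, blue = rng.random(), rng.random(), rng.random()
        shape = rng.randrange(n_shapes)
        pieces.append(Piece(
            position_x=(col + dim / 2) / BOARD,
            position_y=(row + dim / 2) / BOARD,
            hue=colorsys.rgb_to_hsv(red, green, blue)[0],
            shape=shape,
            edge_count=edge_counts[shape],
            size=(dim * dim) / (BOARD * BOARD),  # 0.01, 0.0225 or 0.04
        ))
    return pieces
```

Calling generate_board(42) twice returns identical boards, mirroring the reproducibility property described above; the concrete attribute distributions are assumptions for illustration.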
Since the environments each dialogue is situated in are procedurally generated, a wide distribution of behavior can easily be created, which compensates for possible confounds, such as would arise if, in every dialogue session, there were a particular piece with a color and shape combination that would have effects on every dialogue in the corpus, either as a distractor or as the piece being referred to itself.

[Fragment of Table 1, a comparison of experimental paradigms in task-oriented dialogue; the recoverable rows are:]
Experiment | Environment | Task | Referent | Entity type | Addressee | Language
Krauss and Weinheimer (1964) | Static | Repeating | Free | Illustration | Human | Dialogue
Schober and Clark (1989) | Static | Repeating | Free | Tangram | Human | Dialogue
Thompson et al. (1993) | Static | Culminating | Free | Landmark | Human | Dialogue
Barr and Keysar (2002) | Dynamic | ... (remaining cells not recoverable)

Furthermore, since these environmental features are generated using a seeded pseudo-random number generator, any particular experiment can be reproduced at will.

Task Description During the task, both dialogue participants are seated at their own computer in separate rooms, each of which displays the current state of the game (see Figure 2). In each game round, the instructor sees a piece randomly highlighted, which is the piece they must instruct the manipulator to select. The manipulator has no indication or prior knowledge of which piece is to be selected, so the instructor must describe the piece well enough for the manipulator to click on it using a mouse. If the piece is selected correctly, the participants gain one point and proceed to the next round, where the roles are switched and the previously selected piece moves to a random place on the board. However, if the wrong piece is selected, they lose two points and are required to try again (see Figure 3). Each experiment session is intended to be 15 minutes long 3 and the participants are informed of this before starting, being encouraged to earn as many points as possible in this time. They are explicitly told that they are not restricted in any way regarding their language, aside from the one restriction that they focus only on the task at hand. In addition to the participants' speech being recorded and transcribed, the state of the game at the time of each utterance is available, including features representing each piece (i.e. possible referent) on the board at any time.

3 The mean duration for the corpus is 15:25.38 minutes.

Dialogue Transcription Recordings are manually segmented and transcribed into two channels of utterances composed of tokens.

Analysis Two different lexical analyses were performed in order to evaluate the appropriateness of the corpus for research in dialogic alignment of RL and conceptual pacts: Firstly, a trend of lexical convergence was observed both within speakers (i.e. a single participant's use of RL becomes less varied with time) as well as between speakers in a single dyad, whereby the RL used by one participant becomes more similar to their partner's RL. Secondly, TF-IDF scores were used to estimate the amount of information contained by language for resolving referents in a given dialogue on a global scale, i.e. not considering dialogue context.

Dialogic Convergence Three types of lexical alignment were calculated in order to illustrate a trend of convergence in language use within dyads: Within-speaker convergence shows how an individual participant's use of RL becomes more consistent throughout the course of the dialogue.
Between-speaker convergence shows how the use of RL by both participants in a dyad converges on the other's; comparing this with within-speaker convergence allows effects of dialogic lexical alignment to be discerned from any effects associated with a particular participant (Krauss and Weinheimer, 1964). General convergence shows how much the language used to refer to an entity with a given set of features converges as dialogue progresses over the entire corpus; this can be used to control for general convergence effects in discourse (Carroll, 1980; Clark and Wilkes-Gibbs, 1986). Convergence was measured using token type overlap, the proportion of token types (i.e. unique words) which overlap with the preceding coreference for a given referent r:

∆c^r_n = |c^r_n ∩ c^r_{n−1}| / |c^r_n ∪ c^r_{n−1}|    (1)

where each coreference c is treated as the set of all unique tokens (i.e. types) t ∈ T which occur in it. This is similar to Aina et al. (2017)'s "lexical alignment" metric but considers only the preceding coreference c^r_{n−1} rather than all preceding coreferences c^r_{n'} with n' < n. Thus, token type overlap is relatively better suited to measuring CP formation because CPs entail similar language in each RE rather than merely over the entire coreference chain (Brennan and Clark, 1996). Rather than manually annotating REs within utterances as done by Aina et al. (2017) and Zarrieß et al. (2016), the metrics were calculated over all tokens in the utterances of a given game round, taking all language produced during the round to refer to the piece r which must be selected in that round. This introduces noise but also facilitates faster data collection and simulates real-world scenarios, in which RE detection is non-trivial. Moreover, convergence can be calculated not only for language used to refer to a unique entity (i.e. each of 20 possible referents in a session) but also for individual features, as done in this paper with the categorical feature SHAPE. In other words, RL convergence can be measured not only for individual referents but also for features of said referents, and is thus generalizable to other entities with similar features regardless of whether they have previously been referred to in discourse.

Preprocessing For evaluation, all utterances from the instructor in a given game round were concatenated in order to create the sets of token types representing a coreference c. Before concatenation, the following tokens were removed from each utterance:

• Metalanguage such as COUGH and LAUGHTER
• Disfluencies such as l- in big block l- top left
• Fillers such as um and uh in um blue uh kind of a temple
• Duplicate tokens such as the second a in it's a a blue mountain

Utterances were concatenated in this way in order to mitigate effects of utterance segmentation on token type overlap: For example, there is in fact no overlap between the individual instructor utterances in Table 5, even though the following utterance pink like could be seen as an expansion of the RE initiated in the preceding utterance from the same speaker. Therefore, despite being separate "utterances" for the sake of transcription, they comprise a single referring unit. Likewise, comparing the overlap of the expansion pink like with its immediate predecessor what color for between-speaker convergence is not ideal because what color is not a proper RE but rather a request for expansion of the initiated RE.
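As a concrete illustration of Eq. (1) together with the preprocessing just described, the following Python sketch computes token type overlap for a round's concatenated instructor utterances. The filler and metalanguage inventories are assumptions for illustration; the paper does not enumerate them exhaustively.

```python
FILLERS = {"um", "uh"}                 # assumed filler inventory
METALANGUAGE = {"COUGH", "LAUGHTER"}   # assumed metalanguage tokens

def token_types(utterances):
    """Concatenate one round's instructor utterances and reduce them to a
    set of token types, dropping metalanguage, fillers and disfluent word
    fragments (transcribed with a trailing hyphen, e.g. 'l-').  Duplicate
    tokens collapse automatically because the result is a set."""
    types = set()
    for utterance in utterances:
        for token in utterance.split():
            if token in METALANGUAGE or token.lower() in FILLERS:
                continue
            if token.endswith("-"):  # disfluent fragment such as 'l-'
                continue
            types.add(token.lower())
    return types

def token_type_overlap(curr, prev):
    """Eq. (1): Jaccard overlap between the token types of the n-th and
    (n-1)-th coreference of the same referent r."""
    union = curr | prev
    if not union:
        return 0.0
    return len(curr & prev) / len(union)

# The token sequences "it 's a a blue bird" and "blue bird" (discussed in
# the following paragraph) overlap in 2 of 5 types, i.e. 0.40.
prev = token_types(["it 's a a blue bird"])
curr = token_types(["blue bird"])
assert round(token_type_overlap(curr, prev), 2) == 0.40
```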
Secondly, semantically weak tokens such as this one looks like introduce noise which must be addressed: the token sequences it, 's, a, blue, bird and blue, bird would have an overlap of only 0.40 despite having total overlap in the most relevant words, blue and bird. Concatenating utterances from the same speaker mitigates this by reducing the number of comparisons made overall: The two previous sequences would only be compared if they appeared in separate game rounds for the same referent or, in the case of calculating between-speaker convergence, if the other participant referred to the same entity in the role of instructor between the two utterances. Deriving the metric in this manner resulted in a set of 7,818 individual instructor utterances, which was then reduced to 3,288 unified coreferences for individual rounds, excluding those comprised solely of tokens filtered out in preprocessing.

Results A strong effect of convergence both within speakers (WITHIN) and between speakers (BETWEEN) was found when measuring token type overlap for coreference chains referring to a specific entity, c^r_1 . . . c^r_n (see Figure 4). Additionally, there was a weak but very significant inverse relationship between coreference sequence order and token type overlap in GENERAL convergence (see Table 6): This suggests that individual participants' usage of RL converges not only on itself but also on that of their dyad partner's, indicating the formation of CPs specific to that dyad.

Table 6: Significance of the correlation between coreference sequence order n (the n-th time a round in a game refers to a unique entity r) and instructor token type overlap ∆c^r_n.

A similar relationship between strong within-speaker and slightly weaker between-speaker convergence was seen when analyzing RL referring to specific features rather than to the entities themselves, i.e. language referring to all entities with a given SHAPE, C^s = {c ∈ C | SHAPE(c) = s} (see Figure 5). As when measuring overlap of "true" coreference chains for individual entities, there was a weak but very significant inverse relationship between coreference sequence order and token type overlap in GENERAL convergence (see Table 7): This suggests that RL and CPs are not only formed for individual referents but are at least partially generalizable to new referents which share features of previous referents, which warrants further analysis of alignment and CP negotiation in this experimental paradigm.

Table 7: Significance of the correlation between coreference sequence order n (the n-th time a round in a game refers to an entity with a unique SHAPE value s) and instructor token type overlap ∆c^s_n.

Information Content of RL Finally, we evaluated RL based on how specific it is to the referent r which the dialogue participants are to select in a given game round: This serves as an estimate of the amount of information contained by a particular set of language in the task of resolving the referent. When formulated in this way, the task of reference resolution can be envisaged as an information retrieval task. For this reason, we calculated TF-IDF scores (Spärck Jones, 1972) for each trigram of tokens g_i = ⟨t_{i−2}, t_{i−1}, t_i⟩ from each utterance c = t_1 . . . t_n of both participants in each dialogue, and treated each unique referent in the corpus r ∈ R as a "document", where |R| = 840
for |D| = 42 dyads with 20 referents per dyad:

tfidf(g, r, R) = tf(g, r) · idf(g, R), where tf(g, r) = f_{g,r} and idf(g, R) = log(|R| / |{r ∈ R | f_{g,r} > 0}|)    (2)

However, in order to encode the knowledge that RL converges in dialogue (see Section 5.1.), the TF-IDF score is normalized by the total number of coreferences of r, |C^r|:

tfidf_α(g, r, R) = tfidf(g, r, R) · α_r, where α_r = 1 + log|C^r|    (3)

The expression α_r = 1 + log|C^r| encodes the assumption that, as the number of coreferences |C^r| increases, so should the specificity of the RL used for r. Trigrams were constructed from each individual utterance in a dialogue u^r ∈ U^r after applying the token-filtering methods mentioned in Section 5.1.1. Using this metric to rank trigrams yielded semantically rich language which is also used repeatedly by participants throughout the course of dialogue: Figure 6 illustrates the 20 referents with the highest-scoring trigrams:

arg max_{r ∈ R, g_r ∈ C^r} tfidf_α(g, r, R)    (4)

The illustrated examples suggest that this metric is an effective post-hoc measure of the potential "referentiality" of language given a known referent, and that there is rich, varied usage of RL in this corpus comprising CPs: Not only is there observable variation of highly specific RL (i.e. RL with a high tfidf score) even for similar referents (e.g. the diamond vs. slanted rectangle), but there is also a high intra-document frequency tf for each of them. Moreover, this metric is purely linguistic and does not account for the features of the referents themselves or for inter-referent similarities; it is possible that incorporating this knowledge may yet further increase the discriminative power of this metric.

Figure 6: TF-IDF scores of language when considering a given unique referent r in a dyad d as a document. |C^r| is the number of coreferences of r in a game.

Conclusion KTH Tangrams is a corpus of high-quality task-oriented dialogue featuring observable convergence between participants in their use of referring language throughout the course of the dialogues they participate in. This indicates that the task's dynamic yet repeating nature, combined with the abstractness of tangram figures, lends itself not only to the study of referring language in general but also to the development of conceptual pacts for reference which are individual to a particular dialogue. In future work, we intend to use this dataset to explore the automatic understanding and generation of CPs in a dynamic context (i.e. for unseen dialogues). We encourage others interested in RL and CPs to take advantage of and improve this corpus as well, in order to establish a common corpus for comparable studies in referring language and conceptual pacts.

Release The linguistic transcriptions and environmental data will be made available under the Open Data Commons Attribution License v1.0 (Open Data Commons, 2010) as part of the forthcoming data bank Språkbanken Tal (Edlund, 2017), associated with the SWE-CLARIN 4 initiative Språkbanken, the Swedish Language Bank 5 (Hinrichs and Krauwer, 2014; Borin and Domeij, 2014); see http://sprakbanken.speech.kth.se/data/kth-tangrams.

Figure 1: The possible shapes of generated game pieces.
Figure 2: The game board as seen by the respective roles. (The in-figure example utterance reads: it looks like a blue crab sticking up his claws.)
Figure 3: Feedback for correct and incorrect selections.
Figure 4: Instructor token type overlap for rounds referring to a unique entity r for the n-th time in a game.
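Referring back to Eqs. (2) and (3), the normalized metric can be sketched in a few lines of Python. This is a hypothetical reconstruction: it assumes raw trigram counts for tf and the natural logarithm, neither of which is specified in the text.

```python
import math
from collections import Counter

def trigrams(tokens):
    """All token trigrams g_i = (t_{i-2}, t_{i-1}, t_i) of an utterance."""
    return list(zip(tokens, tokens[1:], tokens[2:]))

def tfidf_alpha(freq_by_ref, coref_counts):
    """freq_by_ref: {referent: Counter of trigram frequencies f_{g,r}},
    with each of the |R| referents treated as a 'document';
    coref_counts: {referent: |C^r|, the number of coreferences of r}.
    Returns {referent: {trigram: tfidf_alpha score}} per Eqs. (2)-(3)."""
    n_refs = len(freq_by_ref)
    # document frequency: for how many referents' language g occurs at all
    doc_freq = Counter()
    for counts in freq_by_ref.values():
        doc_freq.update(counts.keys())
    scores = {}
    for ref, counts in freq_by_ref.items():
        alpha = 1.0 + math.log(coref_counts[ref])  # alpha_r = 1 + log|C^r|
        scores[ref] = {
            g: f * math.log(n_refs / doc_freq[g]) * alpha
            for g, f in counts.items()
        }
    return scores
```

Ranking each referent's trigrams by these scores then reproduces the selection in Eq. (4), i.e. the highest-scoring trigram per referent.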
Figure 5: Instructor token type overlap for rounds referring to an entity with a unique SHAPE value s for the n-th time in a game.

Table 1: A comparison of experimental paradigms in task-oriented dialogue.

Table 2: Example of transcription where overlapping speech does not affect segmentation.

Time | Speaker role | Dialogue utterances
2:58.07 | Manipulator | [uh is it the that one or is it not that one LAUGHTER LAUGHTER] u95
 | Instructor | [the] u96 [y-yeah so the LAUGHTER] u97 [the same yellow the] u98

Table 3: Example of transcription where overlapping speech and disfluencies affect segmentation.

Utterances are defined as a minimal span of uninterrupted language which denotes a dialogue act in the scope of the task at hand. Disfluencies and self-repair delimit segmentation boundaries only if there is a significant period of silence after the potential boundary or if the other participant takes a dialogue turn, leading the participant to respond to the other's speech act, as shown in Tables 2 and 3 (Schegloff, 2000). An overview of the entire corpus is shown in Table 4.

     | Minutes  | Rnds. | Utts. | Tokens | Toks./utt.
Min  | 09:42.5  | 30    | 151   | 858    | 3.1
Max  | 17:49.1  | 138   | 625   | 2592   | 8.6
Mean | 15:25.1  | 78.3  | 355.8 | 1616.3 | 4.7
Sum  | 647:35.2 | 3288  | 14942 | 67884  | 198.8

Table 4: Overview of 42 recorded sessions.

Table 5: RE expansion across discontinuous utterances.

1 Although the coordinates are not indicated visually, they are still occasionally used by the participants because two or more pieces may randomly line up in rows or columns during the game.
2 Random values are generated using a 48-bit seed which is modified using a linear congruential formula (Knuth, 1981, 9-25) from the Java class library (Oracle Corporation, 2015).

Acknowledgments This work is supported by the SSF (Swedish Foundation for Strategic Research) project COIN. Correlation coefficient significance testing was performed with R version 3.4.1 x86_64 (R Core Team, 2015) and psych v1.7.8 (Revelle (2017); Hollander and Wolfe (1973, 185-194); Best and Roberts (1975)). Plots were made with the LaTeX package pgfplots v1.13 (Feuersänger, 2016). The authors would like to thank Jens Edlund for offering KTH Tangrams as one of the first datasets available through Språkbanken Tal.

Referring expressions and communicative success in task-oriented dialogues. L Aina, N Philippova, V Vogelmann, R Fernández, Proceedings of the 21st Workshop on the Semantics and Pragmatics of Dialogue. Volha Petukhova et al., editors. Saarbrücken, Germany. Aina, L., Philippova, N., Vogelmann, V., and Fernández, R. (2017). Referring expressions and communicative success in task-oriented dialogues. In Volha Petukhova et al., editors, Proceedings of the 21st Workshop on the Semantics and Pragmatics of Dialogue, pages 8-16, Saarbrücken, Germany, August. What tunes accessibility of referring expressions in task-related dialogue? E G Bard, R Hill, M E Foster, Proceedings of the 30th Annual Meeting of the Cognitive Science Society. Zygmunt Pizlo, et al., editors. Austin, TX, USA. Cognitive Science Society. Bard, E. G., Hill, R., and Foster, M. E. (2008). What tunes accessibility of referring expressions in task-related dialogue? In Zygmunt Pizlo, et al., editors, Proceedings of the 30th Annual Meeting of the Cognitive Science Society, pages 945-950, Austin, TX, USA, July. Cognitive Science Society. Anchoring comprehension in linguistic precedents. D J Barr, B Keysar, Journal of Memory and Language. 46(2). Barr, D. J. and Keysar, B.
(2002). Anchoring comprehen- sion in linguistic precedents. Journal of Memory and Language, 46(2):391-418, February. Algorithm as 89: The upper tail probabilities of spearman's Rho. D J Best, D E Roberts, Journal of the Royal Statistical Society. Series C (Applied Statistics). 243Best, D. J. and Roberts, D. E. (1975). Algorithm as 89: The upper tail probabilities of spearman's Rho. Journal of the Royal Statistical Society. Series C (Applied Statis- tics), 24(3):377-379. Språkteknologi och språkresurser för språken i Sverige: En statusrapport. Språk i Norden. L Borin, R Domeij, Borin, L. and Domeij, R. (2014). Språkteknologi och språkresurser för språken i Sverige: En statusrapport. Språk i Norden, pages 33-47. Conceptual pacts and lexical choice in conversation. S E Brennan, H H Clark, Journal of Experimental Psychology: Learning, Memory, and Cognition. 226Brennan, S. E. and Clark, H. H. (1996). Conceptual pacts and lexical choice in conversation. Journal of Experi- mental Psychology: Learning, Memory, and Cognition, 22(6):1482-1493, November. Nicoletta Calzolari, Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016). the Tenth International Conference on Language Resources and Evaluation (LREC 2016)Paris, France, MayEuropean Language Resources Association (ELRANicoletta Calzolari, et al., editors. (2016). Proceed- ings of the Tenth International Conference on Lan- guage Resources and Evaluation (LREC 2016), Paris, France, May. European Language Resources Association (ELRA). Naming and describing in social communication. J M Carroll, Language and Speech. 234Carroll, J. M. (1980). Naming and describing in social communication. Language and Speech, 23(4):309-322. Referring as a collaborative process. H H Clark, D Wilkes-Gibbs, Cognition. 221Clark, H. H. and Wilkes-Gibbs, D. (1986). Referring as a collaborative process. Cognition, 22(1):1-39, February. Skapandet av grunden för en svensk talbank. J Edlund, D2016-0240KTH Speech, Music and Hearing. Stockholm, SwedenTechnical ReportEdlund, J. (2017). Skapandet av grunden för en svensk tal- bank. Technical Report D2016-0240, KTH Speech, Mu- sic and Hearing, Stockholm, Sweden, March. Manual for Package PGFPLOTS: 2D/3D Plots in L A T E X, Version 1.13. C Feuersänger, Feuersänger, C., (2016). Manual for Package PGFPLOTS: 2D/3D Plots in L A T E X, Version 1.13, January. The human-robot interaction operating system. T Fong, C Kunz, L M Hiatt, M Bugajska, Proceedings of the 1 st ACM SIGCHI/SIGART Conference on Human-robot Interaction, HRI '06. the 1 st ACM SIGCHI/SIGART Conference on Human-robot Interaction, HRI '06New York, NY, USAACMFong, T., Kunz, C., Hiatt, L. M., and Bugajska, M. (2006). The human-robot interaction operating system. In Pro- ceedings of the 1 st ACM SIGCHI/SIGART Conference on Human-robot Interaction, HRI '06, pages 41-48, New York, NY, USA. ACM. Human-robot dialogue for joint construction tasks. M E Foster, T By, M Rickert, A Knoll, Proceedings of the 8 th International Conference on Multimodal Interfaces, ICMI '06. the 8 th International Conference on Multimodal Interfaces, ICMI '06New York, NY, USAACMFoster, M. E., By, T., Rickert, M., and Knoll, A. (2006). Human-robot dialogue for joint construction tasks. In Proceedings of the 8 th International Conference on Mul- timodal Interfaces, ICMI '06, pages 68-71, New York, NY, USA. ACM. Mathematical games: On the fanciful history and the creative challenges of the puzzle game of tangrams. M Gardner, Scientific American. 2312Gardner, M. 
(1974). Mathematical games: On the fanciful history and the creative challenges of the puzzle game of tangrams. Scientific American, 231(2):98-103B. Polyominoes: Puzzles, Patterns, Problems, and Packings. S W Golomb, Princeton University PressPrinceton, NJ, USA, 2nd editionGolomb, S. W. (1994). Polyominoes: Puzzles, Patterns, Problems, and Packings. Princeton University Press, Princeton, NJ, USA, 2 nd edition. SpaceRef: A corpus of street-level geographic descriptions. J Götze, J Boye, Calzolari et al. (Calzolari et al.Götze, J. and Boye, J. (2016). SpaceRef: A corpus of street-level geographic descriptions. In Calzolari et al. (Calzolari et al., 2016). The CLARIN research infrastructure: Resources and tools for ehumanities scholars. E Hinrichs, S Krauwer, Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14). Nicoletta Calzolari, et al.the Ninth International Conference on Language Resources and Evaluation (LREC'14)Reykjavik, Iceland, MayEuropean Language Resources Association (ELRAHinrichs, E. and Krauwer, S. (2014). The CLARIN re- search infrastructure: Resources and tools for ehuman- ities scholars. In Nicoletta Calzolari, et al., editors, Pro- ceedings of the Ninth International Conference on Lan- guage Resources and Evaluation (LREC'14), Reykjavik, Iceland, May. European Language Resources Associa- tion (ELRA). M Hollander, D A Wolfe, Nonparametric Statistical Methods. New York, NY, USAJohn Wiley & SonsHollander, M. and Wolfe, D. A. (1973). Nonparametric Statistical Methods. John Wiley & Sons, New York, NY, USA. The flexibility of conceptual pacts: Referring expressions dynamically shift to accommodate new conceptualizations. A Ibarra, M K Tanenhaus, Frontiers in Psychology. 7Ibarra, A. and Tanenhaus, M. K. (2016). The flexibility of conceptual pacts: Referring expressions dynamically shift to accommodate new conceptualizations. Frontiers in Psychology, 7:561-574. Multimedia systems and equipment -Colour measurement and management -Part 2-1: Colour management -Default RGB colour space -sRGB. International Standard IEC 61966-2-1:1999, International Electrotechnical Commission. International Electrotechnical Commission. International Electrotechnical Commission. (1999). Mul- timedia systems and equipment -Colour measurement and management -Part 2-1: Colour management - Default RGB colour space -sRGB. International Stan- dard IEC 61966-2-1:1999, International Electrotechnical Commission, Geneva, Switzerland. The Art of. D E Knuth, Seminumerical Algorithms. Reading, MA, USA, 2Addison-Wesley2nd editionKnuth, D. E. (1981). The Art of Computer Programming, Volume 2: Seminumerical Algorithms. Addison-Wesley, Reading, MA, USA, 2 nd edition. Evaluating a minimally invasive laboratory architecture for recording multimodal conversational data. S Kousidis, T Pfeiffer, Z Malisz, P Wagner, D Schlangen, Proceedings of the Interdisciplinary Workshop on Feedback Behaviors in Dialog. the Interdisciplinary Workshop on Feedback Behaviors in DialogStevenson, WA, USAKousidis, S., Pfeiffer, T., Malisz, Z., Wagner, P., and Schlangen, D. (2012). Evaluating a minimally invasive laboratory architecture for recording multimodal conver- sational data. In Proceedings of the Interdisciplinary Workshop on Feedback Behaviors in Dialog, pages 39- 42, Stevenson, WA, USA, September. Changes in reference phrases as a function of frequency of usage in social interaction: a preliminary study. R M Krauss, S Weinheimer, Psychonomic Science. 11Krauss, R. M. and Weinheimer, S. (1964). 
Changes in ref- erence phrases as a function of frequency of usage in so- cial interaction: a preliminary study. Psychonomic Sci- ence, 1(1):113-114, January. Robotdirected speech: Using language to assess first-time users' conceptualizations of a robot. S Kriz, G Anderson, J G Trafton, 2010 5 th ACM/ IEEE International Conference on Human-Robot Interaction (HRI). Osaka, JapanKriz, S., Anderson, G., and Trafton, J. G. (2010). Robot- directed speech: Using language to assess first-time users' conceptualizations of a robot. In 2010 5 th ACM/ IEEE International Conference on Human-Robot Inter- action (HRI), pages 267-274, Osaka, Japan, March. Walk the talk: Connecting language, knowledge, and action in route instructions. M Macmahon, B Stankiewicz, B Kuipers, Proceedings of the Twenty-First National Conference on Artificial Intelligence and the Eighteenth Innovative Applications of Artificial Intelligence Conference (AAAI-06). the Twenty-First National Conference on Artificial Intelligence and the Eighteenth Innovative Applications of Artificial Intelligence Conference (AAAI-06)Menlo Park, CA, USAAAAI PressMacMahon, M., Stankiewicz, B., and Kuipers, B. (2006). Walk the talk: Connecting language, knowledge, and ac- tion in route instructions. In Proceedings of the Twenty- First National Conference on Artificial Intelligence and the Eighteenth Innovative Applications of Artificial Intel- ligence Conference (AAAI-06), pages 1475-1482, Menlo Park, CA, USA, July. AAAI Press. Reducing the cost of dialogue system training and evaluation with online, crowd-sourced dialogue data collection. R Manuvinakurike, M Paetzel, D Devault, Proceedings of the 19 th Workshop on the Semantics and Pragmatics of Dialogue. Christine Howes et al.the 19 th Workshop on the Semantics and Pragmatics of DialogueGothenburg, SwedenManuvinakurike, R., Paetzel, M., and DeVault, D. (2015). Reducing the cost of dialogue system training and eval- uation with online, crowd-sourced dialogue data collec- tion. In Christine Howes et al., editors, Proceedings of the 19 th Workshop on the Semantics and Pragmatics of Dialogue, pages 113-121, Gothenburg, Sweden, August. Open Data Commons Attribution License (ODC-By). M Mitchell, E Reiter, K Van Deemter, v1.0Proceedings of the 35 th Annual Meeting of the Cognitive Science Society. Markus Knauff, et al.the 35 th Annual Meeting of the Cognitive Science SocietyAustin, TX, USAJava TM SE Development Kit 8, update 45 (JDK 8u45Mitchell, M., Reiter, E., and van Deemter, K. (2013). Typ- icality and object reference. In Markus Knauff, et al., editors, Proceedings of the 35 th Annual Meeting of the Cognitive Science Society, Austin, TX, USA, July. Cog- nitive Science Society. Open Data Commons. (2010). Open Data Commons Attribution License (ODC-By) v1.0. https:// opendatacommons.org/licenses/by/1.0/. Last accessed on 12 February 2018. Oracle Corporation. (2015). Java TM SE Develop- ment Kit 8, update 45 (JDK 8u45). http: //www.oracle.com/technetwork/java/ javase/8u45-relnotes-2494160.html. Last accessed on 12 February 2018. R Core Team, R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing. Vienna, AustriaR Core Team, (2015). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. psych: Procedures for Psychological, Psychometric, and Personality Research. W Revelle, Evanston, Illinois, USANorthwestern UniversityR package version 1.7.8Revelle, W., (2017). 
psych: Procedures for Psychological, Psychometric, and Personality Research. Northwestern University, Evanston, Illinois, USA. R package version 1.7.8. Overlapping talk and the organization of turn-taking for conversation. E A Schegloff, Language in Society. 291Schegloff, E. A. (2000). Overlapping talk and the organi- zation of turn-taking for conversation. Language in So- ciety, 29(1):1-63. Understanding by addressees and overhearers. M F Schober, H H Clark, Cognitive Psychology. 212Schober, M. F. and Clark, H. H. (1989). Understanding by addressees and overhearers. Cognitive Psychology, 21(2):211-232. Learning to follow navigational route instructions. N Shimizu, A Haas, Proceedings of the 21 st International Joint Conference on Artificial Intelligence. the 21 st International Joint Conference on Artificial IntelligencePasadena, CA, USAIJCAI OrganizationShimizu, N. and Haas, A. (2009). Learning to follow nav- igational route instructions. In Proceedings of the 21 st International Joint Conference on Artificial Intelligence, pages 1488-1493, Pasadena, CA, USA. IJCAI Organiza- tion. REX-J: Japanese referring expression corpus of situated dialogs. Language Resources and Evaluation. P Spanger, M Yasuhara, R Iida, T Tokunaga, A Terai, N Kuriyama, 46Spanger, P., Yasuhara, M., Iida, R., Tokunaga, T., Terai, A., and Kuriyama, N. (2012). REX-J: Japanese refer- ring expression corpus of situated dialogs. Language Re- sources and Evaluation, 46(3):461-491, September. A statistical interpretation of term specificity and its application in retrieval. Spärck Jones, K , Journal of Documentation. 281Spärck Jones, K. (1972). A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 28(1):11-21. The HCRC map task corpus: Natural dialogue for speech recognition. H S Thompson, A Anderson, E G Bard, G Doherty-Sneddon, A Newlands, C Sotillo, Proceedings of the Workshop on Human Language Technology, HLT '93. the Workshop on Human Language Technology, HLT '93Stroudsburg, PA, USAAssociation for Computational LinguisticsThompson, H. S., Anderson, A., Bard, E. G., Doherty- Sneddon, G., Newlands, A., and Sotillo, C. (1993). The HCRC map task corpus: Natural dialogue for speech recognition. In Proceedings of the Workshop on Human Language Technology, HLT '93, pages 25-30, Strouds- burg, PA, USA. Association for Computational Linguis- tics. Learning to follow navigational directions. A Vogel, D Jurafsky, Proceedings of the 48 th Annual Meeting of the Association for Computational Linguistics. the 48 th Annual Meeting of the Association for Computational LinguisticsUppsala, SwedenAssociation for Computational LinguisticsVogel, A. and Jurafsky, D. (2010). Learning to follow nav- igational directions. In Proceedings of the 48 th Annual Meeting of the Association for Computational Linguis- tics, pages 806-814, Uppsala, Sweden, July. Association for Computational Linguistics. Stored object knowledge and the production of referring expressions: the case of color typicality. H Westerbeek, R Koolen, A Maes, Frontiers in Psychology. 6Westerbeek, H., Koolen, R., and Maes, A. (2015). Stored object knowledge and the production of referring expres- sions: the case of color typicality. Frontiers in Psychol- ogy, 6:1-12. Task execution based-on human-robot dialogue and deictic gestures. P Yan, B He, L Zhang, J Zhang, 2016 IEEE International Conference on Robotics and Biomimetics (ROBIO). Qingdao, ChinaIEEEYan, P., He, B., Zhang, L., and Zhang, J. (2016). 
Task execution based-on human-robot dialogue and deictic gestures. In 2016 IEEE International Conference on Robotics and Biomimetics (ROBIO), pages 1918-1923, Qingdao, China, December. IEEE. PentoRef: A corpus of spoken references in task-oriented dialogues. S Zarrieß, J Hough, C Kennington, R Manuvinakurike, D Devault, R Fernández, D Schlangen, Calzolari et al. (Calzolari et al.Zarrieß, S., Hough, J., Kennington, C., Manuvinakurike, R., DeVault, D., Fernández, R., and Schlangen, D. (2016). PentoRef: A corpus of spoken references in task-oriented dialogues. In Calzolari et al. (Calzolari et al., 2016).
2,067,306
Data-Driven Dependency Parsing of New Languages Using Incomplete and Noisy Training Data
We present a simple but very effective approach to identifying high-quality data in noisy data sets for structured problems like parsing, by greedily exploiting partial structures. We analyze our approach in an annotation projection framework for dependency trees, and show how dependency parsers from two different paradigms (graph-based and transition-based) can be trained on the resulting tree fragments. We train parsers for Dutch to evaluate our method and to investigate to which degree graph-based and transition-based parsers can benefit from incomplete training data. We find that partial correspondence projection gives rise to parsers that outperform parsers trained on aggressively filtered data sets, and achieve unlabeled attachment scores that are only 5% behind the average UAS for Dutch in the CoNLL-X Shared Task on supervised parsing (Buchholz and Marsi, 2006).
[ 10661378, 5151364, 696805, 628455, 259144, 14829769, 2547341, 1916754, 5813778, 15279538, 9431510, 38407095, 1364249, 6681594, 5219389 ]
Data-Driven Dependency Parsing of New Languages Using Incomplete and Noisy Training Data Association for Computational Linguistics Copyright Association for Computational Linguistics June 2009. 2009 Kathrin Spreyer spreyer@ling.uni-potsdam.de Department of Linguistics University of Potsdam Germany Jonas Kuhn kuhn@ling.uni-potsdam.de Department of Linguistics University of Potsdam Germany Data-Driven Dependency Parsing of New Languages Using Incomplete and Noisy Training Data Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL) the Thirteenth Conference on Computational Natural Language Learning (CoNLL) Boulder, Colorado Association for Computational Linguistics June 2009. 2009 We present a simple but very effective approach to identifying high-quality data in noisy data sets for structured problems like parsing, by greedily exploiting partial structures. We analyze our approach in an annotation projection framework for dependency trees, and show how dependency parsers from two different paradigms (graph-based and transition-based) can be trained on the resulting tree fragments. We train parsers for Dutch to evaluate our method and to investigate to which degree graph-based and transition-based parsers can benefit from incomplete training data. We find that partial correspondence projection gives rise to parsers that outperform parsers trained on aggressively filtered data sets, and achieve unlabeled attachment scores that are only 5% behind the average UAS for Dutch in the CoNLL-X Shared Task on supervised parsing (Buchholz and Marsi, 2006).

Introduction Many weakly supervised approaches to NLP rely on heuristics or filtering techniques to deal with noise in unlabeled or automatically labeled training data, e.g., in the exploitation of parallel corpora for crosslingual projection of morphological, syntactic or semantic information. While heuristic approaches can implement (linguistic) knowledge that helps to detect noisy data (e.g., Hwa et al. (2005)), they are typically task- and language-specific and thus introduce a component of indirect supervision. Non-heuristic filtering techniques, on the other hand, employ reliability measures (often unrelated to the task) to predict high-precision data points (e.g., Yarowsky et al. (2001)). In order to reach a sufficient level of precision, filtering typically has to be aggressive, especially for highly structured tasks like parsing. Such aggressive filtering techniques incur massive data loss and enforce trade-offs between the quality and the amount of usable data. Ideally, a general filtering strategy for weakly supervised training of structured analysis tools should eliminate noisy subparts in the automatic annotation without discarding its high-precision aspects; thereby data loss would be kept to a minimum. In this paper, we propose an extremely simple approach to noise reduction which greedily exploits partial correspondences in a parallel corpus, i.e., correspondences potentially covering only substructures of translated sentences. We implemented this method in an annotation projection framework to create training data for two dependency parsers representing different parsing paradigms: the MSTParser (McDonald et al., 2005) as an instance of graph-based dependency parsing, and the MaltParser (Nivre et al., 2006) to represent transition-based dependency parsing. In an empirical evaluation, we investigate how they react differently to incomplete and noisy training data.
Despite its simplicity, the partial correspondence approach proves very effective and leads to parsers that achieve unlabeled attachment scores that are only 5% behind the average UAS for Dutch in the CoNLL-X Shared Task (Buchholz and Marsi, 2006). After a summary of related work in Sec. 2, we discuss dependency tree projection (Sec. 3) and partial correspondence (Sec. 4). In Sec. 5, we give an overview of graph- and transition-based dependency parsing and describe how each can be adapted for training on partial training data in Sec. 6. Experimental results are presented in Sec. 7, followed by an analysis in Sec. 8. Sec. 9 concludes.

Related Work Annotation projection has been applied to many different NLP tasks. On the word or phrase level, these include morphological analysis, part-of-speech tagging and NP-bracketing (Yarowsky et al., 2001), temporal analysis (Spreyer and Frank, 2008), and semantic role labeling (Padó and Lapata, 2006). In these tasks, word labels can technically be introduced in isolation, without reference to the rest of the annotation. This means that an aggressive filter can be used to discard unreliable data points (words in a sentence) without necessarily affecting high-precision data points in the same sentence. By using only the bidirectional word alignment links, one can implement a very robust filter of this kind, as the bidirectional links are generally reliable, even though they have low recall for overall translational correspondences (Koehn et al., 2003). The bidirectional alignment filter is common practice (Padó and Lapata, 2006); a similar strategy is to discard entire sentences with low aggregated alignment scores (Yarowsky et al., 2001). On the sentence level, Hwa et al. (2005) were the first to project dependency trees from English to Spanish and Chinese. They identify unreliable target parses (as a whole) on the basis of the number of unaligned or over-aligned words. In addition, they manipulate the trees to accommodate non-isomorphic sentences. Systematic non-parallelisms between source and target language are then addressed by hand-crafted rules in a post-projection step. These rules account for an enormous increase in the unlabeled f-score of the direct projections, from 33.9 to 65.7 for Spanish and from 26.3 to 52.4 for Chinese. But they need to be designed anew for every target language, which is time-consuming and requires knowledge of that language. Research in the field of unsupervised and weakly supervised parsing ranges from various forms of EM training (Pereira and Schabes, 1992; Klein and Manning, 2004; Smith and Eisner, 2004; Smith and Eisner, 2005) through bootstrapping approaches like self-training (McClosky et al., 2006) to feature-based enhancements of discriminative reranking models (Koo et al., 2008) and the application of semi-supervised SVMs (Wang et al., 2008). The partial correspondence method we present in this paper is compatible with such approaches and can be combined with other weakly supervised machine learning schemes. Our approach is similar to that of Clark and Curran (2006), who use partial training data (CCG lexical categories) for domain adaptation; however, they assume an existing CCG resource for the language in question to provide this data.

Projection of Dependency Trees Most state-of-the-art parsers for natural languages are data-driven and depend on the availability of sufficient amounts of labeled training data. However, manual creation of treebanks is time-consuming and labour-intensive.
One way to avoid the expensive annotation process is to automatically label the training data using annotation projection (Yarowsky et al., 2001): Given a suitable resource (such as a parser) in language L1, and a word-aligned parallel corpus with languages L1 and L2, label the L1 portion of the parallel text (with the parser) and copy the annotations to the corresponding (i.e., aligned) elements in language L2. This is illustrated in Fig. 1a. The arrows between English and Dutch words indicate the word alignment. Assuming we have a parser to produce the dependency tree for the English sentence, we build the tree for the Dutch sentence by establishing arcs between words w_D (e.g., Ik) and h_D (heb) if there are aligned pairs (w_D, w_E) (Ik and I) and (h_D, h_E) (heb and have) such that h_E is the head of w_E in the English tree. Annotation projection assumes direct correspondence (Hwa et al., 2005) between languages (or annotations), which, although valid in many cases, does not hold in general: non-parallelism between corresponding expressions in L1 and L2 causes errors in the target annotations. The word alignment constitutes a further source of errors if it is established automatically, which is typically the case in large parallel corpora. We have implemented a language-independent framework for dependency projection and use the Europarl corpus (Koehn, 2005) as the parallel text. Europarl consists of the proceedings of the European Parliament, professionally translated into 11 languages (approx. 30 million words per language). The data was aligned on the word level with GIZA++ (Och and Ney, 2003). 1 In the experiments reported here, we use the language pair English-Dutch, with English as the source for projection (L1) and Dutch as L2. The English portion of the Europarl corpus was lemmatized and POS tagged with the TreeTagger (Schmid, 1994) and then parsed with MaltParser (which is described in Sec. 6), trained on a dependency-converted version of the WSJ part of the Penn Treebank (Marcus et al., 1994), but with the automatic POS tags. The Dutch sentences were only POS tagged (with TreeTagger). 2

1 Following standard practice, we computed word alignments in both directions (L1 → L2 and L2 → L1); this gives rise to two unidirectional alignments. The bidirectional alignment is the intersection of the two unidirectional ones.
2 The Dutch POS tags are used to train the monolingual parsers from the projected dependency trees (Sec. 7).

Data Loss Through Filtering We quantitatively assess the impact of various filtering techniques on a random sample of 100,000 English-Dutch sentence pairs from Europarl (avg. 24.9 words/sentence). The English dependency trees are projected to their Dutch counterparts as explained above for Fig. 1a. The first filter we examine is the one that considers exclusively bidirectional alignments. It admits dependency arcs to be projected only if the head h_E and the dependent w_E are each aligned bidirectionally with some word in the Dutch sentence. This is indicated in Fig. 1b, where the English verb are is aligned with the Dutch translation heeft only in one direction. This means that none of the dependencies involving are are projected, and the projected structure is not connected. We will discuss in subsequent sections how less restricted projection methods can still incorporate such data. Table 1 shows the quantitative effect of the bidirectional filter in the row labeled 'bidirectional'. The proportion of usable sentences is reduced to 2.11%.
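The bidirectional filter just described can be made concrete with a short Python sketch. This is an illustrative reconstruction, not the authors' code; it assumes 0-based token indices, head = -1 for the root, and a word alignment given as the intersection of the two unidirectional GIZA++ alignments.

```python
def project_bidirectional(src_heads, bi_alignment, tgt_len):
    """Project a source dependency tree onto the target sentence using
    only bidirectional word-alignment links.  src_heads[i] is the head of
    source token i (-1 for the root); bi_alignment is a set of (src, tgt)
    index pairs present in both unidirectional alignments.  Returns a
    target head array with None for words that receive no arc."""
    src2tgt = dict(bi_alignment)  # bidirectional links are near one-to-one
    tgt_heads = [None] * tgt_len
    for w_src, h_src in enumerate(src_heads):
        if w_src not in src2tgt:
            continue                  # dependent not reliably aligned
        w_tgt = src2tgt[w_src]
        if h_src == -1:
            tgt_heads[w_tgt] = -1     # the source root projects to root
        elif h_src in src2tgt:
            tgt_heads[w_tgt] = src2tgt[h_src]
        # otherwise the head is only weakly aligned: no arc is projected,
        # which is how unconnected structures like Fig. 1b arise
    return tgt_heads
```

For the sentence pair in Fig. 1b, the missing bidirectional link for are/heeft leaves the corresponding Dutch words unattached (None), i.e. exactly the unconnected structure this strict filter produces.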
Consequently, the vocabulary size diminishes by a factor of 10, and the average sentence length drops considerably from almost 25 to less than 7 words, suggesting that most non-trivial examples are lost.

Constrained Fallback Projection As an instance of a more relaxed projection of complete structures, we also implemented a fallback to unidirectional links which projects further dependencies after a partial structure has been built based on the more reliable bidirectional links. That is, the dependencies established via unidirectional alignments are constrained by the existing subtrees, and are subject to the wellformedness conditions for dependency trees. 3 Fig. 1c shows how the fallback mechanism, initialized with the unconnected structure built with the bidirectional filter, recovers a parse tree for the weakly aligned sentence pair in Fig. 1b. Starting with the leftmost word in the Dutch sentence and its English translation (U and You), there is a unidirectional alignment for the head of You: are is aligned to heeft, so U is established as a dependent of heeft via fallback. Likewise, heeft can now be identified as the root node. Note that the (incorrect) alignment between heeft and You will not be pursued, because it would lead to heeft being a dependent of itself and thus violating the wellformedness conditions. Finally, the subtree rooted in gelijk is incorporated as the second dependent of heeft. As expected, the proportion of examples that pass this filter rises, to 6.42% (Table 1, 'fallback'). However, we will see in Sec. 7 that parsers trained on this data do not improve over parsers trained on the bidirectionally aligned sentences alone. This is presumably due to the noise that inevitably enters the training data through fallback.

Partial Correspondence Projection So far, we have only considered complete trees, i.e., projected structures with exactly one root node. This is a rather strict requirement, given that even state-of-the-art parsers sometimes fail to produce plausible complete analyses for long sentences, and that non-sentential phrases such as complex noun phrases still contain valuable, non-trivial information. We therefore propose partial correspondence projection which, in addition to the complete annotations produced by tree-oriented projection, yields partial structures: It admits fragmented analyses in case the tree-oriented projection cannot construct a complete tree. Of course, the nature of those fragments needs to be restricted so as to exclude data with no (interesting) dependencies. E.g., a sentence of five words with a parse consisting of five fragments provides virtually no information about dependency structure. Hence, we impose a limit (fixed at 3 after quick preliminary tests on automatically labeled development data) on the number of fragments that can make up an analysis, as sketched in the code below. Alternatively, one could require a minimum fragment size. As an example, consider again Fig. 1b. This example would be discarded in strict tree projection, but under partial correspondence it is included as a partial analysis consisting of three fragments: [U] [heeft] [volkomen gelijk], with volkomen attached as a dependent of gelijk. Although the amount of information provided in this analysis is limited, the arc between gelijk and volkomen, which is strongly supported by the alignment, can be established without including potentially noisy data points that are only weakly aligned. We use partial correspondence in combination with bidirectional projection. 4
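A minimal Python sketch of the fragment limit, together with the CoNLL-X encoding described in the next section, might look as follows. The helper names and the unlabeled 'dep' placeholder are hypothetical, and the head conventions follow the projection sketch above.

```python
MAX_FRAGMENTS = 3  # limit fixed after preliminary tests (see above)

def fragment_roots(tgt_heads):
    """Every word without a projected head (None) or carrying a projected
    root (-1) heads one fragment of the partial analysis."""
    return [i for i, h in enumerate(tgt_heads) if h is None or h == -1]

def accept_partial(tgt_heads):
    """Partial correspondence projection: keep the analysis only if it
    consists of at most MAX_FRAGMENTS fragments."""
    return len(fragment_roots(tgt_heads)) <= MAX_FRAGMENTS

def to_conll_rows(tokens, tags, tgt_heads):
    """Encode a (possibly partial) analysis in CoNLL-X style: fragment
    roots attach to the artificial root token 0 with the special FRAG
    relation, while the projected root keeps ROOT (cf. example (1) in
    the next section)."""
    rows = []
    for i, (form, tag) in enumerate(zip(tokens, tags)):
        head = tgt_heads[i]
        if head == -1:
            rows.append((i + 1, form, tag, 0, "ROOT"))
        elif head is None:
            rows.append((i + 1, form, tag, 0, "FRAG"))
        else:
            rows.append((i + 1, form, tag, head + 1, "dep"))  # label omitted
    return rows

# The Fig. 1b example: [U] [heeft] [volkomen <- gelijk], three fragments.
heads = [None, -1, 3, None]   # volkomen (index 2) depends on gelijk (3)
assert accept_partial(heads)  # 3 fragments <= MAX_FRAGMENTS
```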
As can be seen in Table 1 ('bi+frags ≤3'), this combination boosts the amount of usable data to a range similar to that of the fallback technique for trees; but unlike the latter, partial correspondence continues to impose a high-precision filter (bidirectionality) while improving recall through relaxed structural requirements (partial correspondence). Table 2 shows how fragment size varies with sentence length.

Data-driven Dependency Parsing Models for data-driven dependency parsing can be roughly divided into two paradigms: graph-based and transition-based models (McDonald and Nivre, 2007).

Graph-based Models In the graph-based approach, global optimization considers all possible arcs to find the tree T̂ such that

T̂ = arg max_{T ∈ D} s(T) = arg max_{T ∈ D} Σ_{(i,j,l) ∈ A_T} s(i, j, l)

where D is the set of all well-formed dependency trees for the sentence, A_T is the set of arcs in T, and s(i, j, l) is the score of an arc between words w_i and w_j with label l. The specific graph-based parser we use in this paper is the MSTParser of McDonald et al. (2005). The MSTParser learns the scoring function s using an online learning algorithm (Crammer and Singer, 2003) which maximizes the margin between T̂ and D \ {T̂}, based on a loss function that counts the number of words with incorrect parents relative to the correct tree.

Transition-based Models In contrast to the global optimization employed in graph-based models, transition-based models construct a parse tree in a stepwise way: At each point, the locally optimal parser action (transition) t* is determined greedily on the basis of the current configuration c (previous actions plus local features):

t* = arg max_{t ∈ T} s(c, t)

where T is the set of possible transitions. As a representative of the transition-based paradigm, we use the MaltParser (Nivre et al., 2006). It implements incremental, deterministic parsing algorithms and employs SVMs to learn the transition scores s.

Parsing with Fragmented Trees To make effective use of the fragmented trees produced by partial correspondence projection, both parsing approaches need to be adapted for training on sentences with unconnected substructures. Here we briefly discuss how we represent these structures, and then describe how we modified the parsers. We use the CoNLL-X data format for dependency trees (Buchholz and Marsi, 2006) to encode partial structures. Specifically, every fragment root specifies as its head an artificial root token w_0 (distinguished from a true root dependency by a special relation FRAG). Thus, sentences with a fragmented parse are still represented as a single sentence, including all words; the difference from a fully parsed sentence is that unconnected substructures are attached directly under w_0. For instance, the partial parse in Fig. 1b would be represented as follows (details omitted):

(1)
1 U pron 0 FRAG
2 heeft verb 0 ROOT
3 volkomen adj 4 mod
4 gelijk noun 0 FRAG

Graph-based Model: fMST In the training phase, the MSTParser tries to maximize the scoring margin between the correct parse and all other valid dependency trees for the sentence. However, in the case of fragmented trees, the training example is not, strictly speaking, correct, in the sense that it does not coincide with the desired parse tree. In fact, this desired tree is among the other possible trees that MST assumes to be incorrect, or at least suboptimal. In order to relax this assumption, we have to ensure that the loss of the desired tree is zero.
Graph-based Model: fMST

In the training phase, the MSTParser tries to maximize the scoring margin between the correct parse and all other valid dependency trees for the sentence. However, in the case of fragmented trees, the training example is not, strictly speaking, correct, in the sense that it does not coincide with the desired parse tree. In fact, this desired tree is among the other possible trees that MST assumes to be incorrect, or at least suboptimal. In order to relax this assumption, we have to ensure that the loss of the desired tree is zero. While it is impossible to single out this one tree (since we do not know which one it is), we can steer the margin in the right direction with a loss function that assigns zero loss to all trees that are consistent with the training example, i.e., trees that differ from the training example at most on those words that are fragment roots (e.g., gelijk in Fig. 1). To reflect this notion of loss during optimization, we also adjust the definition of the score of a tree:

$s(T) = \sum_{(i,j,l) \in A_T : l \neq \mathrm{FRAG}} s(i,j,l)$

We refer to this modified model as f(iltering)MST.

Transition-based Model: fMalt

In the transition-based paradigm, it is particularly important to preserve the original context (including unattached words) of a partial analysis, because the parser partly bases its decisions on neighboring words in the sentence. The role of isolated FRAG dependents as context, rather than as proper nodes in the tree, can be emphasized, as with the MSTParser, by eliminating their effect on the margin learned by the SVMs. Since MaltParser scores local decisions, this simply amounts to suppressing the creation of SVM training instances for such nodes (U and gelijk in (1)). That is, where the feature model refers to context information, unattached words provide this information (e.g., the feature vector for volkomen in (1) contains the form and POS of gelijk), but there are no instances indicating how they should be attached themselves. This technique of excluding fragment roots during training will be referred to as fMalt.

Experiments

Setup

We train instances of the graph- and the transition-based parser on projected dependencies, and occasionally refer to these as "projected parsers". 5 All results were obtained on the held-out CoNLL-X test set of 386 sentences (avg. 12.9 words/sentence) from the Alpino treebank (van der Beek et al., 2002). The Alpino treebank consists mostly of newspaper text, which means that we are evaluating the projected parsers, which are trained on Europarl, in an out-of-domain setting, in the absence of manually annotated Europarl test data. Parsing performance is measured in terms of unlabeled attachment score (UAS), i.e., the proportion of tokens that are assigned the correct head, irrespective of the label. 6

Footnote 6: The labeled accuracy of our parsers lags behind the UAS, because the Dutch dependency relations in the projected annotations arise from a coarse heuristic mapping from the original English labels. We therefore report only UAS.

To establish upper and lower bounds for our task of weakly supervised dependency parsing, we proceed as follows. We train MaltParsers and MSTParsers on (i) the CoNLL-X training portion of the Alpino treebank (195,000 words), (ii) 100,000 Europarl sentences parsed with the parser obtained from (i), and (iii) the concatenation of the data sets (i) and (ii). The first is a supervised upper bound (80.05/82.43% UAS) 7 trained on manually labeled in-domain data, while the second constitutes a weaker bound (75.33/73.09%) subject to the same out-of-domain evaluation as the projected parsers, and the third (77.47%) is a self-trained version of (i). We note in passing that the supervised model does not benefit from self-training. Two simple baselines provide approximations to a lower bound: baseline 1 attaches every word to the preceding word, achieving 23.65%. Analogously, baseline 2 attaches every word to the following word (27.63%). These systems are summarized in Table 3.
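For concreteness, a minimal sketch of the UAS computation defined above; the flat list encoding of head assignments is our own.

```python
# Unlabeled attachment score: the proportion of tokens whose predicted
# head matches the gold head, ignoring the dependency label.
def uas(gold_heads, pred_heads):
    assert len(gold_heads) == len(pred_heads)
    correct = sum(g == p for g, p in zip(gold_heads, pred_heads))
    return correct / len(gold_heads)

print(uas([2, 0, 4, 2], [2, 0, 4, 3]))  # one wrong head out of four -> 0.75
```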
Footnote 7: The upper bound models are trained with the same parameter settings as the projected parsers (see fn. 5), which were adjusted for noisy training data. Thus improvements are likely with other settings (Nivre et al., 2006).

Table 4: UAS of parsers trained on projected dependency structures for (a) a sample of 100,000 sentences, subject to filtering, (b) 10 random samples, each with 100,000 words after filtering (average scores given), and (c) the entire Europarl corpus, subject to filtering.

Results

Table 4a summarizes the results of training parsers on the 100,000-sentence sample analyzed above. Both the graph-based (MST) and the transition-based (Malt) parsers react similarly to the more or less aggressive filtering methods, but to different degrees. The first two rows of the table show the parsers trained on complete trees ('trees (bidirectional)' and 'trees (fallback)'). In spite of the additional training data gained by the fallback method, the resulting parsers do not achieve higher accuracy; on the contrary, there is a drop in UAS, especially in the transition-based model (−6.66%). The increased level of noise in the fallback data has less (but significant) 8 impact on the graph-based counterpart (−2.68%).

Turning to the parsers trained on partial correspondence data ('bi+frags ≤3'), we observe even greater deterioration in both parsing paradigms if the data is used as is. However, in combination with the fMalt/fMST systems ('bi+frags ≤3 (fMalt/fMST)'), both parsers significantly outperform the tree-oriented models ('trees (bidirectional)') by 3.21% (Malt) and 2.26% (MST).

It would be natural to presume that the superiority of the partial correspondence filter is merely due to the amount of training data, which is larger by a factor of 5.04. We address this issue by isolating the effect on the quality of the data, and hence the success at noise reduction: in Table 4b, we control for the amount of data that is effectively used in training, so that each filtered training set consists of 100,000 words. Considering the Malt models, we find that the trends suggested in Table 4a are confirmed: the pattern of relative performance emerges even though any quantitative (dis-)advantages have been eliminated. 9 10 Interestingly, the MSTParser does not appear to gain from the increased variety (cf. Table 1) in the partial data: it does not differ significantly from the 'trees (bi.)' model.

Finally, Table 4c provides the results of training on the entire Europarl, or what remains of the corpus after the respective filters have been applied. The results corroborate those obtained for the smaller samples.

Footnote 9: The degree of skewedness in the filtered data is not controlled, as it is an important characteristic of the filters.

Footnote 10: Some of the parsers trained on the larger data sets (Table 4b+c) achieve worse results than their smaller counterparts in Table 4a. We conjecture that this is due to the thresholded POS-based data split, performed prior to SVM training: larger training sets induce decision models with more specialized SVMs, which are more susceptible to tagging errors. This could be avoided by increasing the threshold for splitting.
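As a rough illustration of the significance testing used here (a t-test over the scores of repeated training cycles, cf. footnote 8), assuming SciPy's two-sample t-test; the score lists are invented placeholders, not results from the paper.

```python
from scipy import stats

# Ten UAS values per system, one per training cycle (placeholder numbers).
cycles_a = [65.7, 65.9, 65.5, 65.8, 66.0, 65.6, 65.7, 65.9, 65.4, 65.8]
cycles_b = [62.9, 63.1, 62.8, 63.0, 63.2, 62.7, 63.1, 62.9, 63.0, 62.8]

t, p = stats.ttest_ind(cycles_a, cycles_b)
print(f"t = {t:.2f}, significant at p < .01: {p < 0.01}")
```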
In summary, the results support our initial hypothesis that partial correspondence for sentences containing a highly reliable part is preferable to relaxing the reliability criterion and, in the case of the transition-based MaltParser, also to aggressively filtering out all but the reliable complete trees. With UASs around 70%, both systems are only 5% behind the average 75.07% UAS achieved for Dutch in the CoNLL-X Shared Task.

Analysis

We have seen that the graph- and the transition-based parser react similarly to the various filtering methods. However, there are interesting differences in the magnitude of the performance changes. If we compare the two tree-oriented filters 'trees (bi.)' and 'trees (fb.)', we observe that, although both Malt and MST suffer from the additional noise that is introduced via the unidirectional alignments, the drop in accuracy is much less pronounced in the latter, graph-based model. Recall that in this paradigm, optimization is performed over the entire tree by scoring edges independently; this might explain why noisy arcs in the training data have only a negligible impact. Conversely, the transition-based MaltParser, which constructs parse trees in steps of locally optimal decisions, has an advantage when confronted with partial structures: the individual fragments provide exactly the local context, plus lexical information about the (unconnected) wider context.

To give a more detailed picture of the differences between predicted and actual annotations, we show the performance (of the parsers from Table 4b) separately for binned arc length (Table 5) and sentence length (Table 6). As expected, the performance of both the supervised upper bounds (Alpino-Malt/MST) and the projected parsers degrades as dependencies get longer, and the difference between the two grows. Performance across sentence length remains relatively stable. But note that both tables again reflect the pattern we saw in Table 4. Importantly, the relative ranking (in terms of f-score, not shown, resp. UAS) is still in place even in long-distance dependencies and long sentences. This indicates that the effects we have described are not artifacts of a bias towards short dependencies.

In addition, Table 5 sheds some light on the impact of fMalt/fMST in terms of the trade-off between precision and recall. Without the specific adjustments to handle fragments, partial structures in the training data lead to an immense drop in recall. By contrast, when the adapted parsers fMalt/fMST are applied, they boost recall back to a level comparable to or even above that of the tree-oriented projection parsers, while maintaining precision. Again, this effect can be observed across all arc lengths, except arcs to root, which the 'bi+frags' models are naturally overly eager to predict.

Finally, the learning curves in Fig. 2 illustrate how much labeled data would be required to achieve comparable performance in a supervised setting. The graph-based upper bound (Alpino-MST) reaches the performance of fMST (trained on the entire Europarl) with approx. 25,000 words of manually labeled treebank data; Alpino-Malt achieves the performance of fMalt with approx. 35,000 words. The manual annotation of even these moderate amounts of data involves considerable efforts, including the creation of annotation guidelines and tools, and the training of annotators.

Conclusion

In the context of dependency parsing, we have proposed partial correspondence projection as a greedy method for noise reduction, and illustrated how it can be integrated with data-driven parsing. Our experimental results show that partial tree structures are well suited to train transition-based dependency parsers. Graph-based models do not benefit as much from additional partial structures, but instead are more robust to noisy training data, even when the training set is very small.
In future work, we will explore how well the techniques presented here for English and Dutch work for languages that are typologically further apart, e.g., English-Greek or English-Finnish. Moreover, we are going to investigate how our approach, which essentially ignores unknown parts of the annotation, compares to approaches that marginalize over hidden variables. We will also explore ways of combining graph-based and transition-based parsers along the lines of Nivre and McDonald (2008).

Figure 2: Learning curves for the supervised upper bounds. They reach the performance of the projected parsers with ∼25,000 (MST) resp. 35,000 (Malt) words.

Table 2: Fragmented parses projected with the alignment filter. The sentences included in the data set 'bi+frags ≤3' are in boldface.

Table 3: Upper and lower bounds (UAS).

Table 5: Performance relative to dependency length. (a) Projected MaltParsers and (b) projected MSTParsers.

Table 6: UAS relative to sentence length. (a) Projected MaltParsers and (b) projected MSTParsers.

sent. length           <4     4-9    10-19  20-30  >30
a. trees (bi.)         73.87  62.13  65.67  60.81  55.18
   trees (fb.)         69.91  57.84  62.29  60.04  55.47
   bi+frags ≤3         74.14  54.40  56.62  54.07  48.95
   bi+frags ≤3 (fMalt) 73.51  65.69  71.70  68.49  63.71
   Alpino-Malt         81.98  69.81  81.11  82.82  76.02
b. trees (bi.)         76.67  70.16  73.09  69.56  63.57
   trees (fb.)         73.24  64.93  67.79  64.98  57.70
   bi+frags ≤3         77.48  59.65  55.96  55.27  52.74
   bi+frags ≤3 (fMST)  73.24  67.84  73.46  70.04  62.92
   Alpino-MST          81.98  72.24  85.10  83.86  78.51

Footnote 3: I.e., single headedness and acyclicity; we do not require the trees to be projective, but instead train pseudo-projective models (Nivre and Nilsson, 2005) on the projected data (cf. fn. 5).

Footnote 4: Fragments from fallback projection turned out not to be helpful as training data for dependency parsers.

Footnote 5: The MaltParsers use the projective Nivre arc-standard parsing algorithm. For SVM training, data are split on the coarse POS tag, with a threshold of 5,000 instances. MSTParser instances use the projective Eisner parsing algorithm, and first-order features. The input for both systems is projectivized using the head+path schema (Nivre and Nilsson, 2005).

Footnote 8: Significance testing (p<.01) was performed by means of the t-test on the results of 10 training cycles (Table 4c 'trees (fb.)' only 2 cycles due to time constraints). For the experiments in Table 4a and 4c, the cycles differed in terms of the order in which sentences were passed to the parser. In Table 4b we base significance on 10 true random samples for training.

Acknowledgments

The research reported in this paper has been supported by the German Research Foundation DFG as part of SFB 632 "Information structure" (project D4; PI: Kuhn).

References

Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proceedings of CoNLL-X, pages 149-164, New York City, June.

Stephen Clark and James R. Curran. 2006. Partial training for a lexicalized-grammar parser. In Proceedings of HLT-NAACL 2006, pages 144-151, New York, June.

Koby Crammer and Yoram Singer. 2003. Ultraconservative online algorithms for multiclass problems. Journal of Machine Learning Research, 3:951-991, January.
Rebecca Hwa, Philip Resnik, Amy Weinberg, Clara Cabezas, and Okan Kolak. 2005. Bootstrapping parsers via syntactic projection across parallel texts. Natural Language Engineering, 11(3):311-325.

Dan Klein and Christopher D. Manning. 2004. Corpus-based induction of syntactic structure: Models of dependency and constituency. In Proceedings of ACL 2004, pages 478-485, Barcelona, Spain.

Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of HLT-NAACL 2003, pages 127-133.

Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceedings of the MT Summit 2005.

Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple semi-supervised dependency parsing. In Proceedings of ACL-HLT 2008, pages 595-603, Columbus, Ohio, June.

Mitchell Marcus, Grace Kim, Mary Marcinkiewicz, Robert MacIntyre, Ann Bies, Mark Ferguson, Karen Katz, and Britta Schasberger. 1994. The Penn treebank: Annotating predicate argument structure. In ARPA Human Language Technology Workshop.

David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Proceedings of HLT-NAACL 2006, pages 152-159, New York, June.

Ryan McDonald and Joakim Nivre. 2007. Characterizing the errors of data-driven dependency parsing models. In Proceedings of EMNLP-CoNLL 2007, pages 122-131.

Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajič. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of HLT-EMNLP 2005.

Ryan McDonald, Kevin Lerman, and Fernando Pereira. 2006. Multilingual dependency analysis with a two-stage discriminative parser. In Proceedings of CoNLL-X.

Joakim Nivre and Ryan McDonald. 2008. Integrating graph-based and transition-based dependency parsers. In Proceedings of ACL-HLT 2008, pages 950-958, Columbus, Ohio, June.
Joakim Nivre and Jens Nilsson. 2005. Pseudo-projective dependency parsing. In Proceedings of ACL 2005, pages 99-106.

Joakim Nivre, Johan Hall, Jens Nilsson, Gülşen Eryiǧit, and Svetoslav Marinov. 2006. Labeled pseudo-projective dependency parsing with support vector machines. In Proceedings of CoNLL-X, pages 221-225.

Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.

Sebastian Padó and Mirella Lapata. 2006. Optimal constituent alignment with edge covers for semantic projection. In Proceedings of COLING/ACL 2006, Sydney, Australia.

Fernando Pereira and Yves Schabes. 1992. Inside-outside reestimation from partially bracketed corpora. In Proceedings of ACL 1992, pages 128-135.

Helmut Schmid. 1994. Probabilistic part-of-speech tagging using decision trees. In International Conference on New Methods in Language Processing, pages 44-49, Manchester, England.

Noah A. Smith and Jason Eisner. 2004. Annealing techniques for unsupervised statistical language learning. In Proceedings of ACL 2004, pages 487-494, Barcelona, July.

Noah A. Smith and Jason Eisner. 2005. Contrastive estimation: Training log-linear models on unlabeled data. In Proceedings of ACL 2005, pages 354-362, Ann Arbor, MI, June.

Kathrin Spreyer and Anette Frank. 2008. Projection-based acquisition of a temporal labeller. In Proceedings of IJCNLP 2008, Hyderabad, India, January.

Leonoor van der Beek, Gosse Bouma, Robert Malouf, and Gertjan van Noord. 2002. The Alpino dependency treebank. In Computational Linguistics in the Netherlands (CLIN).

Qin Iris Wang, Dale Schuurmans, and Dekang Lin. 2008. Semi-supervised convex training for dependency parsing. In Proceedings of ACL-HLT 2008, pages 532-540, Columbus, Ohio, June.
David Yarowsky, Grace Ngai, and Richard Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of HLT 2001.
16,700,314
MLSA - A Multi-layered Reference Corpus for German Sentiment Analysis
In this paper, we describe MLSA, a publicly available multi-layered reference corpus for German-language sentiment analysis. The construction of the corpus is based on the manual annotation of 270 German-language sentences considering three different layers of granularity. The sentence-layer annotation, as the most coarse-grained annotation, focuses on aspects of objectivity, subjectivity and the overall polarity of the respective sentences. Layer 2 is concerned with polarity on the word- and phrase-level, annotating both subjective and factual language. The annotations on Layer 3 focus on the expression-level, denoting frames of private states such as objective and direct speech events. These three layers and their respective annotations are intended to be fully independent of each other. At the same time, exploring for and discovering interactions that may exist between different layers should also be possible. The reliability of the respective annotations was assessed using the average pairwise agreement and Fleiss' multi-rater measures. We believe that MLSA is a beneficial resource for sentiment analysis research, algorithms and applications that focus on the German language.
[ 3264224, 1026757, 9405068, 7105713, 15307540 ]
MLSA - A Multi-layered Reference Corpus for German Sentiment Analysis

Simon Clematide*, Stefan Gindl Υ (stefan.gindl@modul.ac.at), Manfred Klenner*, Stefanos Petrakis* (petrakis@cl.uzh.ch), Robert Remus +, Josef Ruppenhofer ψ (josef.ruppenhofer@uni-hildesheim.de), Ulli Waltinger φ (uwalting@techfak.uni-bielefeld.de), Michael Wiegand π (michael.wiegand@lsv.uni-saarland.de)

* Institute of Computational Linguistics, University of Zürich
Υ Department of New Media Technology, MODUL University Vienna
+ Natural Language Processing Group, Department of Computer Science, University of Leipzig
ψ University of Hildesheim
φ Artificial Intelligence Group, University of Bielefeld
π Spoken Language Systems, Saarland University

Keywords: Sentiment Analysis, Emotion detection, Lexical resource
Introduction

Sentiment analysis is a highly active research area that embraces not only work on the identification of opinions, emotions and appraisals, but also the construction of corpora and dictionaries. While various approaches and resources have been proposed for polarity or subjectivity classification for English (e.g. Pang et al., 2002), relatively few benchmark collections and corpora that focus on German have been made available. Moreover, with respect to existing work on corpora for sentiment analysis and opinion mining, most approaches have focused on user-rated product reviews at document-level, even though multiple opinions and factual information may be found within single sentences.

In this paper, we present MLSA, the result of a European research collaboration that aims to provide a publicly available 1 multi-layered reference corpus for sentiment analysis in German. The compilation of the MLSA corpus is based on manual annotation at different layers of granularity (cf. Figure 1) using a set of 270 sentences. Within Layer 1, each sentence has been analyzed according to the notions of subjectivity/objectivity and their polarity, i.e. positive, negative or neutral. On Layer 2, the word- and phrase-level has been targeted, focusing on aspects of subjective and factual language. Layer 3 covers annotations on the expression-level, using the notions of private state and speech. Included in its annotations are the sources and targets of opinions. Each layer has been annotated by multiple raters, and the annotations' quality has been assessed by two different inter-annotator agreement measures.

Footnote 1: The corpus is publicly available: http://synergy.sentimental.li/Downloads

The rest of the paper is structured as follows: in Section 2 we present related work. Section 3 describes the multi-layered reference corpus for German-language sentiment analysis and provides an overview of the data representation and the annotation schemata applied. Section 4 presents the assessment of the inter-annotator agreement and, finally, Section 5 concludes this paper.

Related Work

A plethora of sentiment-related corpora is available for English. Whereas earlier work strongly focuses on coarse-grained classification tasks, such as document-level polarity classification (Pang et al., 2002), there has lately been a shift of attention towards more fine-grained tasks dealing with polarity and subjectivity on sentence-level, phrase-level or even expression-level.
Though for the former labeled data can be automatically generated (Pang and Lee, 2005; Blitzer et al., 2007), for instance by deriving the polarity from user ratings in product reviews, the latter requires manual annotation (Toprak et al., 2010). The increasing significance of sentiment analysis in natural language processing is also reflected by two benchmark tasks: TAC Opinion Question Answering (Dang, 2009) and the NTCIR Multilingual Opinion Annotation Task (Seki et al., 2010), which provide text collections for their respective tasks as well. Comparing the availability of English-language resources with the few corpora that are currently available for German (e.g. Remus and Hänig (2011)), the need for further resources becomes obvious.

A Multi-layered Reference Corpus for German Sentiment Analysis

For the construction of the multi-layered MLSA reference corpus, we used a set of sentences extracted from the DeWaC Corpus (Baroni et al., 2009). The DeWaC Corpus is a collection of German-language documents of various genres obtained from the web. DeWaC does include, but does not exclusively consist of, opinionated, expressive or polarity-specific language. Its main properties are its generic nature, its sheer size and its concurrent representation of language used on the web. In order to sample sentences that better suit our research goals, we extracted those where negation of, intensification of, as well as contrasts between polar words were detected. Using such simple heuristics allowed for constructing a dataset that was sufficiently biased, up to a certain degree, towards "sentimentality", while still being generic enough. This detection was based on the polarity lexicon of Clematide and Klenner (2010) and resulted in a set of 270 sentences. Consequently, all sentences were manually annotated at three layers of granularity; we now describe each annotation layer in detail.

Layer 1: Sentence-level Annotations

Sentence-layer annotation is the most coarse-grained annotation in the corpus. We adhere to the definitions of objectivity and subjectivity introduced in (Wiebe et al., 2005). Additionally, we followed guidelines drawn from Balahur and Steinberger (2009). Their clarifications proved to be quite effective, raising inter-annotator agreement in a sentence-layer polarity annotation task from about 50% to more than 80%. All sentences were annotated with respect to two dimensions, subjectivity and polarity (cf. Tables 1 and 2). Subjectivity covers the existence of an actual attitude within a statement. Statements with purely informative content and without an explicit attitude are considered objective, whereas statements with affective content are subjective. Factuality has two possible values, objective vs. subjective. The second dimension is the polarity of a statement. Negative polarity is equal to negative sentiment, positive polarity denotes positive sentiment, and neutral polarity denotes either the lack of explicit sentiment or ambiguity within the sentence. An example of a subjective sentence with negative polarity is:

(1) "Das Schlimmste aber war eine mir unerklärliche starke innere Unruhe und das gleichzeitige Unvermögen, mich normal zu bewegen." ["But the worst thing was an inexplicable severe inner restlessness and the concomitant inability to move normally."]

The sentence does not contain any obvious factual information, but only expresses the inner state of a person.
An example of an objective sentence without any overt polarity is:

(2) "Die Bewegung der extrem detaillierten Raumschiffe basiert auf realen physikalischen Gesetzen." ["The movement of the extremely detailed spaceships is based on real physical laws."]

Non-neutral polarity can also be assigned to an objective sentence. This sounds like an oxymoron at first, but it becomes obvious with an example:

(3) "Die Folge war hohe Arbeitslosigkeit im Textilgewerbe, das hauptsächlich für den Export produzierte." ["The result was high unemployment in the textile industry, which mainly produced for export."]

From a factuality point of view the sentence is objective, since it simply expresses a statement concerning the "high unemployment in the textile industry". However, high unemployment is a problem for a society, rendering its existence a negative matter of fact (provided one does not argue from an industrialist's point of view, where high unemployment decreases production costs). Thus, an objective sentence might also contain a piece of information causing a negative/positive emotional response in a reader.

The different layers of MLSA are not synchronized, i.e. the annotations on one layer cannot be used to derive annotations on a different layer. MLSA contains sentences where the simple aggregation of phrase-layer polarity assessments would deliver results different from the sentence-layer assessment:

(4) "Wenn du nicht in die Hölle willst, dann sei demütig und ertrage auch die schlimmste Folter ohne Hass auf deine Peiniger, denn es ist letztlich nur um deiner Seele Willen, sie vor der Hölle zu bewahren." ["If you are not willing to go to hell, then be humble and endure the worst torture without hatred for your tormentors, because ultimately it is only to save your soul from hell."]

The phrase-level annotation lists four negative phrases in total, with only one positive phrase ("without hatred for your tormentors"; the negative phrase "for your tormentors" is embedded in the positive phrase). Such an annotation would suggest a negative annotation on the sentence-level as well. However, only one of the three sentence-level annotators assigned a negative label to this sentence. The same is true for the following sentence:

(5) "Sie liefert Meldungen über das politische Ortsgeschehen, interessante Bräuche und kulturelle Veranstaltungen oder greift ernste, soziale, kirchliche, lustige oder kuriose Themen auf." ["It provides news about local political events, interesting traditions and cultural events, or takes up serious, social, religious, funny or curious issues."]

Although consisting of only positive phrases, this sentence gets an exclusively neutral assessment on the sentence-level. These "inconsistencies" show the difficulties arising when creating a corpus for sentiment analysis. Annotations from one level cannot be easily transferred or summed up to be used on another level. However, these inconsistencies also emphasize the relevance of MLSA. The annotations on all three levels were done independently, which guarantees that there are no distortions introduced by a transfer from one level to the other. Researchers interested in different aspects of sentiment analysis will find different aspects of the corpus useful. Moreover, it also allows for holistic approaches, which have inter-dependencies between different layers as an explicit goal.
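Returning to the selection heuristic described at the beginning of this section, a rough sketch of how such a filter could look; the tiny word lists merely stand in for the polarity lexicon of Clematide and Klenner (2010), and the names and rules are our own simplification, not the authors' implementation.

```python
# Keep sentences showing negation/intensification of, or contrast between,
# polar words (toy lexicon, window-of-one adjacency as a crude stand-in).
POLAR = {"gut": "+", "schlecht": "-", "angst": "-", "hass": "-"}
SHIFTERS = {"nicht", "kein", "keine", "ohne"}
INTENSIFIERS = {"sehr", "extrem", "völlig"}
CONTRAST = {"aber", "trotz", "dennoch"}

def is_candidate(sentence):
    tokens = sentence.lower().split()
    polar_idx = [i for i, t in enumerate(tokens) if t in POLAR]
    if not polar_idx:
        return False
    # negation or intensification: a modifier directly before a polar word
    for i in polar_idx:
        if i > 0 and tokens[i - 1] in SHIFTERS | INTENSIFIERS:
            return True
    # contrast: a contrastive marker plus polar words of both polarities
    polarities = {POLAR[tokens[i]] for i in polar_idx}
    return bool(CONTRAST & set(tokens)) and polarities == {"+", "-"}

print(is_candidate("keine angst vor dem phantom"))  # True: shifter + polar word
```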
Layer 2: Word- and Phrase-level Annotations

On Layer 2, we are concerned with polarity on the word- and phrase-level (specifically nominal phrases (NPs) and prepositional phrases (PPs)), annotating both subjective and factual language. We exploit the syntactic structure of these phrases and annotate their polarity following the interaction between their structural elements. This is a major difference compared to existing annotation efforts and is driven by what we see as the need for an annotation that is based on the syntactic structure of the textual unit at hand, which in turn could lead to an explicit compositional treatment of the polarity of complex phrases, i.e. a system that learns how to determine the polarity of a complex phrase based on its parts.

We segment NPs and PPs according to the TIGER guidelines (Brants and Hansen, 2002). Relative clauses and adjective phrase boundaries are not yet marked up as this paper is written. On the phrase-level the following polarity tags are used: + for positive, - for negative, 0 for neutral polarity and # for bipolar phrases. Moreover, phrase borders are indicated by square brackets, and the respective polarities are attached to the closing brackets. On the word-level three additional tags are used: % for diminishers (low), ∧ for intensifiers (high) and ∼ for shifters (inversion). We apply manual word-sense disambiguation as we consider word polarities to be context-dependent, e.g. "menschlich" in "menschliche+ Geste" (human gesture) compared to "menschlicher0 Körper" (human body). We exclusively focus on annotating phrases where, via compositionality, the sentiment of a phrase could be derived from the sentiment of its constituents, either words or phrases. Because of our focus, we only annotate phrases which contain polarized constituents.

An example of our annotation scheme which exhibits the compositional aspects of sentiment is the following:

(6) "ohne Hass auf deine Peiniger" ["without hatred for your torturers"]

We start from the word-level, assigning the appropriate polarity tags where applicable, and get:

(7) "ohne∼ Hass- auf deine Peiniger-"

We then segment the phrase into NPs and PPs, and assign polarity to the segments:

(8) "[ohne∼ Hass- [auf deine Peiniger-]-]+"

Finally, the overall polarity is assigned, which in this case is positive. Another example, following the exact same steps, takes as input the phrase:

(9) "keine Angst vor dem schrecklichen Phantom" ["no fear of the horrible phantom"]

and outputs the following annotation with an overall positive polarity:

(10) "[keine∼ Angst- [vor dem schrecklichen Phantom-]-]+"
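A minimal sketch of how such bottom-up composition could be operationalised; the nested-list encoding and the composition rules (a shifter inverts the polarity collected within its phrase, the last polar constituent otherwise wins) are our own simplification of the annotation scheme, not the annotation procedure itself.

```python
INVERT = {"+": "-", "-": "+", "0": "0"}

def compose(node):
    """node: a (word, tag) leaf with tag in {+, -, 0, ~, ^, %},
    or a list of nodes forming a phrase; returns the phrase polarity."""
    if isinstance(node, tuple):
        return node[1]
    polarity, shift = "0", False
    for child in node:
        tag = compose(child)
        if tag == "~":           # shifter found inside this phrase
            shift = True
        elif tag in {"+", "-"}:  # polar word or polar sub-phrase
            polarity = tag
    return INVERT[polarity] if shift else polarity

# Example (8): [ohne~ Hass- [auf deine Peiniger-]-]+
phrase = [("ohne", "~"), ("Hass", "-"),
          [("auf", "0"), ("deine", "0"), ("Peiniger", "-")]]
print(compose(phrase))  # '+'
```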
Table 3 provides some descriptive statistics regarding the annotations produced on Layer 2. The Top Phrases column contains the counts for phrases that stand directly below the sentence-level, i.e. if such a phrase were to be composed into a higher-level textual unit, that unit would be the sentence at hand. In a similar way, the All Phrases column contains the counts for all possible phrases below the sentence-level that have been annotated with polarity, including the top phrases. As a first general remark, we can observe a slight tendency towards negativity in our dataset, both on word- and phrase-level, while neutrality is observed seldom. Secondly, we can see that primary examples of compositionality, like the intensification and shifting phenomena, also have a significant presence in our dataset. Finally, coming back to neutrality, although it was observed less frequently, we can see how a number of phrases have in fact been assigned an overall neutral polarity although they contain polar words and/or phrases. For example the phrase:

(11) "Trotz dieser erheblichen Steigerung der absoluten Zahlen" ["Despite this considerable increase of absolute numbers"]

is assigned an overall neutral polarity despite the presence of shifters and positive words:

(12) "[Trotz∼ dieser erheblichen+ Steigerung+ der absoluten Zahlen]"

which provides us with an example where compositionality does not always break through to the top level. In other words, a phrase's overall polarity will not necessarily always be positive, negative or bipolar, although it contains polarized constituents.

Layer 3: Expression-level Annotations

The annotation scheme of Layer 3 adheres to the main concepts of the expression-level annotation of the MPQA corpus (Wiebe et al., 2005). This type of annotation is important for building systems for sentiment-related information extraction tasks, such as opinion summarization or opinion question answering (Stoyanov et al., 2005; Stoyanov and Cardie, 2011). In those tasks, the sentiment towards a specific entity, e.g. a person, an organization or a commercial product, is to be extracted. Sentiment annotation on the sentence-level (Layer 1) or on complex phrases (Layer 2) is less helpful for such applications.

We annotate lexical units denoting frames of private states, i.e. states that are not open to observation and verification, and their corresponding frame elements. We distinguish between three types: Objective Speech Events (OSEs), such as sentence (13), Direct Speech Events (DSEs), such as sentence (14), and Explicit Subjective Expressions (ESEs), such as sentence (15). The latter are used by speakers to express their frustration, wonder, positive sentiment, mirth, etc., without explicitly stating that they are frustrated, etc.

Each frame can be assigned optional frame flags. The flag inventory consists of the prior polarity of a frame (i.e. positive, negative, or both) and a label denoting backgrounded sentiment. Lexical units conveying such a sentiment entail sentiment information, but their primary meaning conveys something else. For example, the verb "ermorden" ("to murder") means "to kill another being", but this usually entails that the perpetrator has a negative sentiment towards its victim.

Typical frame elements are the source and the target of a frame, modulation (i.e. diminishers and intensifiers) and operator, by which context modification such as negation or modal embedding is captured. Another element called polarity denotes markers that indicate the polarity towards the target. Note that this is different from the polarity frame flag, which indicates the prior polarity of the lexical unit evoking the pertaining frame. For example, the verb "criticize" evokes a DSE with a negative polarity frame flag. The noun "Kampagne" ("campaign"), by contrast, evokes a DSE without a polarity flag, since "Kampagne" is underspecified for polarity towards its target. Its source can, in principle, have either positive or negative polarity towards the target. Prepositional markers that appear on the dependents of such a predicate, for example "für/gegen" ("for/against") in "Kampagne für/gegen höhere Steuern" ("campaign for/against higher taxes"), are considered a marker indicating the contextual polarity towards the target (as it has not been specified by the target itself).
Those markers are assigned the polarity frame element. An annotated example illustrating the source, operator, modulation and target elements is given in (16):

(16) "[Peter]source [schimpft]DSE [nicht]operator [viel]modulation [über das Wetter]target." ["[Peter]source does [not]operator [complain]DSE [much]modulation [about the weather]target."]

Some important descriptive statistics of the annotations on Layer 3 are given in Tables 4 and 5, which represent the counts for each individual annotator as well as of the adjudicated version. As can be seen from Table 4, we have very few instances of OSEs in our data. One important reason for this is that, unlike in the MPQA, we did not annotate frames for the top-level writer's speech event, because it is always unexpressed and there is no syntactic predicate for us to target. As Table 5 shows, we have far fewer Source elements annotated than we do Targets. This has two reasons. First, the former often correspond to the implicit writer of the text and thus are not available for annotation. Second, we have a relatively high number of ESEs among the subjective frame types: ESEs by definition cannot realize Sources as syntactic dependents. Another interesting observation (not spelled out in either table) is that specifications of Polarity, though rare overall, are more common with DSEs: only two cases occur with ESEs. The most common type of Polarity element is an adjective such as positive or negative modifying a noun DSE, as in "negative Reaktionen der Mitmenschen" ("negative reactions by others").

Table 4: Major annotation frame types in Layer 3.

Table 5: Major frame label categories in Layer 3.

             Merged  Annotator 1  Annotator 2
Source       261     254          249
Target       1124    1053         1074
Operator     60      54           58
Modulation   160     147          155
Polarity     23      23           18
Support      130     126          127

Inter-annotator Agreements

In order to measure the reliability of our annotations, we computed inter-annotator agreements by means of two measures for all layers: average pairwise agreement and Fleiss' (1981) multi-rater kappa. Calculations are based on all sentences for Layer 1 and on a 30-sentence test set for Layer 2 and Layer 3 (cf. Table 6). On all three layers we reached at least "substantial agreement"; for phrase-level polarity and expression-level polarity even "almost perfect agreement" (Landis and Koch, 1977).
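For reference, a minimal sketch of Fleiss' multi-rater kappa; the toy rating table is invented and the implementation is ours, not the one used to obtain the reported scores.

```python
# table[i][j]: number of raters assigning item i to category j.
def fleiss_kappa(table):
    n_items = len(table)
    n_raters = sum(table[0])
    # mean observed per-item agreement
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in table
    ) / n_items
    # chance agreement from the marginal category proportions
    totals = [sum(row[j] for row in table) for j in range(len(table[0]))]
    p_e = sum((t / (n_items * n_raters)) ** 2 for t in totals)
    return (p_bar - p_e) / (1 - p_e)

# three raters labelling four items as positive/negative/neutral (toy data)
print(round(fleiss_kappa([[3, 0, 0], [0, 3, 0], [2, 1, 0], [1, 1, 1]]), 3))
```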
Conclusions

In this paper, we described the creation of MLSA, a multi-layered reference corpus for German sentiment analysis. The corpus contains sentences annotated on sentence-level, word- and phrase-level and expression-level. Due to its multiple layers, it is applicable to various sentiment analysis approaches. Used as a gold standard, such a corpus facilitates comparability and reproducibility. Moreover, it frees researchers from the burden of collecting and annotating data themselves. Thus, we believe that establishing our corpus as a standard resource in German-language sentiment analysis will be beneficial for the research field.

Figure 1: Excerpt from the multi-layered annotation.

Table 2: Distribution of the positive, negative and neutral tags annotators reached a consensus on in Layer 1.

Acknowledgements

We gratefully acknowledge financial support of the German Research Foundation (DFG) through the EC 277 Cognitive Interaction Technology at Bielefeld University, the German Federal Ministry of Education and Research (BMBF) under grant no. "01IC10S01", and of the Swiss National Science Foundation (grant 100015 122546/1).

References

A. Balahur and R. Steinberger. 2009. Rethinking sentiment analysis in the news: from theory to practice and back. In Proceedings of the 1st Workshop on Opinion Mining and Sentiment Analysis (WOMSA).

Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The wacky wide web: A collection of very large linguistically processed web-crawled corpora. Language Resources and Evaluation, 43(3):209-226.

John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL), pages 440-447.

Sabine Brants and Silvia Hansen. 2002. Developments in the TIGER annotation scheme and their realization in the corpus. In Proceedings of the Third Conference on Language Resources and Evaluation, pages 1643-1649, Las Palmas.

Simon Clematide and Manfred Klenner. 2010. Evaluation and extension of a polarity lexicon for German. In Proceedings of the First Workshop on Computational Approaches to Subjectivity and Sentiment Analysis, pages 7-13.

Hoa Trang Dang. 2009. Overview of the TAC 2008 Opinion Question Answering and Summarization Tasks. In Proceedings of the Text Analysis Conference (TAC), Gaithersburg, MD, USA.

Joseph L. Fleiss. 1981. Statistical Methods for Rates and Proportions. Wiley series in probability and mathematical statistics. John Wiley & Sons, New York, second edition.

J. Richard Landis and Gary G. Koch. 1977. The Measurement of Observer Agreement for Categorical Data. Biometrics, 33(1):159-174.
Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 115-124, Ann Arbor, Michigan, June. Association for Computational Linguistics.

Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, pages 79-86. Association for Computational Linguistics, July.

Robert Remus and Christian Hänig. 2011. Towards Well-grounded Phrase-level Polarity Analysis. In Proceedings of the 12th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing), number 6608 in LNCS, pages 380-392. Springer.

Yohei Seki, Lun-Wei Ku, Le Sun, Hsin-Hsi Chen, and Noriko Kando. 2010. Overview of Multilingual Opinion Analysis Task at NTCIR-8: A Step Toward Cross Lingual Opinion Analysis. In Proceedings of NTCIR-8 Workshop Meeting.

Veselin Stoyanov and Claire Cardie. 2011. Automatically creating general-purpose opinion summaries from text. In Proceedings of the International Conference Recent Advances in Natural Language Processing 2011, pages 202-209, Hissar, Bulgaria, September. RANLP 2011 Organising Committee.

Veselin Stoyanov, Claire Cardie, and Janyce Wiebe. 2005. Multi-perspective question answering using the OpQA corpus. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 923-930, Vancouver, British Columbia, Canada, October. Association for Computational Linguistics.
Cigdem Toprak, Niklas Jakob, and Iryna Gurevych. 2010. Sentence and expression level annotation of opinions in user-generated discourse. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 575-584, Uppsala, Sweden, July. Association for Computational Linguistics.

Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation, 39(2/3):164-210.

Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 347-354, Vancouver, British Columbia, Canada, October. Association for Computational Linguistics.
74,975
Deletion of Dimensions of Textual Similarity for the Exploration of Collections of Aviation Incident Reports
In this paper we study the relationship between external classification and textual similarity in collections of incident reports. Our goal is to complement the existing classification-based analysis strategies by automatically establishing similarity links between documents in such a way that they do not reflect the dominant organisation of the classification schemas. In order to discover such transversal dimensions of similarity, we compute association scores between terms and classes and exclude the most correlated terms from the similarity calculation. We demonstrate on a 500-document corpus that by using this method, we can isolate topics that would otherwise have been masked by the dominant dimensions of similarity in the collection.

Keywords: textual similarity, document classification, specialized corpus.
[ 232021662 ]
Deletion of Dimensions of Textual Similarity for the Exploration of Collections of Aviation Incident Reports

Actes de la conférence conjointe JEP-TALN-RECITAL 2012, Grenoble. TALN, 2012.

Tulechki Nikola 1,2, Tanguy Ludovic 1
(1) CLLE-ERSS: CNRS et Université de Toulouse 2, 5 allées Antonio Machado, 31058 Toulouse CEDEX 9
(2) Conseil en Facteurs Humains, 4 impasse Montcabrier, 31500 Toulouse
{tanguy,tulechki}@univ-tlse2.fr

Abstract

This article studies the link between textual similarity and an extrinsic classification in collections of aviation incident reports. We seek to complement the existing strategies for analyzing these collections by automatically establishing similarity links between the documents in such a way that they do not reflect the organization of the coding schemas used for their classification. In order to bring out the dimensions of variation that are transversal to the classification, we compute a dependency score between terms and classes and exclude from the similarity computation the terms most correlated with a given class. We show, through an application on 500 documents, that this method does make it possible to bring out themes that would otherwise have gone unnoticed because of the excessive salience of the high-level similarities.

Introduction and application context

In any high-risk industry, experience feedback (REX, "retour d'expérience") plays a crucial role in safety management mechanisms. Policies for collecting, analyzing and storing information are put in place in order to keep a record of any event that deviates from the norm, and of any incident or accident that occurs during operations. The information gathered in this way then serves as support for safety experts when updating operating rules and procedures, adapting them to a constantly evolving context.

Text and coding of aviation incident reports

The object of our study is a particular subset of REX, the reports of the Aircraft Safety Report (ASR) type collected by the safety department of the airline Air France. ASRs are relatively short texts (105 words on average), written by the pilots themselves immediately or shortly after an incident has occurred, describing it in free text.
When they are submitted, these reports are entered into the company's database and enriched with a number of factual details, such as the aircraft model, the weather conditions, the location, or the weight of the aircraft on the day of the incident. The reports then undergo a first analysis aimed at "coding" the event according to a pre-established schema. A coding schema is an abstraction of an accident scenario, composed of several taxonomies of codes relating to different aspects of an accident. In practice, the expert in charge of coding must describe the event using a few hundred codes, drawn from closed lists (see (Ponvert, 2009) for details of the design and deployment of Air France's current coding schema). Once coded, the reports are stored in the database and can be queried on both the factual information and the coding. An expert can thus, for example, retrieve from the base the set of incidents in which the weather radar failed, on a Boeing 747, while the aircraft was in initial climb.
Limits of coding
In hindsight, the coding process can be seen as an effort to master the inherent variation of the reports in order to reach a level of abstraction stable enough for computerised exploitation of a REX base. Without going into detail, this effort necessarily comes with an impoverishment of the informational content directly accessible to the experts. Reducing a text to a predetermined skeleton keeps only the most salient elements of the event, at the expense of subtleties which, although present in the original text, find no place in the coding. Another limit of these strategies is their intrinsically reactive character. A coding schema is a representation of reality frozen at a precise moment, whereas the reality it reflects is in perpetual evolution. Any major change in context must be reflected in the schema, which requires considerable effort and takes up precious expert time, during which a newly emerged risk may find itself without an associated code.
Applicative goals
Aware of the limits of coding-based REX analysis strategies, our objective is to design techniques and tools that complement these strategies and allow the experts to explore collections of reports according to the particularities of their textual content and their chronological distribution. Freed from the rigidity of coding, these tools should ideally be able to alert their users to particular configurations of events, to emerging trends, or to abnormal events (Tulechki, 2011).
Textual similarity
As a first step, we sought to relate texts according to their content using classical information retrieval (IR) methods: cosine similarity (Salton et al., 1975), a measure of lexical overlap that assigns a score between 0 and 1 to each pair of documents in the collection. A score of 0 indicates an absence of common terms and a score of 1 a complete identity of the lexical content of the two texts. The score is obtained by computing the cosine between two vectors in an n-dimensional space whose dimensions correspond to the terms present in the collection.
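To make the computation concrete, here is a minimal sketch in Python of TF-IDF-weighted cosine similarity over sparse term vectors; the helper names and the toy tokenized reports are ours for illustration, not part of the authors' tool.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build one sparse TF-IDF vector (term -> weight) per tokenized document."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))                      # document frequency of each term
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: tf * idf[t] for t, tf in Counter(doc).items()} for doc in docs]

def cosine(u, v):
    """Cosine of the angle between two sparse vectors; 0 if either is empty."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = math.sqrt(sum(w * w for w in u.values())) * \
           math.sqrt(sum(w * w for w in v.values()))
    return dot / norm if norm else 0.0

# Toy reports standing in for lemmatized ASR texts.
reports = [
    "choc aviaire au decollage".split(),
    "turbulences au decollage".split(),
    "choc aviaire a l atterrissage".split(),
]
vectors = tfidf_vectors(reports)
print(cosine(vectors[0], vectors[1]), cosine(vectors[0], vectors[2]))
```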
Beyond its immediate use in search-engine-like applications, this computation automatically superimposes a layer of structure on a collection: it turns a symbolic, qualitative material into numerical data and opens the way to further processing such as unsupervised learning (Steinbach et al., 2000) or anomaly detection (Chandola et al., 2009), to cite only a few. At this stage, however, we preferred first to evaluate the contribution of textual similarity per se, by developing a tool based on this computation and submitting it to the safety experts.
Figure 1: Similarity of incident reports on a chronological axis.
The timePlot tool shown in Figure 1 allows the user, starting from a pivot report, to visualise similar reports and their distribution over time. The reports are presented on an interactive chart that gives direct access to their content. Various chronological configurations may appear, such as the peak of reports associated with the volcanic eruption of spring 2010, or seasonal phenomena such as snow-related incidents, which naturally appear in periods of severe cold.
Limits of similarity
The textual-similarity approach was well received by the experts, who appreciated its intuitiveness and its potential for relating reports whose link is in no way reflected by a shared coding. The simplified usage logic and the intuitive interface, designed from the outset as a support for exploring the collection without preconceptions, also contributed to the validation of this approach by its users. Nevertheless, we quickly noticed that the similarity computation, in our specific context and given the particularities of the textual material at hand, suffered from an obvious lack of finesse. Since all the texts come from a very circumscribed domain, that of civil aviation, they are all more or less similar, most of them speaking of "aircraft", "pilots" and "flights". This common lexical background is partly handled by weighting techniques such as TF/IDF (Spärck-Jones, 2004), but given the inherent lexical variation of the domain, notably the multitude of terms used by the writers to designate the same object (for the pilot alone, besides "pilote" one finds domain-specific terms and acronyms such as "cdb" (commandant de bord), "opl" (officier pilote de ligne), "copilote", "copi", "pf" (pilot flying) and "pnf" (pilot not flying)), matches are made without designating similarity factors that are relevant for an analysis.
Another limit, directly tied to the applicative aim of our work, also appears: there is a strong overlap between the coding of the reports and the groupings brought out by the similarity computation. A link between the terms of the texts and their coding exists in the corpus. It is clear that in reports about bird strikes (aircraft very frequently hit birds), one finds terms such as "oiseau", "mouette" and "aviaire", and a match based on these terms will more or less rediscover the bird-strike category, which is already made explicit in the coding. In other words, textual similarity tends to rediscover the most salient dimensions of the corpus, dimensions that are for the most part already well identified and reflected in the coding schemas. Moreover, an automatic classification system for these data can already propose codes to the experts on the basis of the textual content (Hermann et al., 2008). One of our objectives, however, is precisely to look for more subtle common factors, capable of relating incidents on criteria that differ from, and cut across, the coding.
One current line of research in IR is to group search-engine results by topic using clustering methods, in order to bring out the different themes present in the result list. A query such as "japon", for example, may return documents about tourism in Japan and about Japanese gastronomy (Navarro et al., 2011). Based on the similarities between these documents, an unsupervised learning system then groups the results into two clusters and lets the user focus the search on the subset of interest. These methods, while refining and organising the results, cannot yet handle collections whose themes vary simultaneously along several dimensions.
Results about Japan may concern different locations ("Tokyo" and "Osaka", for example) without any location being particularly associated with one of the themes. A clustering system will struggle to isolate these two dimensions of variation and to propose a partition of the results according to both criteria (theme and location) simultaneously. Work is under way to develop efficient methods of overlapping clustering, i.e. methods able to assign the same document to several classes according to different grouping criteria, notably in response to the one-dimensionality of current techniques. Primarily oriented towards use in a "classical" search engine and on large collections of heterogeneous texts, these methods do not assume any a priori organisation of the collection.
Dimensions of textual similarity
Textual similarity, as it is computed, represents any kinship that may exist between two texts on a single dimension, without taking into account the multiple factors that may contribute to that kinship. In a specialised corpus such as a base of incident reports, by contrast, the coding schemas aim precisely at organising the collection in a way that is relevant to its context of use, while integrating the heterogeneity of the similarity factors. Let us illustrate this with the following three examples, which we constructed from real texts with the following titles:
1) Bird strike on take-off. ("Choc aviaire au décollage.")
2) Turbulence on take-off. ("Turbulences au décollage.")
3) Bird strike on landing. ("Choc aviaire à l'atterrissage.")
Between these three texts, a comparable similarity score will be computed between 1) and 2) and between 1) and 3). Yet the reasons for the match differ in the two cases: 1) and 3) deal with the same type of incident, whereas 1) and 2) share the circumstances in which different incidents occurred. Both aspects are taken into account in the coding schema, through the fields "incident type" and "flight phase". The "incident type" for 1) and 3) will be bird strike, and turbulence for 2). The "flight phase" will be take-off for 1) and 2), and landing for 3). We will therefore look closely at how to bring out the link between the coding of the reports and their textual content.
Link between coding and content
We have already seen that certain terms of the texts are strongly tied to certain classes of the coding schema, and that these same terms cause textual similarity to rediscover, more often than not, the classes of the schema. To study this link, we built a test corpus of reports dealing with bird strikes and with turbulence, occurring either on landing or on take-off, so as to obtain a balanced collection that we know varies along two dimensions, the flight phase and the incident type. The corpus consists of 482 reports, selected on the basis of the coding of their incident-type and flight-phase fields:
            Turbulence   Bird strike   Total
Landing     118          133           251
Take-off    107          124           231
Total       225          257           482
A first test consisted in measuring the degree of overlap between similarity and categorisation in the corpus. For each document, we automatically selected the 30 most similar documents and, for each of them, tested whether it shares the same value for the incident-type and flight-phase fields. On average, 89% of the documents share the category and 75% share the flight phase, whereas if no link existed between coding and similarity we would expect these values to be close to 50%. To quantify the association between terms and classes, we compute mutual information (MI) scores (see (Manning et al., 2008, Section 13.5.1) for the algorithm used; the book is available online at http://nlp.stanford.edu/IR-book/). In IR, this technique allows one, for a categorised collection, to reduce the term space by selecting only those terms that are statistically correlated with a given class. MI is also commonly used in automatic classification. Given a term t and a class C, the higher the mutual information MI(t,C), the better t predicts C.
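The following sketch, under the same assumptions as the previous one (tokenized documents, function names of our own choosing), computes the term-class mutual information from binary document counts as in Manning et al. (2008, Section 13.5.1) and returns the terms to exclude.

```python
import math
from collections import Counter

def mutual_information(docs, labels, cls):
    """MI(t, C) for each term t and one class C, from binary document counts
    (does the document contain t? does it belong to C?)."""
    n = len(docs)
    n_c = sum(1 for lab in labels if lab == cls)
    df_in, df_out = Counter(), Counter()
    for doc, lab in zip(docs, labels):
        for t in set(doc):
            (df_in if lab == cls else df_out)[t] += 1
    scores = {}
    for t in set(df_in) | set(df_out):
        n11, n10 = df_in[t], df_out[t]          # docs with t, in/out of C
        n01, n00 = n_c - n11, (n - n_c) - n10   # docs without t, in/out of C
        mi = 0.0
        for nij, ni, nj in ((n11, n11 + n10, n11 + n01),
                            (n10, n11 + n10, n10 + n00),
                            (n01, n01 + n00, n11 + n01),
                            (n00, n01 + n00, n10 + n00)):
            if nij:
                mi += (nij / n) * math.log2(n * nij / (ni * nj))
        scores[t] = mi
    return scores

def terms_to_exclude(docs, labels, classes, k=50):
    """Union of the k terms most correlated with each class."""
    excluded = set()
    for c in classes:
        mi = mutual_information(docs, labels, c)
        excluded |= set(sorted(mi, key=mi.get, reverse=True)[:k])
    return excluded
```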
Deleting the principal dimensions
In IR, the underlying hypothesis justifying this procedure is that the terms which best describe the variation captured by a particular human-built classification are typically also the ones that perform best for indexing the same collection. Our objective is exactly the opposite: we exclude these terms from the similarity computation, so that it no longer reflects the organisation already present in the coding schema. We computed the MI between all the terms of a corpus of 4450 reports and the 4 classes we isolated. Here are the 5 most correlated terms per category (several are English terms, which are very commonly used in these texts even though they are written in French; "t/o" stands for take-off, "windshear" for cisaillement du vent, and "vr" for vitesse de rotation, the rotation speed):
     Turbulence    Bird strike   Landing         Take-off
1    vent          aviaire       approche        décollage
2    turbulence    collision     finale          poussée
3    gaz           oiseau        atterrissage    rotation
4    arrière       impact        stabilisation   t/o
5    windshear     bird          arrondir        vr
We again measured the mean overlap (MR) between similarity and coding, this time excluding either the 50 terms most associated with the two flight phases (phVol) or the 50 terms most associated with the event types (typEve). One can see that the overlap between textual similarity and a given dimension varies according to the filtering of the terms associated with that same dimension, whereas the overlap on the other dimension is less affected. After filtering, we find on average 9.8 and 13.6 new documents respectively in the list of the 30 first results (these values are means over the 482 documents), which attests to the effect of the filtering on the ranking of the results. Concretely, this means that for a report dealing with turbulence on take-off, filtering the terms associated with the flight phases will favour reports dealing with turbulence, whereas filtering the terms associated with the event type will favour reports dealing with events that occurred during take-off.
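A sketch of the overlap measurement described above, reusing the cosine helper from the earlier snippet; the function name and the label structure (one dict of coded fields per document) are our own assumptions, not the authors' implementation.

```python
def mean_overlap(vectors, codes, field, k=30, excluded=frozenset()):
    """Average share of a document's k nearest neighbours (cosine over the
    filtered term vectors) that carry the same value for one coding field."""
    filtered = [{t: w for t, w in v.items() if t not in excluded}
                for v in vectors]
    total = 0.0
    for i, vi in enumerate(filtered):
        sims = sorted(((cosine(vi, vj), j)
                       for j, vj in enumerate(filtered) if j != i),
                      reverse=True)
        top = [j for _, j in sims[:k]]
        total += sum(codes[j][field] == codes[i][field] for j in top) / k
    return total / len(filtered)

# e.g. mean_overlap(vectors, codes, "flight_phase",
#                   excluded=terms_to_exclude(reports, phases, {"TO", "LDG"}))
```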
Transversal dimensions
By erasing these dimensions of similarity, the filtering of associated terms has the capacity to bring out secondary similarity factors. Here is an example of such a dimension that emerged from our corpus. The following report deals with turbulence on landing, but additionally mentions dual control input, i.e. both pilots acting simultaneously on the aircraft controls:
INCURSION VFE SUITE CISAILLEMENT EN FINALE. [REPORT]. Fort cisaillement en finale reporté par les avions précédents. La soudaineté du phénomène surprend l'OPL PF. Légère incursion dans la VFE (3 ou 4 kts). Réponse des commandes par CDB (double pilotage pendant 1 à 2 s.). Avion stabilisé, l'OPL reprend les commandes. Atterrissage sans problème. -FIN-
(VFE exceedance following wind shear on final: strong wind shear reported by preceding aircraft surprises the first officer, who is pilot flying; slight VFE exceedance of 3-4 kts; the captain responds on the controls, with dual input for 1-2 seconds; the aircraft is stabilised, the first officer retakes the controls, and the landing is uneventful.)
When we look at the list of similar reports without any filter, in the top ranks we find those that mention turbulence on landing, such as:
FORT CISAILLEMENT DE VENT EN FINALE 26R CDG. [REPORT]. FORT CISAILLEMENT DE VENT EN FINALE. -FIN-
(Strong wind shear on final, runway 26R at CDG.)
For the same document, when we filter out the terms associated with the flight phase and the event type, the reports speaking only of turbulence on landing move further down the list of similar reports, and in their place we find those that share terms not associated with the phases and event types, notably reports dealing with dual control input, information that is not reflected in the coding. This common factor makes it possible to establish a link between these two reports, which in some cases can prove relevant for an expert:
BREF DOUBLE PILOTAGE AU DECOLLAGE. [REPORT]. OPL PF au décollage. Vent travers avec rafales. Brève action réflexe en latéral du CDB pour contrer rafale et début d'inclinaison à droite. Prise de priorité peu pertinente pour effet immédiat. -FIN-
(Brief dual control input on take-off: first officer pilot flying; gusty crosswind; brief reflex lateral input by the captain to counter a gust and an incipient right bank; the priority take-over had little relevance for any immediate effect.)
Conclusion and perspectives
The technique we have presented is part of a broader effort whose objective is to propose tools for exploring collections of documents and for bringing out "weak" similarity links between documents, links that would otherwise be masked by the most salient dimension of similarity. Designed from the outset for use by informed users, our intention is to evaluate its practical contribution by offering it to aviation safety experts in the form of a visualisation and exploration tool that lets the user dynamically choose, from a list of the most salient dimensions for the subset under analysis, the dimensions not to be taken into account in the computation. Relying on the coding ensures that the filter choices available to the experts reflect concepts they are used to manipulating in their analysis work. Having at present a proof of concept, we intend in the coming months to scale up by taking the entire coding of the collections into account. Since these are exploratory techniques, strongly dependent on the domain and on their precise applicative objective, we are not in a position to propose a classical evaluation protocol, and we count on evaluation through use and on a constant exchange with the users to judge the relevance of these methods.
References
CHANDOLA, V., BANERJEE, A. and KUMAR, V. (2009). Anomaly detection: A survey. ACM Computing Surveys (CSUR), 41(3):15.
HERMANN, E., LEBLOIS, S., MAZEAU, M., BOURIGAULT, D., FABRE, C., TRAVADEL, S., DURGEAT, P. and NOUVEL, D. (2008). Outils de Traitement Automatique des Langues appliqués aux comptes rendus d'incidents et d'accidents. In 16e Congrès de Maîtrise des Risques et de Sûreté de Fonctionnement, Avignon.
MANNING, C. D., RAGHAVAN, P. and SCHÜTZE, H. (2008). Introduction to Information Retrieval. Cambridge University Press, New York.
NAVARRO, E., CHUDY, Y., GAUME, B., CABANAC, G. and PINEL-SAUVAGNAT, K. (2011). Kodex ou comment organiser les résultats d'une recherche d'information par détection de communautés sur un graphe biparti ? In Actes de Coria 2011 : Conférence en Recherche d'Information et Applications.
PONVERT, M. (2009). Définition des besoins nécessaires à la mise en place d'un Data WareHouse dans le cadre du SGS Air France. Mémoire de D.E.A., École Nationale de l'Aviation Civile.
SALTON, G., WONG, A. and YANG, C. (1975). A vector space model for automatic indexing. Communications of the ACM, 18(11):613-620.
SPÄRCK-JONES, K. (2004). A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 60(5):493-502.
STEINBACH, M., KARYPIS, G. and KUMAR, V. (2000). A comparison of document clustering techniques. In KDD Workshop on Text Mining, volume 400, pages 525-526, Boston.
TULECHKI, N. (2011). Des outils de TAL en support aux experts de sûreté industrielle pour l'exploitation de bases de données de retour d'expérience. In Actes des 13èmes Rencontres des Étudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (RECITAL 2011).
6,309,434
The Extended DIRNDL Corpus as a Resource for Automatic Coreference and Bridging Resolution
DIRNDL is a spoken and written corpus based on German radio news, which features coreference and information-status annotation (including bridging anaphora and their antecedents), as well as prosodic information. We have recently extended DIRNDL with a fine-grained two-dimensional information status labeling scheme. We have also applied a state-of-the-art part-of-speech and morphology tagger to the corpus, as well as highly accurate constituency and dependency parsers. In the light of this development we believe that DIRNDL is an interesting resource for NLP researchers working on automatic coreference and bridging resolution. In order to enable and promote usage of the data, we make it available for download in an accessible tabular format, compatible with the formats used in the CoNLL and SemEval shared tasks on automatic coreference resolution.
[ 8267681, 935332, 989721, 41479182, 17209169, 14747729, 7708374, 10977241, 11898554, 2291483, 2381180 ]
The Extended DIRNDL Corpus as a Resource for Automatic Coreference and Bridging Resolution
Anders Björkelund, Kerstin Eckart, Arndt Riester, Nadja Schauffler and Katrin Schweitzer
Institute for Natural Language Processing (IMS), University of Stuttgart, Pfaffenwaldring 5B, 70569 Stuttgart, Germany
{anders.bjoerkelund, kerstin.eckart, arndt.riester, nadja.schauffler, katrin.schweitzer}@ims.uni-stuttgart.de
Keywords: anaphora, prosody, corpus annotation
Introduction
The Discourse Information Radio News Database for Linguistic analysis (DIRNDL) is a spoken corpus resource of German radio news (ca. 50,000 tokens, 3221 sentences). In its first release (Eckart et al., 2012), it was manually annotated for referential information status, i.e. the given-new classification of referring expressions (Riester et al., 2010), as well as prosodic GToBI(S) labels (Mayer, 1995). Constituent-structure annotations originated with the XLE parser (Crouch et al., 2011) and the LFG grammar by Rohrer and Forst (2006).
Aligning spoken language with its written transcript (or text with one of its read realizations) in a single resource is challenging for several reasons. Obviously, speech has a temporal determination which written language lacks. Punctuation marks (e.g. decimal points/commas) and compound words may receive different tokenisations in the different processing pipelines for written and spoken language, respectively. Moreover, speech, even that of trained newsreaders, is seldom flawless and contains disfluencies and slips of the tongue, which are not contained in the written transcripts. (If they were, this would cause trouble for the parser.) In DIRNDL, these problems are tackled by integrating both sets of data, with their different tokenisations, in a PostgreSQL database and providing an alignment that uses multiple links (e.g. for accidental repetitions).
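The alignment itself is not published with the paper; as a rough illustration of what multiple links look like, here is one way to align two tokenisations with Python's difflib, linking an accidentally repeated spoken token back to the single written token. All names are ours, and the heuristic is ours, not the DIRNDL procedure.

```python
from difflib import SequenceMatcher

def align_tokens(written, spoken):
    """Align a written and a spoken token sequence; a token may receive
    several links (e.g. an accidental repetition in the speech is linked
    twice to the single written token)."""
    links = []
    sm = SequenceMatcher(a=[w.lower() for w in written],
                         b=[s.lower() for s in spoken], autojunk=False)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "equal":
            links.extend((i, j1 + (i - i1)) for i in range(i1, i2))
        elif op in ("replace", "insert"):
            # disfluency or repetition: link each extra spoken token to a
            # neighbouring written token of the same form, if there is one
            for j in range(j1, j2):
                for i in range(max(i1 - 1, 0), min(i2 + 1, len(written))):
                    if written[i].lower() == spoken[j].lower():
                        links.append((i, j))
    return sorted(links)

print(align_tokens("das ist gut".split(), "das das ist gut".split()))
# -> [(0, 0), (0, 1), (1, 2), (2, 3)]: the written "das" carries two links
```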
The database has proven to be a valuable resource for testing linguistic hypotheses at the interface between discourse, information structure, morpho-syntax and prosody. For instance, Riester and Piontek (submitted) extract all adjective-noun sequences from the corpus, together with their prosodic realization, in order to test whether NPs with accented adjectives necessitate the existence of contrastive alternatives. Augurzky et al. (submitted) investigate the influence of segmental clashes on the frequency of prosodic phrase breaks at the transition between two referring expressions, and between nominal heads and their embedded arguments. Rosenberg et al. (2012) and Soto et al. (2013) use DIRNDL for training an automatic prosodic labeler.
We have recently improved DIRNDL by revising existing annotations and by adding new annotation layers, e.g. constituent trees and dependency trees from Björkelund et al. (2013), and named entities using Finkel et al. (2005) and Faruqui and Padó (2010). Based on this extension, we extracted DIRNDL_anaphora as a resource for the evaluation of automatic coreference and bridging resolvers. We exported the corpus in three tabular formats: two from recent shared tasks on automatic coreference resolution, i.e. the SemEval 2010 (Recasens et al., 2010) and CoNLL 2012 (Pradhan et al., 2012) shared tasks, and a third tabular format containing additional annotation layers which are not represented in the CoNLL or SemEval format but might be useful for the resolution task, e.g. information status labels and pitch accents. In this paper we describe the exported annotation layers as well as the formats used. The export is freely available for download at http://www.ims.uni-stuttgart.de/data/dirndl.
Annotation layers
In this section we review the various layers of annotation in the DIRNDL corpus and the new export. Table 1 gives an overview of the annotation layers in the DIRNDL corpus as described by Eckart et al. (2012) and in the new DIRNDL_anaphora release.
Table 1: Overview of annotation layers in the first DIRNDL release and in DIRNDL_anaphora.
                  DIRNDL (Eckart et al., 2012)      DIRNDL_anaphora
Pragmatic         information status according      information status according to the RefLex
annotations       to Riester et al. (2010)          scheme (Baumann and Riester, 2012)
Prosodic          GToBI(S) labels for pitch         GToBI(S) labels for pitch accents and boundary
annotations       accents and boundary tones        tones (revised manual annotations)
Morpho-syntactic  constituent trees by the XLE      i) lemmas predicted by the Mate lemmatizer
annotations       parser with the LFG grammar       (Bohnet, 2010); ii) part-of-speech tags and
                  of Rohrer and Forst (2006)        morphological tags by MarMoT; iii) constituent
                                                    trees and dependency trees from Björkelund
                                                    et al. (2013)
Semantic                                            named entities predicted by the Stanford named
annotations                                         entity recognizer (Finkel et al., 2005)
Pragmatic annotations
The corpus was previously annotated for referential information status following Riester et al. (2010). These annotations were replaced by two-dimensional information-status annotations following the RefLex scheme (Baumann and Riester, 2012), which distinguishes between referential and lexical information status. For example, (1) contains a referentially given (coreferential) phrase which comes with new lexical material (a so-called epithet). By contrast, in (2) there is a referentially new phrase which features lexically given material. For the referential as well as for the lexical level, the corpus contains links between the anaphor and its antecedent.
(1) DIRNDL s1730, 2007-26-03, 17:00: Ahtisaari plädiert für eine Unabhängigkeit des Kosovo unter internationaler Aufsicht. Dies sei die einzige politische und wirtschaftliche Option für die Zukunft [der [serbischen]_L-NEW [Provinz]_L-NEW]_R-GIVEN.
'Ahtisaari is making the case for an independence of Kosovo under international control. This would be the only political and economic option for the future of the Serbian province.'
(2) DIRNDL s206, 2007-25-03, 06:00: Ein Erdbeben der Stärke 7,2 hat Zentral-Japan erschüttert. Auch im Inselstaat Vanuatu im Südpazifik wurden [zwei [Beben]_L-GIVEN der [Stärke]_L-GIVEN 7,1 und 7,3]_R-NEW registriert.
'An earthquake measuring 7.2 has hit Central Japan. Also in the island state of Vanuatu in the Southern Pacific two quakes measuring 7.1 and 7.3 have been registered.'
By referential information status we refer to the classical notion of information status discussed in the literature, e.g. Prince (1981), Nissim et al. (2004), Riester et al. (2010). A referring expression is R-GIVEN if it is a coreference anaphor.
The label R-BRIDGING indicates that we are dealing with a bridging anaphor (Asher and Lascarides, 1998; Poesio and Vieira, 1998), i.e. a non-coreferring but nevertheless context-dependent expression, e.g. the European Union ... [the member states]_R-BRIDGING. Lexical information status captures semantic relations (e.g. a noun, verb or adjective is L-GIVEN if it is identical to, a synonym of, or a hypernym of a word contained in the context). An overview of the basic labels is shown in Table 2. These labels also have subcategories, and we refer the reader to Baumann and Riester (2012) for further details. Inter-annotator agreement on radio news data was determined in Riester and Baumann (2013) at κ = .75 for the referential level and κ = .64 for the lexical level.
Table 2: Overview of the basic RefLex scheme.
For both levels of information status, Baumann and Riester (2013) show that increased givenness on both levels leads to a lower accent rate and/or the use of perceptually less prominent (e.g. L*) accents. In particular, it is well known that (non-contrastive) coreferential anaphors in English and German are often deaccented, a fact which is well described in the literature (see e.g. Halliday (1967), Schwarzschild (1999), Umbach (2002), Büring (2007) and many others). It is therefore likely that information about pitch accents will be a useful feature in coreference resolution.
Prosodic annotations: GToBI(S) labels
DIRNDL comprises information about intonation, i.e. the way an utterance is organized tonally. A group of intonation models, the autosegmental-metrical models (essentially all based on Pierrehumbert, 1980), is well accepted and widely used when describing prosody. For a subset of DIRNDL (approximately 5 hrs of speech), tonal events were annotated manually according to an autosegmental intonation model for German (GToBI(S), cf. Mayer, 1995). Tonal events are pitch accents, which mark some of the words in a phrase as being more prominent than others, and boundary tones, which mark the tonal phrasing of the utterance. Essentially, a tonal event can be described as a local maximum or minimum in the intonation contour. Therefore, GToBI(S) labels describe the pitch contour by means of two levels, low (L) and high (H), representing regions in the speaker's register. That is, H describes a high local target (a peak) and L indicates a low local target in the contour. For example, a rising accent is composed of a low target on the accented syllable followed by a rise of the contour on the post-accented syllable, and is therefore labelled L*H. Analogously, H*L marks a falling accent. The GToBI(S) inventory also includes labels for the boundaries of tonal phrases: intermediate phrases, which are minor tonal phrases, are marked with the label "-"; intonation phrases, which correspond to major tonal phrases, are marked with the label "%". The latter can also be marked with a tone if the contour rises or falls, respectively, at the end of the phrase (H% or L%). Table 3 gives an overview of the complete label set.
Table 3: Overview of the GToBI(S) inventory.
Pitch accents                                Boundary tones
L*H   rise                                   %    intonation phrase boundary
H*L   fall                                   H%   high end of intonation phrase
H*    high peak with potential late fall     L%   low end of intonation phrase
L*    low target with potential late rise    %H   high beginning of intonation phrase
L*HL  rise-fall                              -    intermediate phrase boundary
HH*L  early peak
H*M   stylised contour
!     diacritic for tonal declination
*?    marker for uncertain accent placement
?     diacritic for uncertainty
Pitch accents are annotated on the syllable level. To make the annotations available on the word level, in DIRNDL each accent was enclosed in two "|" symbols, and if several accents occurred on one word token, they were concatenated in the order of appearance. For example, if a token on the word level was accented with a rising accent followed by a falling one, it is represented as |L*H||H*L|.
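A small sketch of this word-level encoding; the function is ours, written to match the convention just described (including the "NONE" label for unaccented words used in the export below).

```python
def word_level_accents(syllable_accents):
    """Collapse the syllable-level GToBI(S) accents of one word token into
    the word-level string: each accent wrapped in '|...|', concatenated in
    order of appearance; unaccented words are marked 'NONE'."""
    accents = [a for a in syllable_accents if a]   # drop unaccented syllables
    return "".join("|%s|" % a for a in accents) if accents else "NONE"

print(word_level_accents(["L*H", None, "H*L"]))    # -> |L*H||H*L|
print(word_level_accents([None, None]))            # -> NONE
```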
For DIRNDL_anaphora, the GToBI(S) annotations were checked for plausibility using pitch accent shape information as retrieved by a parametric intonation model (Möhler, 2001), and corrected where necessary.
Automatic morpho-syntactic annotations
The DIRNDL corpus was originally parsed with the XLE parser (Crouch et al., 2011) and the LFG grammar by Rohrer and Forst (2006). XLE provides deep LFG constituent structure analyses (LFG F-structures are currently not integrated in the database) but unfortunately yields fragmented parses in a substantial number of cases. In order to provide additional, more robust syntactic information, as well as more fine-grained morphological annotations, we applied several other automatic tools to the corpus. Specifically, we added the following annotations: (i) automatically predicted part-of-speech tags and morphological tags, predicted with MarMoT (Mueller et al., 2013), which has been shown to outperform other available part-of-speech and morphology taggers; (ii) predicted lemmas, using the lemmatizer of the Mate tools toolkit (Bohnet, 2010), a state-of-the-art statistical lemmatizer; (iii) automatically predicted constituent trees with the constituent parser from Björkelund et al. (2013); (iv) automatically predicted dependency trees with the dependency parser from Björkelund et al. (2013).
The constituent and dependency parsers by Björkelund et al. (2013) have shown state-of-the-art performance and recently obtained the best results for German in the recent SPMRL 2013 Shared Task on parsing of morphologically rich languages (Seddah et al., 2013). In contrast to the LFG parser, which is rule-based and driven by a grammar, all these tools are data-driven. They were all trained on the TiGer treebank (Brants et al., 2002; Seeker and Kuhn, 2012) and therefore provide annotations adhering to the TiGer annotation scheme. Example constituent and dependency analyses of a fragment of (1) are shown in Figures 1 and 2, respectively.
Figure 2: DIRNDL s1730, dependency tree.
Since we have no gold standard annotations for these layers, we are unable to evaluate the accuracy of these tools on the DIRNDL data set; we refer the reader to the papers on the respective tools for evaluations on other data sets.
Automatic named entity annotations
Named entities are closely related to coreferentiality. In the RefLex scheme, named entities typically receive an R-UNUSED label on their first occurrence (R-UNUSED entities may be subclassified as to whether the annotator decides them to be KNOWN or UNKNOWN), and an R-GIVEN label on subsequent occurrences. We added named entities using the Stanford named entity recognizer (Finkel et al., 2005). Specifically, we used the German model created by Faruqui and Padó (2010), which, in addition to standard training data, also exploits large amounts of unlabeled data in its model.
DIRNDL export
This section describes the basic constitution of DIRNDL_anaphora and gives an example of the tabular export format used in the release.
Constitution
The DIRNDL corpus consists of hourly radio news broadcasts from three days during 2007. The respective text transcripts were retrieved from the website of the corresponding radio station. The export of DIRNDL we describe in this paper does not contain the audio files of the spoken news, but is restricted to the transcripts. It is important to note that, since the news broadcasts were consecutive, several items are repeated across broadcasts, sometimes with minor changes in between (as part of the download package we provide a mapping from document identifiers to topics of the news items, which enables the extraction of repeated news items). When using this resource either for training or testing automatic systems, we advise users to pay attention to these repetitions while conducting their experiments.
Tabular format
The original representation of the DIRNDL corpus is a relational database. While a relational database enables elaborate SQL queries, interfacing with a relational database is not the most convenient approach for NLP developers who are working on training and testing automatic systems. We therefore provide the new DIRNDL export in a tabular format, similar to the one used in the CoNLL 2011 and 2012 shared tasks (Pradhan et al., 2011; Pradhan et al., 2012). This also means that existing evaluation tools for automatic coreference can be used off the shelf against DIRNDL_anaphora. An example of the tabular format is given in Figure 3, representing the two sentences from (1).
Figure 3: Example of the export format.
The format represents each token on a single line, with sentences separated by blank lines. Document boundaries are represented by the lines #begin document and #end document, where the former also contains a document identifier. In addition to the surface forms of each token, annotations are provided as additional columns in each line. A summary of the contents of the columns is displayed in Table 4.
Table 4: Column numbers and content of our format.
The first two columns hold document identifiers (document name and part); the following two hold sentence and token indexes, followed by the surface form of the word. Columns 6 through 9 hold the predicted lemma, part-of-speech tag, morphological analysis, and named entity, respectively. The next three columns correspond to the syntactic structure: the token index of the head word and the edge label according to the dependency tree (columns 10 and 11), followed by the constituent structure (column 12). Columns 13 and 14 encode the prosodic features described above, i.e. the pitch accents, followed by the boundary tones. As outlined above, multiple pitch accents are concatenated. For instance, the second token in the first sentence (UNO-Sondergesandte) was realised with two pitch accents, a rising and a falling one, and was followed by an intermediate phrase boundary. If a word was realised without a pitch accent or without a boundary tone, the respective entry in the column is marked with the label "NONE". In the absence of an adequate mapping between the spoken realization and the textual tokenization, the label "N/A" was applied in DIRNDL_anaphora, for instance in the case of punctuation tokens or major deviations due to slips of the tongue. This label was also used for those cases where no prosodic annotations were available.
The final three columns represent the pragmatic annotations: first the lexical layer (column 15), then the referential layer (column 16). The very last column encodes coreference, by grouping mentions into sets with common identifiers, as is the case in the CoNLL shared task format (Pradhan et al., 2012). For instance, the mention des Kosovo is R-UNUSED and belongs to the coreference cluster with id 901. The word Kosovo as such is labeled L-NEW. In addition to the referential and lexical information status labels, each mention that bears such a label has a unique identifier associated, separated by the $ sign, i.e. des Kosovo has the identifier 6372 in the referential layer and Kosovo has the identifier 10513 in the lexical layer. The purpose of these identifiers is to simplify parsing the format in case of nested mentions, e.g. the full phrase für eine Unabhängigkeit des Kosovo unter internationaler Aufsicht is labeled R-BRIDGING-CONTAINED (a special case of R-BRIDGING, where the "antecedent" is contained within the referring expression itself), whereas the underlined subphrase is labeled R-GENERIC. Since some of the RefLex labels have anchors in other mentions, these are also included as part of the labels in columns 15 and 16, separated by another $. For instance, the L-GIVEN-SUPER on Provinz in the second sentence indicates that this word is a hypernym of its anchor. The anchor is indicated by the last part of the label in column 15, 1-9-9, which denotes sentence number, first token, and last token, respectively. That is, the anchor for this label is the span Kosovo in the first sentence. As mentioned above, in addition to the format described here, the release of the corpus also includes two other tabular versions of the same data: the CoNLL 2011/2012 shared task format and the SemEval 2010 shared task format.
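As an illustration of how the last column can be consumed, here is a sketch that collects coreference chains from such a file. It assumes the usual CoNLL bracket convention ("(id" opens a mention, "id)" closes it) and "-" or "_" for tokens outside any mention; the function names are ours.

```python
from collections import defaultdict

def coreference_chains(lines, coref_col=-1):
    """Read CoNLL-style lines and return {chain_id: [(start, end), ...]},
    where start/end are 0-based indices over the non-comment token lines."""
    opened = defaultdict(list)       # chain id -> stack of open start indices
    chains = defaultdict(list)
    tok = 0
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue                 # sentence break or document boundary
        field = line.split()[coref_col]
        if field not in ("-", "_"):
            for part in field.split("|"):
                cid = part.strip("()")
                if part.startswith("("):
                    opened[cid].append(tok)
                if part.endswith(")"):
                    chains[cid].append((opened[cid].pop(), tok))
        tok += 1
    return dict(chains)
```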
Conclusion
We presented DIRNDL_anaphora, a resource for anaphora resolution created from the extended DIRNDL corpus, which contains spoken and written radio news amounting to roughly 50,000 words. The corpus has been manually annotated for prosodic and pragmatic information. The new version includes a revised and updated version of the pragmatic annotations, as well as automatic predictions by state-of-the-art morphosyntactic tools, including part-of-speech and morphology as well as dependency and phrase-structure syntactic trees. Since our explicit goal is to enable developers of automatic tools for coreference and bridging resolution to use DIRNDL_anaphora as a resource for evaluation, we are making the corpus available for download at http://www.ims.uni-stuttgart.de/data/dirndl in established text-based formats previously used for coreference resolution.
References
Asher, N. and Lascarides, A. (1998). Bridging. Journal of Semantics, 15:83-113.
Augurzky, P., Riester, A., and Tomaschek, F. (submitted). Segmental effects on prosody: modelling German argument structure. Phonetik & Phonologie 9, Zurich.
Baumann, S. and Riester, A. (2012). Referential and Lexical Givenness: Semantic, Prosodic and Cognitive Aspects. In Elordieta, G. and Prieto, P., editors, Prosody and Meaning, volume 25 of Interface Explorations, pages 119-162. Mouton de Gruyter, Berlin.
Baumann, S. and Riester, A. (2013). Coreference, Lexical Givenness and Prosody in German. Lingua, 136:16-37.
Björkelund, A., Cetinoglu, O., Farkas, R., Mueller, T., and Seeker, W. (2013). (Re)ranking meets morphosyntax: State-of-the-art results from the SPMRL 2013 shared task. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 135-145, Seattle, Washington, USA. Association for Computational Linguistics.
Bohnet, B. (2010). Top accuracy and fast dependency parsing is not a contradiction. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 89-97, Beijing, China.
Brants, S., Dipper, S., Hansen, S., Lezius, W., and Smith, G. (2002). The TIGER treebank. In Proceedings of the First Workshop on Treebanks and Linguistic Theories (TLT 2002), pages 24-41, Sozopol, Bulgaria.
Büring, D. (2007). Intonation, semantics and information structure. In Ramchand, G. and Reiss, C., editors, The Oxford Handbook of Linguistic Interfaces. Oxford University Press.
Crouch, D., Dalrymple, M., Kaplan, R., King, T., Maxwell, J., and Newman, P. (2011). XLE Documentation.
Eckart, K., Riester, A., and Schweitzer, K. (2012). A Discourse Information Radio News Database for Linguistic Analysis. In Chiarcos, C., Nordhoff, S., and Hellmann, S., editors, Linked Data in Linguistics. Representing and Connecting Language Data and Language Metadata, pages 65-76. Springer, Heidelberg.
Faruqui, M. and Padó, S. (2010). Training and evaluating a German named entity recognizer with semantic generalization. In Proceedings of KONVENS 2010, Saarbrücken, Germany.
Finkel, J. R., Grenager, T., and Manning, C. (2005). Incorporating non-local information into information extraction systems by Gibbs sampling. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 363-370, Ann Arbor, Michigan.
Halliday, M. (1967). Notes on Transitivity and Theme in English, Part 2. Journal of Linguistics, 3:199-244.
Mayer, J. (1995). Transcription of German Intonation. The Stuttgart System. http://www.ims.uni-stuttgart.de/phonetik/joerg/labman/STGTsystem.html.
Möhler, G. (2001). Improvements of the PaIntE model for F0 parametrization. Technical report, Institute of Natural Language Processing, University of Stuttgart. Draft version.
Mueller, T., Schmid, H., and Schütze, H. (2013). Efficient higher-order CRFs for morphological tagging. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 322-332, Seattle, Washington, USA.
Nissim, M., Dingare, S., Carletta, J., and Steedman, M. (2004). An Annotation Scheme for Information Status in Dialogue. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC), Lisbon.
Pierrehumbert, J. B. (1980). The phonology and phonetics of English intonation. Ph.D. thesis, Massachusetts Institute of Technology.
Poesio, M. and Vieira, R. (1998). A Corpus-Based Investigation of Definite Description Use. Computational Linguistics, 24(2):183-216.
Pradhan, S., Ramshaw, L., Marcus, M., Palmer, M., Weischedel, R., and Xue, N. (2011). CoNLL-2011 shared task: Modeling unrestricted coreference in OntoNotes. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 1-27, Portland, Oregon, USA.
Pradhan, S., Moschitti, A., Xue, N., Uryupina, O., and Zhang, Y. (2012). CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In Joint Conference on EMNLP and CoNLL - Shared Task, pages 1-40, Jeju Island, Korea.
Prince, E. F. (1981). Toward a Taxonomy of Given-New Information. In Cole, P., editor, Radical Pragmatics, pages 233-255. Academic Press, New York.
Recasens, M., Màrquez, L., Sapena, E., Martí, M. A., Taulé, M., Hoste, V., Poesio, M., and Versley, Y. (2010). SemEval-2010 task 1: Coreference resolution in multiple languages. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 1-8, Uppsala, Sweden.
Riester, A. and Baumann, S. (2013). Focus Triggers and Focus Types from a Corpus Perspective. Dialogue & Discourse, 4(2).
Riester, A. and Piontek, J. (submitted). Anarchy in the NP. When new nouns get deaccented and given nouns don't.
Riester, A., Lorenz, D., and Seemann, N. (2010). A Recursive Annotation Scheme for Referential Information Status. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC), pages 717-722, Valletta, Malta.
Rohrer, C. and Forst, M. (2006). Improving Coverage and Parsing Quality of a Large-Scale LFG for German. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC), Genova.
Rosenberg, A., Cooper, E., Levitan, R., and Hirschberg, J. (2012). Cross-Language Prominence Detection. In 6th International Conference on Speech Prosody, Shanghai.
Schwarzschild, R. (1999). GIVENness, AvoidF, and Other Constraints on the Placement of Accent. Natural Language Semantics, 7(2):141-177.
Seddah, D., Tsarfaty, R., Kübler, S., Candito, M., Choi, J. D., Farkas, R., Foster, J., Goenaga, I., Gojenola Galletebeitia, K., Goldberg, Y., Green, S., Habash, N., Kuhlmann, M., Maier, W., Marton, Y., Nivre, J., Przepiórkowski, A., Roth, R., Seeker, W., Versley, Y., Vincze, V., Woliński, M., and Wróblewska, A. (2013). Overview of the SPMRL 2013 shared task: A cross-framework evaluation of parsing morphologically rich languages. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 146-182, Seattle, Washington, USA.
Seeker, W. and Kuhn, J. (2012). Making ellipses explicit in dependency conversion for a German treebank. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012), pages 3132-3139, Istanbul, Turkey.
Soto, V., Cooper, E., Rosenberg, A., and Hirschberg, J. (2013). Cross-Language Phrase Boundary Detection. In Proceedings of ICASSP, Vancouver.
Umbach, C. (2002). (De)accenting Definite Descriptions. Theoretical Linguistics, 27(2/3).
10,994,429
Syntactic SMT Using a Discriminative Text Generation Model
We study a novel architecture for syntactic SMT. In contrast to the dominant approach in the literature, the system does not rely on translation rules, but treats translation as an unconstrained target sentence generation task, using soft features to capture lexical and syntactic correspondences between the source and target languages. Target syntax features and bilingual translation features are trained consistently in a discriminative model. Experiments using the IWSLT 2010 dataset show that the system achieves BLEU comparable to state-of-the-art syntactic SMT systems.
[ 10752264, 6677774, 17555617, 1388578, 7580069, 1391785, 438829, 1557806, 934325, 9146682, 10107837, 8806211, 1613767, 7803592, 11981387, 384994, 15929202, 5902329, 252796, 10313983 ]
Syntactic SMT Using a Discriminative Text Generation Model
Yue Zhang (SUTD, Singapore; yuezhang@sutd.edu.sg), Kai Song (NEU, China; songkai.sk@alibaba-inc.com), Linfeng Song (ICT/CAS, China; songlinfeng@ict.ac.cn), Jingbo Zhu (NEU, China; zhujingbo@mail.neu.edu.cn), Qun Liu (CNGL, Ireland and ICT/CAS, China; qliu@computing.dcu.ie)
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, October 25-29, 2014. (c) Association for Computational Linguistics.
* Work done while visiting Singapore University of Technology and Design (SUTD).
Introduction
Translation rules have been central to hierarchical phrase-based and syntactic statistical machine translation (SMT) (Galley et al., 2004; Chiang, 2005; Liu et al., 2006; Quirk et al., 2005; Shen and Joshi, 2008; Xie et al., 2011). They are attractive because they capture the recursiveness of languages and the syntactic correspondences between them. One important advantage of translation rules is that they allow efficient decoding by treating MT as a statistical parsing task, transforming a source sentence to its translation via recursive rule application. The efficiency takes root in the fact that target word orders are encoded in translation rules. This fact, however, also leads to rule explosion, noise and coverage problems (Auli et al., 2009), which can hurt translation quality. Flexibility of function word usage, rich morphology and paraphrasing all add to the difficulty of rule extraction. In addition, restricting target word orders by hard translation rules can also hurt output fluency.
(Figure 1: Overall system architecture.)
A potential solution to the problems above is to treat translation as a generation task, representing syntactic correspondences using soft features. Both adequacy and fluency can potentially be improved by giving full flexibility to target synthesis, and leaving all options to the statistical model. The main challenge of this method is a significant increase in the search space (Knight, 1999). To this end, recent advances in tackling complex search tasks for text generation offer some solutions (White and Rajkumar, 2009; Zhang and Clark, 2011).
In this short paper, we present a preliminary investigation of the possibility of building a syntactic SMT system that does not use hard translation rules, by utilizing recent advances in statistical natural language generation (NLG). The overall architecture is shown in Figure 1. Translation is performed by first parsing the source sentence, then transferring source words and phrases to their target equivalences, and finally synthesizing the target output.
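Viewed end to end, the Figure 1 pipeline can be sketched as three composed stages. The function names below are illustrative placeholders, not the paper's code:

```python
# High-level sketch of the architecture in Figure 1; each argument is
# a placeholder for a component detailed in the following sections.
def translate(source_sentence, parser, transfer, synthesize):
    source_tree = parser(source_sentence)             # parse the source
    options = transfer(source_sentence, source_tree)  # lexical transfer
    return synthesize(options, source_tree)           # target synthesis
```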
We choose dependency grammar for both the source and the target syntax, and adapt the syntactic text synthesis system of Zhang (2013), which performs dependency-based linearization. The linearization task for MT is different from the monolingual task in that not all translation options are used to build the output, and that bilingual correspondences need to be taken into account during synthesis. The algorithms of Zhang (2013) are modified to perform word selection as well as ordering, using two sets of features to control translation adequacy and fluency, respectively. Preliminary experiments on the IWSLT1 2010 data show that the system gives BLEU comparable to traditional tree-to-string and string-to-tree translation systems. It demonstrates the feasibility of leveraging statistical NLG techniques for SMT, and the possibility of building a statistical transfer-based MT system.
Approach
The main goal being proof of concept, we keep the system simple by utilizing existing methods for the main components, minimizing engineering efforts. Shown in Figure 1, the end-to-end system consists of two main components: lexical transfer and synthesis. The former provides candidate translations for (overlapping) source words and phrases. Although lexicons and rules can be used for this step, we take a simple statistical alignment-based approach. The latter searches for a target translation by constructing dependency trees bottom-up. The process can be viewed as a syntax-based generation process from a bag of overlapping translation options.
Lexical transfer
We perform word alignment using IBM model 4 (Brown et al., 1993), and then extract phrase pairs according to the alignment and automatically annotated target syntax. In particular, consistent (Och et al., 1999) and cohesive (Fox, 2002) phrase pairs are extracted from intersected alignments in both directions: the target side must form a projective span, with a single root, and the source side must be contiguous. A resulting phrase pair consists of the source phrase, its target translation, as well as the head position and head part-of-speech (POS) of the target span, which are useful for target synthesis. We further restrict that neither the source nor the target side of a valid phrase pair contains over s words. Given an input source sentence, the lexical transfer unit finds all valid target translation options for overlapping source phrases up to size s, and feeds them as inputs to the target synthesis decoder. The translation options with a probability below λ · P_max are filtered out, where P_max is the probability of the most probable translation (a minimal sketch of this filter appears after the synthesis overview below). Here the probability of a target translation is calculated as the count of the translation divided by the count of all translations of the source phrase.
Synthesis
The synthesis module is based on the monolingual text synthesis algorithm of Zhang (2013), which constructs an ordered dependency tree given a bag of words. In the bilingual setting, inputs to the algorithm are translation options, which can be overlapping and mutually exclusive, and not all of which are necessarily included in the output. As a result, the decoder needs to perform word selection in addition to word ordering. Another difference between the bilingual and monolingual settings is that the former requires translation adequacy in addition to output fluency. We largely rely on the monolingual system for MT decoding.
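As a minimal illustration of the lexical transfer filter described above (not the paper's implementation; the phrase-table representation is an assumption), the relative-frequency estimate and the λ · P_max cutoff can be sketched as:

```python
# Minimal sketch of the translation-option filter described above;
# `phrase_counts` (source phrase -> Counter of target translations)
# is an assumed representation of the extracted phrase pairs.
from collections import Counter

def translation_options(phrase_counts, source_phrase, lam=0.1):
    counts = phrase_counts.get(source_phrase, Counter())
    total = sum(counts.values())
    if total == 0:
        return []
    # Relative frequency: count of this translation divided by the
    # count of all translations of the source phrase.
    probs = {t: c / total for t, c in counts.items()}
    p_max = max(probs.values())
    # Discard options whose probability falls below lambda * P_max.
    return sorted((t, p) for t, p in probs.items() if p >= lam * p_max)
```

The default lam=0.1 mirrors the λ value reported in the experiments section below.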
To deal with overlapping translation options, a source coverage vector is used to impose mutual exclusiveness on input words and phrases. Each element in the coverage vector is a binary value that indicates whether a particular source word has been translated in the corresponding target hypothesis. For translation adequacy, we use a set of bilingual features on top of the set of monolingual features for text synthesis.
Search
The search algorithm is the best-first algorithm of Zhang (2013). Each search hypothesis is a partial or full target-language dependency tree, and hypotheses are constructed bottom-up from leaf nodes, which are translation options. An agenda is used to maintain a list of search hypotheses to be expanded, and a chart is used to record a set of accepted hypotheses. Initially empty, the chart is a beam of size k · n, where n is the number of source words and k is a positive integer. The agenda is a priority queue, initialized with all leaf hypotheses (i.e. translation options). At each step, the highest-scored hypothesis e is popped off the agenda, and expanded by combination with all hypotheses on the chart in all possible ways, with the set of newly generated hypotheses e_1, e_2, ..., e_N being put onto the agenda, and e being put onto the chart. When two hypotheses are combined, they can be put in two different orders, and in each case different dependencies can be constructed between their head words, leading to different new hypotheses. The decoder expands a fixed number L of hypotheses, and then takes the highest-scored chart hypothesis that contains over β · n words as the output, where β is a real number near 1.0.
Table 1: Monolingual feature templates.
dependency syntax: WORD(h)·POS(h)·NORM(size), WORD(h)·NORM(size), POS(h)·NORM(size); POS(h)·POS(m)·POS(b)·dir; POS(h)·POS(h_l)·POS(m)·POS(m_r)·dir (h > m), POS(h)·POS(h_r)·POS(m)·POS(m_l)·dir (h < m); WORD(h)·POS(m)·POS(m_l)·dir, WORD(h)·POS(m)·POS(m_r)·dir; POS(h)·POS(m)·POS(m_1)·dir, POS(h)·POS(m_1)·dir, POS(m)·POS(m_1)·dir; WORD(h)·POS(m)·POS(m_1)·POS(m_2)·dir, POS(h)·POS(m)·POS(m_1)·POS(m_2)·dir, ...
dependency syntax for completed words: WORD(h)·POS(h)·WORD(h_l)·POS(h_l), POS(h)·POS(h_l), WORD(h)·POS(h)·POS(h_l), POS(h)·WORD(h_l)·POS(h_l), WORD(h)·POS(h)·WORD(h_r)·POS(h_r), POS(h)·POS(h_r), ...
surface string patterns (B = bordering index): WORD(B-1)·WORD(B), POS(B-1)·POS(B), WORD(B-1)·POS(B), POS(B-1)·WORD(B), WORD(B-1)·WORD(B)·WORD(B+1), WORD(B-2)·WORD(B-1)·WORD(B), POS(B-1)·POS(B)·POS(B+1), ...
surface string patterns for complete sentences: WORD(0), WORD(0)·WORD(1), WORD(size-1), WORD(size-1)·WORD(size-2), POS(0), POS(0)·POS(1), POS(0)·POS(1)·POS(2), ...
Model and training
A scaled linear model is used by the decoder to score search hypotheses: Score(e) = θ · Φ(e) / |e|, where Φ(e) is the global feature vector of the hypothesis e, θ is the parameter vector of the model, and |e| is the number of leaf nodes in e. The scaling factor |e| is necessary because hypotheses with different numbers of words are compared with each other in the search process to capture translation equivalence.
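The following schematic ties together the Search and Model paragraphs above, under illustrative assumptions about the hypothesis representation. The `combine` callback stands in for hypothesis combination (both surface orders, alternative head dependencies, and the coverage-vector check), which is not spelled out here; this is a sketch, not the authors' implementation:

```python
# Schematic sketch of the best-first decoder and the scaled linear
# model Score(e) = theta . Phi(e) / |e| described above.
import heapq

def score(theta, hyp):
    # hyp["feats"]: sparse feature counts; hyp["size"]: number of leaves.
    dot = sum(theta.get(f, 0.0) * v for f, v in hyp["feats"].items())
    return dot / max(hyp["size"], 1)

def best_first_decode(options, theta, n, combine, k=16, L=200, beta=0.8):
    # Agenda: priority queue (max-heap via negated scores, with a
    # counter as tie-breaker); chart: beam of size k * n.
    agenda = [(-score(theta, h), i, h) for i, h in enumerate(options)]
    heapq.heapify(agenda)
    chart, fresh = [], len(options)
    for _ in range(L):  # expand a fixed number L of hypotheses
        if not agenda:
            break
        _, _, e = heapq.heappop(agenda)
        for c in chart:
            # `combine` must try both orders and alternative head
            # dependencies, rejecting overlapping coverage vectors.
            for new in combine(e, c):
                heapq.heappush(agenda, (-score(theta, new), fresh, new))
                fresh += 1
        chart.append(e)
        chart.sort(key=lambda h: -score(theta, h))
        chart = chart[: k * n]
    # Output: best chart hypothesis covering over beta * n words.
    full = [h for h in chart if h["size"] >= beta * n]
    return max(full, key=lambda h: score(theta, h)) if full else None
```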
While the monolingual features of Zhang (2013) are applied (example feature templates from the system are shown in Table 1), an additional set of bilingual features is defined, shown in Table 2. In the tables, s and t represent the source and target, respectively; h and m represent the head and modifier in a dependency arc, respectively; h_l and h_r represent the neighboring words on the left and right of h, respectively; m_l and m_r represent the neighboring words on the left and right of m, respectively; m_1 and m_2 represent the closest and second-closest siblings of m on the side of h, respectively. dir represents the arc direction (i.e. left or right); PHRASE represents a lexical phrase; P(trans) represents the source-to-target translation probability from the phrase table, used as a real-valued feature; path represents the shortest path in the source dependency tree between the two nodes that correspond to the target head and modifier, respectively; LEN(path) represents the number of arcs on path, normalized into the bins [5, 10, 20, 40+]; LABELS(path) represents the array of dependency arc labels on path; LABELSPOS(path) represents the array of dependency arc labels and source POS on path. In addition, a real-valued four-gram language model feature is also used, with four-grams extracted from the surface boundary when two hypotheses are combined.
We apply the discriminative learning algorithm of Zhang (2013) to train the parameters θ (a schematic stand-in for such an update is sketched below). The algorithm requires training examples that consist of full target derivations, with leaf nodes being input translation options. However, the readily available training examples are automatically-parsed target derivations, with leaf nodes being the reference translation. As a result, we apply a search procedure to find a derivation process through which the target dependency tree is constructed from a subset of input translation options. The search procedure can be treated as a constrained decoding process, where only the oracle tree and its subtrees can be constructed. In case the set of translation options cannot lead to the oracle tree, we ignore the training instance.2 Although the ignored training sentence pairs cannot be utilized for training the discriminative synthesizer, they are nevertheless used for building the phrase table and training the language model.
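The paper trains θ with the discriminative learning algorithm of Zhang (2013), which is not reproduced here. As a rough, clearly simplified stand-in, a structured-perceptron-style update toward the oracle derivation could look like the following (all names are illustrative):

```python
# Rough stand-in for the discriminative training step: a structured
# perceptron update toward the oracle derivation found by constrained
# decoding. This simplification is NOT Zhang (2013)'s exact algorithm.
def perceptron_update(theta, oracle_feats, predicted_feats, lr=1.0):
    for f, v in oracle_feats.items():       # reward oracle features
        theta[f] = theta.get(f, 0.0) + lr * v
    for f, v in predicted_feats.items():    # penalize predicted features
        theta[f] = theta.get(f, 0.0) - lr * v
    return theta
```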
Experiments
We perform experiments on the IWSLT 2010 Chinese-English dataset, which consists of training sentence pairs from the dialog task (dialog) and the Basic Travel and Expression Corpus (BTEC). The union of dialog and BTEC is taken as our training set, which contains 30,033 sentence pairs. For system tuning, we use the IWSLT 2004 test set (also released as the second development test set of IWSLT 2010), which contains 500 sentences. For the final test, we use the IWSLT 2003 test set (also released as the first development test set of IWSLT 2010), which contains 506 sentences. The Chinese sentences in the datasets are segmented using NiuTrans3 (Xiao et al., 2012), while POS-tagging of both English and Chinese is performed using ZPar4 version 0.5 (Zhang and Clark, 2011). We train the English POS-tagger on the WSJ sections of the Penn Treebank (Marcus et al., 1993), turned into lower case. For syntactic parsing of both English and Chinese, we use the default models of ZPar 0.5.
We choose three baseline systems: a string-to-tree (S2T) system, a tree-to-string (T2S) system and a tree-to-tree (T2T) system (Koehn, 2010). The Moses release 1.0 implementations of all three systems are used, with default parameter settings. IRSTLM5 release 5.80.03 (Federico et al., 2008) is used to train a four-gram language model over the English training data, which is applied to the baseline systems and our system. Kneser-Ney smoothing is used to train the language model.
We use the tuning set to determine the optimal number of training iterations. The translation option filter λ is set to 0.1; the phrase size limit s is set to 5 in order to verify the effectiveness of synthesis; the number of expanded nodes L is set to 200; the chart factor k is set to 16 for a balance between efficiency and accuracy; the goal parameter β is set to 0.8.
The final scores of our system and the baselines are shown in Table 3. Our system gives a BLEU of 34.24, which is comparable to the baseline systems. Some example outputs are shown in Table 4. Manual comparison does not show significant differences in overall translation adequacy or fluency between the outputs of the four systems. However, one observation is that while our system can produce more fluent outputs, its choice of translation options is more frequently incorrect. This suggests that while the target synthesis component is effective in the bilingual setting, a stronger lexical selection component may be necessary for better translation quality.
Related work
As discussed in the introduction, our work is closely related to previous studies on syntactic MT, with the salient difference that we do not rely on hard translation rules, but allow free target synthesis. The contrast can be summarized as "translation by parsing" vs "translation by generation".
There has been a line of research on generation for translation. Soricut and Marcu (2006) use a form of weighted IDL-expressions (Nederhof and Satta, 2004) for generation. Bangalore et al. (2007) treat MT as a combination of global lexical transfer and word ordering; their generation component does not perform lexical selection, relying on an n-gram language model to order target words. Goto et al. (2012) use a monotonic phrase-based system to perform target word selection, and treat target ordering as a post-processing step. More recently, Chen et al. (2014) translate source dependencies arc-by-arc to generate pseudo target dependencies, and generate the translation by reordering of arcs. In contrast with these systems, ours relies more heavily on a syntax-based synthesis component, in order to study the usefulness of statistical NLG for SMT.
With respect to syntax-based word ordering, Chang and Toutanova (2007) and He et al. (2009) study a simplified word ordering problem by assuming that the unordered target dependency tree is given. Wan et al. (2009) and Zhang and Clark (2011) study the ordering of a bag of words, without input syntax. Zhang et al. (2012), Zhang (2013) and Song et al. (2014) further extended this line of research by adding input syntax and allowing joint inflection and ordering. de Gispert et al. (2014) use a phrase-structure grammar for word ordering. Our generation system is based on the work of Zhang (2013), but further allows lexical selection.
Our work is also in line with that of Liang et al. (2006), Blunsom et al. (2008), Flanigan et al. (2013) and Yu et al. (2013) in that we build a discriminative model for SMT.
Conclusion
We investigated a novel system for syntactic machine translation, treating MT as an unconstrained generation task, solved by using a single discriminative model with both monolingual syntax and bilingual translation features.
Syntactic correspondence is captured by using soft features rather than hard translation rules, which are used by most syntax-based statistical methods in the literature. Our results are preliminary in the sense that the experiments were performed using a relatively small dataset, and little engineering effort was made on fine-tuning of parameters for the baseline and proposed models. Our Python implementation gives the same level of BLEU scores compared with baseline syntactic SMT systems, but is an order of magnitude slower than Moses. However, the results demonstrate the feasibility of leveraging text generation techniques for machine translation, directly connecting the two currently rather separated research fields. The system is not strongly dependent on the specific generation algorithm, and one potential of the SMT architecture is that it can directly benefit from advances in statistical NLG technology.

Table 2: Bilingual feature templates.
translation features: PHRASE(m)·PHRASE(t), P(trans)
bilingual syntactic features: POS(th)·POS(tm)·dir·LEN(path), WORD(th)·POS(tm)·dir·LEN(path), POS(th)·WORD(tm)·dir·LEN(path), WORD(th)·WORD(tm)·dir·LEN(path), WORD(sh)·WORD(sm)·dir·LEN(path), WORD(sh)·WORD(th)·dir·LEN(path), WORD(sm)·WORD(tm)·dir·LEN(path)
bilingual syntactic features (LEN(path) ≤ 3): POS(th)·POS(tm)·dir·LABELS(path), WORD(th)·POS(tm)·dir·LABELS(path), POS(th)·WORD(tm)·dir·LABELS(path), WORD(th)·WORD(tm)·dir·LABELS(path), WORD(sh)·WORD(sm)·dir·LABELS(path), WORD(sh)·WORD(th)·dir·LABELS(path), WORD(sm)·WORD(tm)·dir·LABELS(path), POS(th)·POS(tm)·dir·LABELSPOS(path), WORD(th)·POS(tm)·dir·LABELSPOS(path), POS(th)·WORD(tm)·dir·LABELSPOS(path), WORD(th)·WORD(tm)·dir·LABELSPOS(path), WORD(sh)·WORD(sm)·dir·LABELSPOS(path), WORD(sh)·WORD(th)·dir·LABELSPOS(path), WORD(sm)·WORD(tm)·dir·LABELSPOS(path)

Table 3: Final results.

Table 4: Sample output sentences.
SOURCE: 我 现在 头痛 的 厉害 。 REF: I have a terrible headache . OURS: now , I have a headache .
SOURCE: 我 要 带 浴缸 的 双人房 。 REF: I 'd like a twin room with a bath please . OURS: a twin room , I 'll find a room with a bath .
SOURCE: 请 把 日元 兑换 成 美元 。 REF: can you change yen into dollars ? OURS: please change yen into dollars .
SOURCE: 请 给 我 烤鸡 。 REF: roast chicken , please . OURS: please have roast chicken .
SOURCE: 请 每 次 饭 后 吃 两 粒 。 REF: take two tablets after every meal . OURS: please eat after each meal .
SOURCE: 请 结帐 。 REF: check , please . OURS: I have to check -out , please .
SOURCE: 对 呀 那 是 本店 最 拿手 的 菜 啊 。 REF: yes , well , that 's our specialty . OURS: ah , the food that 's right .
SOURCE: 空调 坏 了 。 REF: my air conditioner is n't working . OURS: the air -conditioner does n't work .

Footnotes:
1 International Workshop on Spoken Language Translation, http://iwslt2010.fbk.eu
2 This led to the ignoring of over 40% of the training sentence pairs. For future work, we will consider substitute oracles from reachable target derivations by using maximum sentence-level BLEU approximation (Nakov et al., 2012) or METEOR (Denkowski and Lavie, 2011) as selection criteria.
3 http://www.nlplab.com/NiuPlan/NiuTrans.ch.html
4 http://sourceforge.net/projects/zpar/
5 http://sourceforge.net/apps/mediawiki/irstlm

Acknowledgement
The work has been supported by the Singapore Ministry of Education Tier 2 project T2MOE201301 and the startup grant SRG ISTD 2012 038 from SUTD. We thank the anonymous reviewers for their constructive comments.

References
Michael Auli, Adam Lopez, Hieu Hoang, and Philipp Koehn. 2009. A systematic analysis of translation model search spaces. In Proc. WMT, pages 224-232.
Srinivas Bangalore, Patrick Haffner, and Stephan Kanthak. 2007. Statistical machine translation through global lexical selection and sentence reconstruction. In Proc. ACL, pages 152-159.
Phil Blunsom, Trevor Cohn, and Miles Osborne. 2008. A discriminative latent variable model for statistical machine translation. In Proc. ACL, pages 200-208.
Peter F. Brown, Stephen Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-311.
Pi-Chuan Chang and Kristina Toutanova. 2007. A discriminative syntactic word order model for machine translation. In Proc. ACL, pages 9-16.
Hongshen Chen, Jun Xie, Fandong Meng, Wenbin Jiang, and Qun Liu. 2014. A dependency edge-based transfer model for statistical machine translation. In Proc. COLING 2014, pages 1103-1113.
David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proc. ACL, pages 263-270.
Adrià de Gispert, Marcus Tomalin, and Bill Byrne. 2014. Word ordering with phrase-based grammars. In Proc. EACL, pages 259-268.
Michael Denkowski and Alon Lavie. 2011. Meteor 1.3: Automatic metric for reliable optimization and evaluation of machine translation systems. In Proc. WMT, pages 85-91.
Marcello Federico, Nicola Bertoldi, and Mauro Cettolo. 2008. IRSTLM: an open source toolkit for handling large scale language models. In Proc. Interspeech, pages 1618-1621.
Jeffrey Flanigan, Chris Dyer, and Jaime Carbonell. 2013. Large-scale discriminative training for statistical machine translation using held-out line search. In Proc. NAACL, pages 248-258.
Heidi Fox. 2002. Phrasal cohesion and statistical machine translation. In Proc. EMNLP, pages 304-311.
Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What's in a translation rule? In Proc. HLT-NAACL, pages 273-280.
Isao Goto, Masao Utiyama, and Eiichiro Sumita. 2012. Post-ordering by parsing for Japanese-English statistical machine translation. In Proc. ACL, pages 311-316.
Wei He, Haifeng Wang, Yuqing Guo, and Ting Liu. 2009. Dependency based Chinese sentence realization. In Proc. ACL/AFNLP, pages 809-816.
Kevin Knight. 1999. Squibs and Discussions: Decoding Complexity in Word-Replacement Translation Models. Computational Linguistics, 25(4):607-615.
Philipp Koehn. 2010. Statistical Machine Translation. Cambridge University Press.
P. Liang, A. Bouchard-Cote, D. Klein, and B. Taskar. 2006. An end-to-end discriminative approach to machine translation. In Proc. COLING/ACL, pages 761-768.
Yang Liu, Qun Liu, and Shouxun Lin. 2006. Tree-to-string alignment template for statistical machine translation. In Proc. COLING/ACL, pages 609-616.
Daniel Marcu, Wei Wang, Abdessamad Echihabi, and Kevin Knight. 2006. SPMT: Statistical machine translation with syntactified target language phrases. In Proc. EMNLP, pages 44-52.
Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.
Preslav Nakov, Francisco Guzman, and Stephan Vogel. 2012. Optimizing for sentence-level BLEU+1 yields short translations. In Proc. COLING, pages 1979-1994.
Mark-Jan Nederhof and Giorgio Satta. 2004. IDL-expressions: a formalism for representing and parsing finite languages in natural language processing. Journal of Artificial Intelligence Research (JAIR), 21:287-317.
Franz Josef Och, Christoph Tillmann, and Hermann Ney. 1999. Improved alignment models for statistical machine translation. In Proc. EMNLP, pages 20-28.
Chris Quirk, Arul Menezes, and Colin Cherry. 2005. Dependency treelet translation: Syntactically informed phrasal SMT. In Proc. ACL, pages 271-279.
Libin Shen and Aravind Joshi. 2008. LTAG dependency parsing with bidirectional incremental construction. In Proc. EMNLP, pages 495-504.
Linfeng Song, Yue Zhang, Kai Song, and Qun Liu. 2014. Joint morphological generation and syntactic linearization. In Proc. AAAI, pages 1522-1528.
Radu Soricut and Daniel Marcu. 2006. Stochastic language generation using WIDL-expressions and its application in machine translation and summarization. In Proc. ACL, pages 1105-1112.
Stephen Wan, Mark Dras, Robert Dale, and Cécile Paris. 2009. Improving grammaticality in statistical sentence generation: Introducing a dependency spanning tree algorithm with an argument satisfaction model. In Proc. EACL, pages 852-860.
Michael White and Rajakrishnan Rajkumar. 2009. Perceptron reranking for CCG realization. In Proc. EMNLP, pages 410-419.
Tong Xiao, Jingbo Zhu, Hao Zhang, and Qiang Li. 2012. NiuTrans: An open source toolkit for phrase-based and syntax-based machine translation. In Proc. ACL Demos, pages 19-24.
Jun Xie, Haitao Mi, and Qun Liu. 2011. A novel dependency-to-string model for statistical machine translation. In Proc. EMNLP, pages 216-226.
Heng Yu, Liang Huang, Haitao Mi, and Kai Zhao. 2013. Max-violation perceptron and forced decoding for scalable MT training. In Proc. EMNLP, pages 1112-1123.
Yue Zhang and Stephen Clark. 2011. Syntax-based grammaticality improvement using CCG and guided search. In Proc. EMNLP, pages 1147-1157.
Yue Zhang, Graeme Blackwood, and Stephen Clark. 2012. Syntax-based word ordering incorporating a large-scale language model. In Proc. EACL, pages 736-746.
Yue Zhang. 2013. Partial-tree linearization: Generalized word ordering for text synthesis. In Proc. IJCAI, pages 2232-2238.
15,691,181
Merging Verb Senses of Hindi WordNet using Word Embeddings
In this paper, we present an approach for merging fine-grained verb senses of Hindi WordNet. Senses are merged based on a gloss similarity score. We explore the use of word embeddings for gloss similarity computation and compare with various WordNet based gloss similarity measures. Our results indicate that word embeddings show significant improvement over WordNet based measures. Consequently, we observe an increase in accuracy when merging fine-grained senses. The gold standard data constructed for our experiments is made available.
[ 777346, 6561801, 1359050, 886027, 340173, 18597583, 6481231, 2276401, 6254130, 14687186, 7803700, 15599540, 1499545, 5959482, 18193242 ]
Merging Verb Senses of Hindi WordNet using Word Embeddings
Sudha Bhingardive, Ratish Puduppully (ratishp@cse.iitb.ac.in), Dhirendra Singh (dhirendra@cse.iitb.ac.in) and Pushpak Bhattacharyya
Department of Computer Science and Engineering, IIT Bombay, Powai, Mumbai 400076
In D S Sharma, R Sangal and J D Pawar, editors, Proceedings of the 11th International Conference on Natural Language Processing, Goa, India. NLP Association of India (NLPAI), December 2014.
Introduction
Hindi WordNet1 (HWN) is the first Indian language WordNet. It was created manually from the Princeton WordNet2 (Fellbaum, 1998) using the expansion approach, and similarly 16 other Indian language WordNets were created from Hindi. This linked structure of Indian language WordNets is known as IndoWordNet3 (Bhattacharya, 2010), shown in Figure 1.
(Figure 1: IndoWordNet)
The structure of HWN is similar to the Princeton WordNet. It is composed of synsets and semantic relations. A synset is a set of synonyms representing the same concept. Synsets are linked with basic semantic relations viz., hypernymy, hyponymy, meronymy, holonymy, troponymy etc. In comparison with the Princeton WordNet, HWN provides extra relations, e.g., gradation, causative, compounds, conjunction etc. HWN is widely used in Natural Language Processing (NLP) applications viz., Machine Translation (Ananthakrishnan et al., 2008; Kunchukuttan et al., 2012), Word Sense Disambiguation (Khapra et al., 2010; Bhingardive et al., 2013), Sentiment Analysis (Balamurali et al., 2012; Popat et al., 2013) etc. Over-specified sense distinctions in HWN may not be useful for certain applications; hence, generating a coarse-grained version of HWN is a crucial task in order to get better results for such applications. In this paper, we present a method for merging the fine-grained senses of HWN using gloss similarity, computed with word embeddings. The presented method performs better compared to the baselines.
The paper is organised as follows. Section 2 describes the sense granularity that exists in HWN. Section 3 presents the related work. Section 4 gives details about word embeddings. The sense merging approach is given in Section 5. Experiments and results are presented in Section 6. Error analysis is given in Section 7. Section 8 concludes the paper and points to future work.
Hindi WordNet Sense Granularity
Different applications need different types of sense granularity. Fine-grained sense distinctions are suitable for language learners and applications like Document Categorization, Information Retrieval, Information Extraction etc. However, coarse-grained senses are sufficient for applications like Machine Translation and Word Sense Disambiguation. The main difficulty arises in finding consistent criteria for making accurate sense distinctions.
HWN has many fine-grained senses. For example, there are six senses of the word (phUmkanA), which can be merged into three sense groups as shown in Table 1.
Table 1: Fine-grained senses of the verb (phUmkanA); six senses can be merged into three sense groups.
Group "to blow": 1. (mumha bahuta thodA khulA rakhakar havA bAhar nikAlanA) [blow air through a very small opening of the mouth]; 2. (mukha se bajAye jAne wAle bAjom ko phumkakara bajAnA) [blowing the instruments that are played by mouth].
Group "to ignite": 3. (phUmka mAra kara dahakAnA yA prajjvalita karanA) [ignite by blowing]; 4. (Aga ke sanyoga se kisI vastu ko jalane mem pravarutt karanA) [to burn something with fire]; 5. (Aga lagAnA) [to burn].
Group "to smoke": 6. (tambAkU, gAnje Adi kA dhumA mumha se khINcakara bAhara nikAlanA) [to exhale the smoke of tobacco etc. after inhaling].
Hindi senses are distinguished depending on different types of linguistic properties, like properties of the subject, object, time variations, compulsion, mode of communication, visibility, acts, existence etc. Some of them are listed in Table 2 and explained below:
• Subject property: Senses can be distinguished depending on the properties of the subject. Consider the word (kAtanA), which has two senses, S1 (to cut) and S2 (insect bite), as shown in Table 2. In S1 the subject will always be an animate entity (a human being), while in S2 it will always be an insect.
• Object property: The object property can also help in making sense distinctions. For example, the word (rakhanA) has two senses, S1 (to put) and S2 (to present), as shown in Table 2, in which S1 can take either an animate or an inanimate object while S2 can take only an abstract object.
• Compulsion: In some cases, senses are distinguished depending on the force of the action. For example, the word (nikAlanA) has two senses, S1 (to remove from a post) and S2 (to forcefully remove from a post), distinguished by the force of the action. Word Sense Disambiguation algorithms often fail in making such fine distinctions.
• Time period: Consider the senses of the word (dina). There are nine senses in total, out of which three senses (ref. Table 2) differ only in time period.
Fine-grained sense distinctions are very difficult to capture programmatically. Sometimes even humans fail in making such distinctions. Hence, for applications which do not need fine-grained senses, a coarse-grained version of HWN is essential.
Related Work
Recently, a large number of sense clustering techniques have been proposed. These techniques rely on various information resources like ontological structure, external corpora, translation similarities, supervision etc. The WordNet ontology structure is very helpful for merging fine-grained word senses. Various synset similarity measures have been proposed, viz., path based similarity (Wu and Palmer, 1994; Leacock and Chodorow, 1998), information content based measures (Resnik, 1995; Lin, 1998; Jiang and Conrath, 1997), gloss based heuristics (Lesk, 1986; Banerjee and Pedersen, 2003) etc. These measures were used for creating coarse-grained senses. Dolan (1994) first used ontological information for sense clustering, presenting a heuristic based algorithm for clustering senses of the Longman Dictionary of Contemporary English (LDOCE). Peters (1998) addressed different ways of reducing the fine-grainedness of EuroWordNet; in his approach, senses were grouped depending on semantic relations like sisters, twins, cousins, autohyponymy etc. Mihalcea and Moldovan (2001) derived a set of semantic and probabilistic rules for reducing average polysemy. This was the first attempt at grouping synsets rather than word senses; the resulting version of WordNet leads to a reduction of polysemy by around 26% with an error rate of 2.1%. Tomuro (2001) used a similar approach but introduced more principled algorithms. Agirre and Lacalle (2003) presented a clustering technique which uses confusion matrices, translation similarities, hand-tagged examples of the target word senses and other web information. McCarthy (2006) used a combination of word-to-word distributional similarity along with WordNet based similarity measures for sense clustering. Bhagwani et al. (2013) proposed a semi-supervised approach which learns synset similarity by using graph based recursive similarity; the resulting coarse-grained sense inventory boosts the performance of noun sense disambiguation.
Chugur et al. (2002) used translational equivalences of word senses for sense merging: two word senses are expected to be similar if they lead to the same translation in other languages. Several sense clustering attempts were made by mapping WordNet to other sense inventories either manually or automatically. Navigli (2006) proposed a sense clustering method by mapping WordNet senses to the Oxford English Dictionary (OED). Palmer et al. (2007) suggested a semi-automatic technique for verb sense grouping using Levin class theory. Snow et al. (2007) proposed a supervised approach using Support Vector Machines in which features were derived from WordNet and other lexical resources.
Due to the shallow hierarchy of verbs in WordNet, the knowledge based measures which exploit the ontology structure are ineffective for sense merging. We therefore make use of the gloss to infer fine-grained senses, and investigate the use of word embeddings for gloss similarity computation.
Word Embeddings
Word embeddings are increasingly being used in a variety of NLP tasks. Collobert et al. (2011) used word embeddings for POS tagging, Named Entity Recognition and Semantic Role Labeling. Such embeddings have also been used in Sentiment Analysis (Tang et al., 2014), Word Sense Induction (Huang et al., 2012), Dependency Parsing (Bansal et al., 2014) and Constituency Parsing (Socher et al., 2013). Word embeddings represent each word with a low-dimensional real-valued vector. Such models work under the assumption that similar words occur in similar contexts (Harris, 1968). Word embeddings have been used for textual similarity computation (Mihalcea et al., 2006). We use word embeddings for finding the gloss similarity between synsets; the fine-grained senses can then be merged based on the similarity values.
Word embeddings have been trained using the word2vec4 tool (Mikolov et al., 2013); see the sketch below. word2vec provides two broad techniques for word vector generation: Continuous SkipGram and Continuous Bag of Words (CBOW). CBOW predicts the current word based on the surrounding context, whereas the Continuous SkipGram model tries to maximize classification of a word based on another word in the same sentence (Mikolov et al., 2013). The approach followed here uses the SkipGram model while varying the context window size (w). Like Bansal et al. (2014), we find that a lower window size results in syntactically similar words; as the window size increases, more semantically similar words are listed. For our experiments we fixed the window size at w = 7, as we are interested in more semantically similar words. The word vectors have been trained on a 44M-sentence corpus (Bojar et al., 2014); creating the word embeddings took a few minutes on a 2X2 GHz machine.
Sense Merging Approach
Consider, for example, two senses of the verb (daranA), the second of which is glossed as "nervousness due to feeling of loss or premonition". These two senses are too fine-grained. Lesk similarity (Lesk, 1986) and Extended Lesk similarity (Banerjee and Pedersen, 2003) come out to be zero, as there is no gloss overlap and no relation between these two senses in HWN. Therefore, instead of finding the gloss overlap, the approach followed here is to find whether words from the two glosses are semantically related or not.
Mihalcea Text Similarity using Word Embeddings
We used the word embeddings generated using word2vec (ref. Section 4) for finding the semantic similarity between words from two glosses. We leverage the text similarity measure proposed by Mihalcea et al. (2006) for gloss similarity computation. It considers both word-to-word similarity and word specificity. Word specificity indicates whether a word is specific or generic; the specificity of a word is measured using inverse document frequency (idf) (Sparck-Jones et al., 1972).
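Before idf is defined formally below, here is a minimal sketch of the Section 4 training setup (SkipGram, window w = 7, via gensim's word2vec implementation) together with a maxSim helper used by the similarity measures. The corpus file name and tokenization are assumptions, not the paper's exact pipeline:

```python
# Illustrative sketch of Section 4's training setup and the maxSim
# helper; the file name and tokenization are assumptions.
from gensim.models import Word2Vec

sentences = [line.split() for line in open("hindi_corpus.txt", encoding="utf-8")]
# sg=1 selects SkipGram; `vector_size` is the gensim 4.x name
# (older gensim releases call the same parameter `size`).
model = Word2Vec(sentences, sg=1, window=7, vector_size=100, min_count=5)

def max_sim(word, text, wv):
    """maxSim(w, T): highest cosine similarity between `word` and any
    word of text segment `text`; 0 if nothing is in the vocabulary."""
    if word not in wv:
        return 0.0
    sims = [wv.similarity(word, t) for t in text if t in wv]
    return max(sims) if sims else 0.0
```

After training, `model.wv` is passed as `wv` to `max_sim`.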
idf is defined as the total number of documents in the corpus divided by the total number of documents including that word. We used the Hindi Wikipedia dump for obtaining idf; each Wikipedia page is treated as a single document. The text similarity measure given in Equation 1 compares two text segments T1 and T2 for semantic similarity. For each word w in T1, it finds the word in T2 with which it has the maximum similarity, maxSim(w, T2), where maxSim(w, Ti) is computed on word embeddings by finding the maximum cosine similarity between w and the words in Ti. The process is repeated for each word in T2 with respect to T1. The similarities are weighted by idf values, summed up and normalized with respect to the length of the text segment. The similarity scores obtained are values between 0 and 1, where 0 indicates least similarity and 1 indicates maximum similarity.

$$sim(T_1, T_2) = \frac{1}{2}\left(\frac{\sum_{w \in T_1} maxSim(w, T_2) \cdot idf(w)}{\sum_{w \in T_1} idf(w)} + \frac{\sum_{w \in T_2} maxSim(w, T_1) \cdot idf(w)}{\sum_{w \in T_2} idf(w)}\right) \quad (1)$$

Compositional Text Semantic Similarity Using Word Embeddings
In this approach, we consider the word embedding of a text segment T as compositionally obtained from those of its words. The principle is that the meaning of a sentence is derived from its constituent words. This is the Weighted Addition model of Mitchell and Lapata (2008). For this system, we construct word embeddings for each text segment as in Equation 2:

$$vec(T_1) = \sum_{w \in T_1} vec(w) \cdot idf(w) \quad (2)$$
$$sim(T_1, T_2) = cosine(vec(T_1), vec(T_2)) \quad (3)$$

where vec(T) is the word embedding for text segment T.
Experiments and Results
For the purpose of experiments, we created gold standard data. It consists of 250 verbs, each with two senses. The test set verbs were tagged as mergeable or not. Five annotators worked independently and created this data with 0.8 inter-annotator agreement. This data is released for further experimentation.6
We compare our approach with the WordNet based gloss similarity measures listed below (a consolidated code sketch follows this list):
• Lesk with idf: Senses are merged based on the word overlap between glosses (Lesk, 1986), with idf weighting applied to them. For this, we use Equation 1 with maxSim defined as follows: maxSim(w, T_i) = 1 if w ∈ T_i, and 0 otherwise.
• Lesk without idf: Senses are merged based on the word overlap between glosses (Lesk, 1986) without idf weighting. The following equation, derived from Equation 1, is used, with maxSim as defined in Lesk with idf:

$$sim(T_1, T_2) = \frac{1}{2}\left(\frac{\sum_{w \in T_1} maxSim(w, T_2)}{count(T_1)} + \frac{\sum_{w \in T_2} maxSim(w, T_1)}{count(T_2)}\right) \quad (4)$$

• Path Length Measure: It measures the similarity between two synsets depending on the number of links existing in the is-a hierarchy of WordNet:

$$sim_{path} = \frac{1}{shortest\_path\_length(S_1, S_2)} \quad (5)$$

where S1 and S2 are synsets. The shorter the path between them, the more related they are considered; there is thus an inverse relation between path length and similarity. This sim_path value is substituted into Equation 1.
• The Leacock-Chodorow similarity (Leacock and Chodorow, 1998) is determined as:

$$sim_{LCH} = -\log\frac{shortest\_path\_length(S_1, S_2)}{2D} \quad (6)$$

where D is the maximum depth of the taxonomy. This sim_LCH value is substituted into Equation 1.
• The Wu and Palmer (1994) similarity metric measures the depth of the two given synsets in the WordNet taxonomy and the depth of their least common subsumer (LCS), and combines these figures into a similarity score:

$$sim_{WUP} = \frac{2 \cdot depth(LCS)}{depth(S_1) + depth(S_2)} \quad (7)$$

This sim_WUP value is substituted into Equation 1.
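Pulling the measures together, here is a hedged sketch of the two embedding-based measures (Equations 1-3), reusing the `max_sim` helper from the earlier sketch; `idf` is assumed to be a dict mapping words to inverse document frequencies. The commented lines show how WordNet baselines of the kind listed above can be obtained from NLTK's English WordNet interface, which is an illustrative substitute, not the setup used over HWN:

```python
# Sketch of Equations 1-3; `max_sim` comes from the previous sketch.
import numpy as np

def mihalcea_sim(t1, t2, wv, idf):
    # Equation 1: idf-weighted, bidirectional maxSim average.
    def directed(a, b):
        num = sum(max_sim(w, b, wv) * idf.get(w, 1.0) for w in a)
        den = sum(idf.get(w, 1.0) for w in a)
        return num / den if den else 0.0
    return 0.5 * (directed(t1, t2) + directed(t2, t1))

def compositional_sim(t1, t2, wv, idf):
    # Equations 2-3: idf-weighted sum of word vectors, then cosine.
    def vec(text):
        vs = [wv[w] * idf.get(w, 1.0) for w in text if w in wv]
        return np.sum(vs, axis=0) if vs else np.zeros(wv.vector_size)
    v1, v2 = vec(t1), vec(t2)
    denom = np.linalg.norm(v1) * np.linalg.norm(v2)
    return float(np.dot(v1, v2) / denom) if denom else 0.0

# WordNet baselines, illustrated via NLTK's English WordNet:
# from nltk.corpus import wordnet as wn
# s1, s2 = wn.synsets("blow", pos=wn.VERB)[:2]
# s1.path_similarity(s2); s1.lch_similarity(s2); s1.wup_similarity(s2)
```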
6 https://github.com/sudhabh/SenseMerging
Tables 3, 4 and 5 present Precision, Recall and F-measure for the sense merging techniques with similarity thresholds of 0.7, 0.6 and 0.5, respectively. Here the threshold is the value above which two candidate verb senses are considered similar; the similarity values range from 0 to 1. From the results, we observe that decreasing the similarity threshold leads to an increase in recall with a corresponding decrease in precision. Figure 2 and Figure 3 show the variation in F-measure across the range of similarity thresholds. From the figures, again we observe that techniques based on word embeddings perform much better than techniques based on WordNet similarity measures with regard to F-measure.
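To make the thresholded merging decision concrete, here is a minimal sketch; `mihalcea_sim` and the `wv`/`idf` resources come from the earlier sketches, and the default threshold matches one of the values swept in the tables:

```python
# Minimal sketch of the merging decision evaluated in Tables 3-5:
# two candidate verb senses are merged when their gloss similarity
# reaches the chosen threshold.
def should_merge(gloss1, gloss2, wv, idf, threshold=0.6):
    sim = mihalcea_sim(gloss1.split(), gloss2.split(), wv, idf)
    return sim >= threshold
```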
Error Analysis
Our approach suffers from some limitations, listed below.
1. Sometimes the gloss semantic similarity score is very high even though the word senses are not similar, which leads to an incorrect sense merging. Consider the two senses of the word (pahuchanA) listed below:
• S1: {pahumchanA, pahunchanA, failenA}: (kisI sthAn tak failenA) [to extend up to a place]
• S2: (kisI pad, sthAn aadI tak pahunchanA) [to reach a position or a place]
Senses S1 and S2 are not similar, but they have a high semantic similarity score, resulting in an incorrect sense merging. This might have happened because (sthAn) is common between the two glosses and (pahunchanA) is semantically similar to (failenA) in the corpus.
2. Another source of error is disparity in idf values due to multiple ways of expressing Hindi word forms. For example, as seen in S1 and S2 above, (pahumchanA) and (pahunchanA) are two ways of writing the same word. This results in a split in their counts and a consequent change in idf value.
Conclusion and Future Work
We conclude that word embeddings are indeed effective in computing gloss similarity and can be used to merge fine-grained senses of Hindi WordNet. We report a significant performance improvement with word embeddings over WordNet based similarity measures. The resulting coarse-grained verb senses of Hindi WordNet are an important resource for applications which do not prefer fine-grained sense distinctions.
In future, we will perform evaluation on verbs having more than two senses. The same technique can also be applied for merging senses of other Indian language WordNets. We plan to use the coarse-grained senses in both Rule Based and Statistical Machine Translation systems and conduct experiments to verify the increase in translation accuracy. The Weighted Addition model for compositional meaning is agnostic to the syntax of the sentence; we plan to explore additional models of representing phrases and sentences, such as the lexical function model of Paperno et al. (2014).

Table 2: Hindi WordNet Sense Distinction.

Table 3: Sense merging results with similarity threshold ≥ 0.7.
Sense Merging Technique | Precision | Recall | F-measure
Mihalcea Text Similarity Using Word Embeddings | 0.95 | 0.54 | 0.69
Compositional Text Similarity Using Word Embeddings | 0.75 | 0.54 | 0.63
Lesk with idf | 0.97 | 0.29 | 0.45
Lesk without idf | 0.86 | 0.29 | 0.44
Path Similarity | 0.90 | 0.24 | 0.38
WUP | 0.82 | 0.21 | 0.33
LCH | 0.43 | 0.28 | 0.34

Table 4: Sense merging results with similarity threshold ≥ 0.6.
Sense Merging Technique | Precision | Recall | F-measure
Mihalcea Text Similarity Using Word Embeddings | 0.74 | 0.58 | 0.65
Compositional Text Similarity Using Word Embeddings | 0.67 | 0.69 | 0.68
Lesk with idf | 0.96 | 0.36 | 0.52
Lesk without idf | 0.76 | 0.38 | 0.51
Path Similarity | 0.82 | 0.27 | 0.41
WUP | 0.61 | 0.24 | 0.35
LCH | 0.39 | 0.34 | 0.36

Table 5: Sense merging results with similarity threshold ≥ 0.5.

Figure 2: Plot of F-measure of word embedding based measures (Rada and Compositional) and WordNet similarity based measures (Lesk with idf and Lesk without idf) against various threshold values.
Figure 3: Plot of F-measure of word embedding based measures (Rada and Compositional) and WordNet based measures (WUP, Path and LCH) against various threshold values.

Footnotes:
1 http://www.cfilt.iitb.ac.in/wordnet/webhwn/wn.php
2 http://wordnet.princeton.edu/
3 IndoWordNet is available in the following Indian languages: Assamese, Bodo, Bengali, English, Gujarati, Hindi, Kashmiri, Konkani, Kannada, Malayalam, Manipuri, Marathi, Nepali, Punjabi, Sanskrit, Tamil, Telugu and Urdu. These languages cover three different language families: Indo-Aryan, Sino-Tibetan and Dravidian. http://www.cfilt.iitb.ac.in/indowordnet
4 https://code.google.com/p/word2vec/

References
Agirre, E. and Lacalle. 2003. Clustering WordNet word senses. In RANLP, volume 260, pages 121-130.
Ananthakrishnan, R., Hegde, J., Bhattacharyya, P. and Sasikumar, M. 2008. Simple Syntactic and Morphological Processing Can Help English-Hindi Statistical Machine Translation. In International Joint Conference on NLP (IJCNLP08), Hyderabad, India.
Balamurali, A.R., Joshi, A. and Bhattacharyya, P. 2012. Cross-Lingual Sentiment Analysis for Indian Languages using WordNet Synsets. In COLING, Mumbai.
References

Agirre E. and Lacalle. 2003. Clustering wordnet word senses. In RANLP, volume 260, pages 121-130.
Ananthakrishnan R., Hegde J., Bhattacharyya P. and Sasikumar M. 2008. Simple Syntactic and Morphological Processing Can Help English-Hindi Statistical Machine Translation. In International Joint Conference on NLP (IJCNLP08), Hyderabad, India.
Balamurali A.R., Joshi A. and Bhattacharyya P. 2012. Cross-Lingual Sentiment Analysis for Indian Languages using WordNet Synsets. In COLING, Mumbai.
Banerjee S. and Pedersen T. 2003. Extended Gloss Overlaps as a Measure of Semantic Relatedness. In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence, Acapulco, Mexico.
Bansal M., Gimpel K. and Livescu K. 2014. Tailoring Continuous Word Representations for Dependency Parsing. In Proceedings of ACL 2014.
Bhagwani S., Satapathy S. and Karnick H. 2013. Merging Word Senses. In Proceedings of the TextGraphs-8 Workshop, Association for Computational Linguistics, USA.
Bhattacharya P. 2010. IndoWordNet. In Lexical Resources Engineering Conference (LREC 2010), Malta.
Bhingardive S., Shaikh S. and Bhattacharyya P. 2013. Neighbor's Help: Bilingual Unsupervised WSD Using Context. In ACL 2013, Sofia, Bulgaria.
Bojar O., Diatka V., Rychlý P., Straňák P., Suchomel V., Tamchyna A. and Zeman D. 2014. HindEnCorp - Hindi-English and Hindi-only Corpus for Machine Translation. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14).
Fellbaum C. 1998. WordNet: An Electronic Lexical Database. MIT Press.
Chugur I., Gonzalo J. and Verdejo F. 2002. Polysemy and sense proximity in the senseval-2 test suite. In Proceedings of the ACL 2002 WSD workshop.
Collobert R., Weston J., Bottou L., Karlen M., Kavukcuoglu K. and Kuksa P. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493-2537.
Dolan W. 1994. Word sense ambiguation: clustering related senses. In Proceedings of the 15th Conference on Computational Linguistics - Volume 2, COLING, pages 712-716, Stroudsburg, PA, USA. Association for Computational Linguistics.
Harris Z. 1968. Mathematical Structures of Language. Wiley, New York.
Huang E.H., Socher R., Manning C.D. and Ng A.Y. 2012. Improving Word Representations via Global Context and Multiple Word Prototypes. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1, pages 873-882.
Jiang J. and Conrath D. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. In Proceedings of the International Conference on Research in Computational Linguistics.
Khapra M., Shah S., Kedia P. and Bhattacharyya P. 2010. Domain-Specific Word Sense Disambiguation Combining Corpus Based and WordNet Based Parameters. In 5th International Conference on Global WordNet (GWC 2010), Mumbai.
Kunchukuttan A., Roy S., Patel P., Ladha K., Gupta S., Khapra M. and Bhattacharyya P. 2012. Experiences in Resource Generation for Machine Translation through Crowdsourcing. In Lexical Resources Engineering Conference (LREC 2012), Istanbul, Turkey.
Leacock C. and Chodorow M. 1998. Combining local context and WordNet similarity for word sense identification. In Fellbaum, C., ed., WordNet: An electronic lexical database. MIT Press, pages 265-283.
Lesk M. 1986. Automatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone. In Proceedings of the ACM SIGDOC Conference, pages 24-26, Toronto, Canada.
Lin D. 1998. An information-theoretic definition of similarity. In Proceedings of the International Conference on Machine Learning.
McCarthy D. 2006. Relating WordNet Senses for Word Sense Disambiguation. In Proceedings of the ACL Workshop on Making Sense of Sense.
Mihalcea R. and Moldovan D. 2001. Ez.wordnet: principles for automatic generation of a coarse grained wordnet. In Proceedings of FLAIRS 2001, pages 454-459.
Mihalcea R., Corley C. and Strapparava C. 2006. Corpus-based and Knowledge-based Measures of Text Semantic Similarity. In Proceedings of the American Association for Artificial Intelligence (AAAI 2006), Boston, July 2006.
Mikolov T., Chen K., Corrado G. and Dean J. 2013. Efficient Estimation of Word Representations in Vector Space. In Proceedings of Workshop at ICLR, 2013.
Mitchell J. and Lapata M. 2008. Vector-based models of semantic composition. In Proceedings of ACL, pages 236-244, Columbus, OH.
Navigli R. 2006. Meaningful clustering of senses helps boost word sense disambiguation performance. In Proceedings of COLING-ACL, pages 105-112.
Palmer M., Dang H. and Fellbaum C. 2007. Making fine-grained and coarse-grained sense distinctions, both manually and automatically. Natural Language Engineering.
Paperno D., Pham N.T. and Baroni M. 2014. A practical and linguistically-motivated approach to compositional distributional semantics. In Proceedings of ACL 2014.
Pedersen T., Patwardhan S. and Michelizzi J. 2004. WordNet::Similarity - Measuring the Relatedness of Concepts. In Proceedings of the Fifth Annual Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL-04), pages 38-41, Boston, MA.
Peters W., Peters I. and Vossen P. 1998. Automatic sense clustering in EuroWordNet. In Proceedings of the First International Conference on Language Resources and Evaluation, Granada, Spain, pages 409-416.
Popat K., Balamurali A., Bhattacharyya P. and Haffari G. 2013. The Haves and the Have-Nots: Leveraging Unlabelled Corpora for Sentiment Analysis. In ACL 2013, Sofia, Bulgaria.
Resnik P. 1995. Using information content to evaluate semantic similarity in a taxonomy. In Proceedings of the 14th International Joint Conference on Artificial Intelligence - Volume 1, IJCAI'95, pages 448-453, San Francisco, CA, USA.
Snow R., Prakash S., Jurafsky D. and Ng A. 2007. Learning to Merge Word Senses. In Proceedings of the Joint Meeting of the Conference on Empirical Methods on Natural Language Processing and the Conference on Natural Language Learning.
Socher R., Bauer J., Manning C.D. and Ng A.Y. 2013. Parsing With Compositional Vector Grammars. In Proceedings of the ACL Conference, 2013.
Sparck-Jones K. 1972. A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 28(1):11-21.
Tang D., Wei F., Yang N., Zhou M., Liu T. and Qin B. 2014. Learning Sentiment-Specific Word Embedding for Twitter Sentiment Classification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
Tomuro M. 2001. Tree-cut and a lexicon based on systematic polysemy. In Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL).
Wu Z. and Palmer M. 1994. Verb semantics and lexical selection. In 32nd Annual Meeting of the Association for Computational Linguistics, pages 133-138.
Abusive Language Recognition in Russian

Kamil Saitov (Innopolis University, Russian Federation; saitov66@gmail.com) and Leon Derczynski (IT University of Copenhagen, Denmark)

Proceedings of the 8th BSNLP Workshop on Balto-Slavic Natural Language Processing, April 20, 2021

Abusive phenomena are commonplace in language on the web. The scope of recognizing abusive language is broad, covering many behaviours and forms of expression. This work addresses automatic detection of abusive language in Russian. The lexical, grammatical and morphological diversity of Russian presents potential difficulties for this task, which is addressed using a variety of machine learning approaches. We present a dataset and baselines for this task.

Introduction
Unfortunately, hate speech and abusive language are prevalent on the internet (Waseem and Hovy, 2016), often creating an aggressive environment for users. This can include cyber-bullying or threats towards individuals and groups. Reducing this content is difficult: it is harmful for humans to moderate.[1] Thus, there is a critical need for abusive language recognition systems, which would help social networks and forums filter abusive language. Moreover, with platforms taking increased control over which content to surface, automatic abuse recognition is more important than ever.

One problem arises when the subjectivity of the matter is considered. Abusive language is hard for humans to recognize universally (Waseem, 2016). This indicates that the collection and labeling of data should be thorough and objective, which could be reached through e.g. large-scale crowd-sourced data annotation (Sabou et al., 2014).

NLP research in the area is nascent, with existing solutions oriented mostly towards the English language (Vidgen and Derczynski, 2020), which, despite sometimes being mistakenly considered "universal" (Bender, 2019), is very different grammatically and lexically from many languages, especially those using non-Latin characters (e.g. Russian, Japanese, etc.). This paper addresses abusive language detection in Russian. One issue with recognition of abusive language in Russian is the limited number of sources of labeled data relative to English (Andrusyak et al., 2018; Zueva et al., 2020; Smetanin, 2020; Potapova and Gordeev, 2016). Thus, the collection and labeling of data presents an additional challenge, and we present both dataset and models.

Abusive Language Definition
We use the OLID annotation definition of abusive language (Zampieri et al., 2019). This covers profanity, and targeted and untargeted insults and threats, against both groups and individuals. Specifically, in accordance with this scheme, we consider the use of racial and other group-targeted slurs abusive.

Dataset
Data collection
We searched for publicly available datasets containing considerable amounts of abusive language. Russian Troll Tweets is a repository consisting of 3 million tweets.[2] This was filtered to only Cyrillic texts. This data is not labeled, so a subset of the data was labeled manually for use in this research. During labeling, the data turned out to contain significantly less abusive language than expected. An additional resource, the RuTweetCorp (Rubtsova, 2013),[4] was also annotated for abusive language.
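The Cyrillic filtering step is simple to reproduce. The sketch below keeps only texts containing at least one Cyrillic character; the exact rule the authors used is not specified, so the regex and the "at least one character" criterion are assumptions.

```python
# Minimal sketch (assumed filtering rule): keep texts that contain Cyrillic letters.
import re

CYRILLIC = re.compile(r"[\u0400-\u04FF]")  # basic Cyrillic Unicode block

def keep_cyrillic(texts):
    """Filter an iterable of strings down to those with at least one Cyrillic character."""
    return [t for t in texts if CYRILLIC.search(t)]

tweets = ["privet", "Привет, мир!", "hello world"]
print(keep_cyrillic(tweets))  # ['Привет, мир!']
```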
In search of sources rich in abusive language, the "South Park" TV show was found. Its Russian subtitles embody a high density of profanity, hate speech, racism, sexism, and various examples of ethnicity- and nationality-targeted abuse. The subtitles from more than four seasons of the series yielded many instances of abusive language. This data, Russian South Park (RSP), was annotated manually. Inter-annotator agreement (IAA; computed with Cohen's Kappa) over the whole dataset is 0.68 among three L1[3] Russian annotators.

To complement this, the Kaggle "Russian Language Toxic Comments" dataset (RTC) was also annotated. The dataset contains more than 14,000 labeled samples of hate speech. In Section 4, the performance of models trained on RSP data will be compared to that of models trained with RTC included. More than 1,500 samples are in the RSP dataset, and more than 15,000 samples in total once the RTC data is added.

As in much other abusive language research, an abusive language lexicon was also constructed. The text data collected previously contained a fair amount of such vocabulary; however, the dictionary should not be limited to the dataset. HateBase (Tuckwood, 2017) contains only 17 abusive Russian words. VK, the largest social network in Russia and the CIS, has an abusive speech filter dictionary published unofficially, containing a large lexicon of abusive words. Another source is russki-mat,[5] an open dictionary of Russian curse words with proper explanations and examples of usage. Overall, the multiple-source lexicon built contains more than 700 unique terms. As can be seen from Table 2, abuse-bearing sentences contain four times more upper-cased words and 25 times more abusive words than non-abusive sentences.

Data Preprocessing
The stages of pre-processing are the following (a minimal sketch of this pipeline is given after the list):

1. Balance the dataset. The initial no-hate/hate distribution is 1078/307 for the RSP dataset and 8815/5597 for the RSP+RTC dataset. The no-hate portion of the dataset is under-sampled so that the classes are balanced.
2. Strip URLs. Remove links from the texts.
3. Adjust platform-specific text. Twitter mentions, hashtags and retweets are marked by distinct symbols (# for hashtags, @ for mentions and retweets). These tags might hold information on whether a tweet is targeted at a particular person or not.
4. Orthographic normalisation. Replace the Russian ё and Ё with the corresponding е and Е. These letters are mostly interchangeable in the Russian language, so this is a standard preprocessing routine when working with Russian text data.
5. Tokenization. Split the sentences into separate words and punctuation. The tokenization is done with the NLTK library's word_tokenize() method.
6. Lemmatize terms. Lemmatization reduces a word to its normal form. For Russian, most researchers prefer stemming over lemmatization; however, if stemming were used, the search for offensive words in sentences would become intractable. Lemmatization is done with pymorphy2 (Korobov, 2015), a morphological analyzer library built specifically for the Russian language.
7. Remove stop words from the text. Such words are common interjections, conjunctions and prepositions that do not need to be seen as features in the later modelling of the data.
8. TF-IDF vectorization. Turn the words into frequency vectors for each sample.
9. Train-test split. Randomly split the prepared data into train and test sets in an 80/20 proportion.
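The following sketch strings steps 2 and 4-8 together. It is an approximation rather than the authors' code; the stop-word list, TF-IDF settings and toy inputs are assumptions. It uses the same libraries the paper names: NLTK for tokenization, pymorphy2 for lemmatization, and scikit-learn for TF-IDF.

```python
# Approximate preprocessing pipeline (steps 2, 4-8 above); not the authors' code.
# Requires the NLTK "punkt" tokenizer data to be downloaded.
import re
from nltk.tokenize import word_tokenize
from sklearn.feature_extraction.text import TfidfVectorizer
import pymorphy2

morph = pymorphy2.MorphAnalyzer()
STOPWORDS = {"и", "в", "не", "на", "что"}  # assumed subset; a full Russian list would be used

def preprocess(text):
    text = re.sub(r"https?://\S+", "", text)            # 2. strip URLs
    text = text.replace("ё", "е").replace("Ё", "Е")     # 4. orthographic normalisation
    tokens = word_tokenize(text, language="russian")    # 5. tokenization
    lemmas = [morph.parse(t)[0].normal_form             # 6. lemmatization
              for t in tokens if t.isalpha()]
    return [l for l in lemmas if l not in STOPWORDS]    # 7. remove stop words

texts = ["Привет, мир! http://example.com", "Всё хорошо"]
vectorizer = TfidfVectorizer(analyzer=preprocess)       # 8. TF-IDF over lemmas
X = vectorizer.fit_transform(texts)
```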
Feature Extraction
Additional features beyond the text itself are included. Since abusive or hateful comments are anticipated to also be negative in sentiment, sentiment analysis is included. The sentiment was automatically predicted for the RTC dataset, for which a FastText (Bojanowski et al., 2017) embedding induced over RuSentiment (Rogers et al., 2018) was used, achieving an F1 of 0.71, high for sentiment classifiers for Russian.

Upper-casing full words is a popular tone-indicating technique (Derczynski et al., 2015). Since one cannot "shout" on the internet, the intent of a higher tone is expressed with upper-casing. Therefore, the number of fully upper-cased words is counted for each sample.

We also count the number of offensive words (from our lexicon) contained in a sentence. This feature is expected to be important, since abusive language is often combined with profanity, though this kind of sampling is not without bias (Vidgen and Derczynski, 2020).

Experiments
Baseline Results [no RTC data]
The baseline model is a binary Linear Support Vector Classifier with the default L2 penalty and squared-hinge loss. An SVC was chosen because similar work for other languages suggests that it can be effective for this type of task (Frisiani et al., 2019). The overall F1-score is up to 0.75, depending on the seed and parameters. The F1-score on the RSP+RTC dataset is higher, up to 0.87, again depending on the seed and parameters (Figure 2).

Analysing the incorrectly classified samples, it turns out that the main difficulty for the model lies in longer texts, as well as texts containing swear words that cannot be converted to their initial form due to distortion through slang or word formation. An example of this is the following:

В чем проблема? Деградируй до неандертальца и х*ярь (heavily distorted slang) п*дарасов (misspelling)
(What is the problem? Degrade to a Neanderthal level and kick those f*ggots)

The following example is a stereotypical hate speech sentence: it is all upper-cased, it uses abusive words and it contains numerous insults. The baseline model recognizes it well:

КРЫМОТРЕД НАРУШАЕТ ПРАВИЛА РАЗДЕЛА Т.К В НЕМ НЕТ ОБСУЖДЕНИЯ ПОЛИТИКИ. СВОБОДНОЕ ОБЩЕНИЕ ЭТО В b. ЭТО ТОЖЕ САМОЕ ЕСЛИ Я НА ДОСКЕ О ПОЛИТИКЕ СОЗДАМ ТРЕД О ШЛ*ХАХ. ТАК ЧТО У*БЫВАЙТЕ В Б ИЛИ НВР СО СВОИМ ЧАТИКОМ ПРЕСТАРЕЛЫХ Г*МОСЕКОВ!
(CRIMEA THREAD VIOLATES THE RULES OF THE FORUM BECAUSE THE RULES DO NOT ALLOW POLITICS DISCUSSION. THIS IS THE SAME AS IF I STARTED A THREAD ABOUT SL*TS ON A POLITICS FORUM. SO GET THE F*CK OUT OF HERE AND GO TO <another forum> AND TAKE YOUR WHOLE OLD F*GGOTS PARTY WITH YOU!)

Skip stop-word exclusion
Although removing stop words from tokenized text is common practice, leaving them in might yield different results. This is the case here: the results are better on both datasets. The F1-score over the RTC+RSP dataset is 0.88 (Figure 3).

Without balancing the dataset
In this experiment, the datasets are not balanced; thus the hate/no-hate proportion is 1/2 in the combined RTC+RSP dataset and 1/10 in RSP. As can be seen in Figure 4, true positives decrease by a small amount and false negatives rise by a large margin, causing a decrease in overall model performance.
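The pieces above combine into a small end-to-end baseline. The sketch below appends the two count features to the TF-IDF matrix and trains a LinearSVC; it is a simplified reconstruction rather than the authors' released code, and the sentiment feature is omitted while a toy lexicon stands in for the 700-term dictionary.

```python
# Illustrative baseline: TF-IDF + hand-crafted counts -> LinearSVC (not the authors' exact code).
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

ABUSE_LEXICON = {"пример"}  # placeholder; the paper's lexicon has 700+ terms

def extra_features(texts):
    """Per-text counts: fully upper-cased words and lexicon hits."""
    rows = []
    for t in texts:
        words = t.split()
        upper = sum(1 for w in words if w.isupper() and len(w) > 1)
        abusive = sum(1 for w in words if w.lower().strip(".,!?") in ABUSE_LEXICON)
        rows.append([upper, abusive])
    return csr_matrix(np.array(rows, dtype=float))

def featurize(texts, vectorizer, fit=False):
    """Stack TF-IDF features with the hand-crafted count features."""
    tfidf = vectorizer.fit_transform(texts) if fit else vectorizer.transform(texts)
    return hstack([tfidf, extra_features(texts)])

train_texts, train_y = ["ПРИМЕР текста", "обычный текст"], [1, 0]
vec = TfidfVectorizer()
clf = LinearSVC()  # default L2 penalty, squared-hinge loss
clf.fit(featurize(train_texts, vec, fit=True), train_y)
pred = clf.predict(featurize(["ещё ПРИМЕР"], vec))
```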
Deep Learning
Neural-network-based approaches often show promising results on various NLP tasks. In fact, some of the best methods for hate-speech detection in English are BERT-, CNN- and GRU/LSTM-based techniques (Zampieri et al., 2020). We investigated these methods over RSP.

RuBERT
RuBERT (Burtsev et al., 2018) is the original Bidirectional Encoder Representations from Transformers (Devlin et al., 2019) model, but trained on Russian Wikipedia pages. The fine-tuning needed amounts to training the last, classifier layer of the network. The results are promising, reaching an F1-score of 0.85 on the whole training dataset (confusion matrix in Figure 5). The model is able to correctly recognize the following sample as hate speech:

Посмотрел Утомленных солнцем 2. И оказалось, что это хороший фильм, такая высокобюджетная артхаусятина, к которой могут быть претензии только потому, что сп*здили-распилили и вообще ТАК НЕ БЫВАЕТ. Ну н*хуй этих критиков. Обзоры длиннее фильмов, петросянство хуже рашкокомедий, еб*нутая ненависть и до*бки по мелочам.
(Watched Burnt by the Sun 2. Turns out it's a pretty good movie, a high-budget arthouse-ish film, the only possible complaint being that most of the budget was corruptly stolen and THE PLOT IS NOT REALISTIC. F*ck those critics. The review texts are longer than the movie itself, the jokes are worse than <humor in Russian-produced comedies>, f*cked-up hate and f*cking nagging about small errors.)

mBERT
mBERT is multilingual BERT (Devlin et al., 2019), again trained on Wikipedia pages of over a hundred languages, many with non-Latin alphabets. Russian is written in Cyrillic, so the model has potential in the Russian hate-speech recognition domain. The fine-tuning is the same as for RuBERT. The results (Figure 5) showed worse performance than RuBERT, up to an F1-score of 0.76. The reason for the lower performance probably lies in BERT's generalisation across multiple languages, as opposed to RuBERT, which is trained exclusively on Russian.

The following is an example of a sample which was incorrectly classified as no-hate by both BERT-based models, as well as by the baseline model:

Вонючий совковый скот прибежал и ноет. А вот и сторонник демократии и свободы слова закукарекал.
(The stinking soviet cattle came running and whining. And here is the supporter of democracy and freedom of speech starting to croak.)

The sentence does not contain any especially abusive vocabulary; rather, the words "stinking", "cattle" and "croak" are abusive in this context (in relation to people).

Analysis
For the largest dataset of Russian abusive language samples (RSP+RTC) and the LinearSVC model, the best-case F1 is 0.88. This is a good result for such a simple model compared to typical results in other languages (Zampieri et al., 2020). Our suggestion is that the reason for such a good score is the careful data preprocessing and, even more importantly, the feature selection.

RuBERT still struggles mainly with recognizing longer texts and texts with misspellings. Another barrier for this model in particular is when a text contains many named entities, because word representations might not contain entity surface forms (Augenstein et al., 2017), or individual entities may not be representative of the typical context of a given abusive language phenomenon. An example meeting the above-mentioned criteria is the following long sentence with many named entities (NEs) and misspellings:

Сторонники бандеровцев (NE) (леваков (NE), выступавших за бесклассовое (misspelling) общество и борьбу с капитализмом) и карлика-душителя котов Степана Бандеры (NE), который, как известно, боролся с расизмом, поддерживал Идель-Урал (NE) и называл побратимами исламских борцов за свободу из Азербайджана (NE), не пользуются симпатиями у правых европейцев.
(Supporters of the Banderites (leftists who advocated a classless society and the fight against capitalism) and of the cat-strangling dwarf Stepan Bandera, who, as is well known, fought racism, supported Idel-Ural and called Islamic freedom fighters from Azerbaijan his sworn brothers, are not popular with right-wing Europeans.)

The mistakes made by mBERT are roughly a superset of those made by RuBERT. This suggests that the information mBERT can gain from other languages is not particularly helpful for this task.
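For reference, a minimal version of the classifier-head fine-tuning described above is shown below, using the Hugging Face transformers API rather than the DeepPavlov tooling the paper cites. The checkpoint name, learning rate, epoch count and the choice to freeze everything except the classification head are assumptions made for illustration.

```python
# Minimal sketch of classifier-head fine-tuning (assumed setup, not the authors' code).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "DeepPavlov/rubert-base-cased"  # assumed checkpoint id
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

# Freeze the encoder; train only the classification head, as described above.
for p in model.base_model.parameters():
    p.requires_grad = False

texts, labels = ["обычный текст", "пример оскорбления"], torch.tensor([0, 1])
batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
opt = torch.optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=1e-3)

model.train()
for _ in range(3):  # a few passes over the toy batch
    out = model(**batch, labels=labels)
    out.loss.backward()
    opt.step()
    opt.zero_grad()
```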
Conclusion
This paper presented data, models and experiments for abusive language detection in Russian. By choosing the right preprocessing techniques and language-specific feature selection, it is possible to achieve state-of-the-art performance on par with the best-performing English-language models, even using a simple SVM model. This indicates that, given sufficient diversity of data, abusive language detection solutions can be rapidly developed for new languages. The code and data for this research are publicly available at: https://github.com/Sariellee/Russan-Hate-speech-Recognition

Figure 1: Dataset parts size and balance
Figure 2: Confusion matrices of the baseline model
Figure 3: Improved performance
Figure 4: Performance without giving balancing instance weights
Figure 5: Performance

Table 1: Word & token distribution across RSP
Table 2: Uppercase and profane word distribution across the dataset
Table 3: Ablations over data processing steps, with SVM classifier (F-scores)

[1] https://www.theverge.com/2020/5/12/21255870/facebook-content-moderator-settlement-scola-ptsd-mental-health
[2] https://github.com/fivethirtyeight/russian-troll-tweets
[3] I.e. as first language
[4] Common Knowledge Russian Tweets, http://study.mokoron.com/
[5] http://www.russki-mat.net/home.php

A Data Statement
This appendix describes metadata for RSP, following Bender and Friedman (2018).
A. Curation rationale: The texts were taken from the South Park TV series in order to gather a corpus relatively rich in various forms of abusive language.
B. Language variety: Scripted Russian, translated at a high standard from US English. BCP47: ru-RU.
C. Speaker demographic: The text is transcribed from the words of Russian actors, mostly male, portraying characters who are both adults and children. The child characters (age eight) make up most of the speech content. The scripts were originally written by two US males from Colorado, over a period in which they were aged twenty-something to forty-something.
D. Annotator demographic: Native Russian speakers, male, twenties, university students.
E. Speech situation: This is scripted TV speech; it is not known how much latitude the voice actors were afforded over wording.
F. Text characteristics: The content is deliberately somewhat foul-mouthed and very informal; political satire and social commentary are common themes.

References

Andrusyak B., Rimel M. and Kern R. 2018. Detection of abusive speech for mixed sociolects of Russian and Ukrainian Languages. In Proceedings of RASLAN, pages 77-84.
Augenstein I., Derczynski L. and Bontcheva K. 2017. Generalisation in named entity recognition: A quantitative analysis. Computer Speech & Language, 44:61-83.
Bender E. 2019. The #BenderRule: On Naming the Languages We Study and Why It Matters. The Gradient.
Bender E.M. and Friedman B. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587-604.
Bojanowski P., Grave E., Joulin A. and Mikolov T. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.
Burtsev M., Seliverstov A., Airapetyan R., Arkhipov M., Baymurzina D., Bushkov N., Gureenkova O., Khakhulin T., Kuratov Y., Kuznetsov D., et al. 2018. DeepPavlov: Open-source library for dialogue systems. In Proceedings of ACL 2018, System Demonstrations, pages 122-127.
Derczynski L., Maynard D., Rizzo G., van Erp M., Gorrell G., Troncy R., Petrak J. and Bontcheva K. 2015. Analysis of named entity recognition and linking for tweets. Information Processing & Management, 51(2):32-49.
Devlin J., Chang M.-W., Lee K. and Toutanova K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.
Frisiani N., Laignelet A. and Güler B. 2019. Combination of multiple deep learning architectures for offensive language detection in tweets. arXiv preprint arXiv:1903.08734.
Korobov M. 2015. Morphological analyzer and generator for Russian and Ukrainian languages. In Khachay M.Yu., Konstantinova N., Panchenko A., Ignatov D.I. and Labunets V.G., editors, Analysis of Images, Social Networks and Texts, volume 542 of Communications in Computer and Information Science, pages 320-332.
Potapova R. and Gordeev D. 2016. Detecting state of aggression in sentences using CNN. In International Conference on Speech and Computer, pages 240-245.
Rogers A., Romanov A., Rumshisky A., Volkova S., Gronas M. and Gribov A. 2018. RuSentiment: An enriched sentiment analysis dataset for social media in Russian. In Proceedings of the 27th International Conference on Computational Linguistics, pages 755-763.
Rubtsova Y.V. 2013. A method for development and analysis of short text corpus for the review classification task. In Proceedings of Conferences Digital Libraries: Advanced Methods and Technologies, Digital Collections, RCDL, pages 269-275.
Sabou M., Bontcheva K., Derczynski L. and Scharl A. 2014. Corpus annotation through crowdsourcing: Towards best practice guidelines. In Proceedings of LREC, pages 859-866.
Smetanin S. 2020. Toxic Comments Detection in Russian. In Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference "Dialogue 2020".
Tuckwood C. 2017. Hatebase: Online database of hate speech. The Sentinel Project. Available at: https://www.hatebase.org.
Vidgen B. and Derczynski L. 2020. Directions in abusive language training data, a systematic review: Garbage in, garbage out. PLoS ONE, 15(12):e0243300.
Waseem Z. 2016. Are you a racist or am I seeing things? Annotator influence on hate speech detection on Twitter. In Proceedings of the First Workshop on NLP and Computational Social Science, pages 138-142.
Waseem Z. and Hovy D. 2016. Hateful symbols or hateful people? Predictive features for hate speech detection on Twitter. In Proceedings of the NAACL Student Research Workshop, pages 88-93.
Zampieri M., Nakov P., Rosenthal S., Atanasova P., Karadzhov G., Mubarak H., Derczynski L., Pitenis Z. and Çöltekin Ç. 2020. SemEval-2020 Task 12: Multilingual Offensive Language Identification in Social Media (OffensEval 2020).
Zueva N., Kabirova M. and Kalaidin P. 2020. Reducing unintended identity bias in Russian hate speech detection. In Proceedings of the Fourth Workshop on Online Abuse and Harms, pages 65-69.
Cross Linguistic Variations in Discourse Relations among Indian Languages
This paper summarizes our work on the analysis of cross-linguistic variations in discourse relations for the Indo-Aryan language Hindi and the Dravidian languages Malayalam and Tamil. We also present an automatic discourse relation identifier, which gave encouraging results. Analysis of the results showed that complex structural inter-dependencies exist in these three languages. We describe these structural inter-dependencies in detail. Discourse relations in the three languages thus exhibit a complex nature due to the structural inter-dependencies.
AU-KBC Research Centre, MIT Campus of Anna University, Chennai, India

In: S. Bandyopadhyay, D. Sharma and R. Sangal (editors), Proceedings of the 14th Intl. Conference on Natural Language Processing, Kolkata, India, December 2017. NLP Association of India (NLPAI).

Introduction
Discourse relations link clauses in text and compose the overall text structure. Discourse relations are used in natural language processing (NLP), including text summarization and natural language generation. The analysis and modeling of discourse structure has been an important area of linguistic research, and it is necessary for building efficient NLP applications. Hence the automatic detection of discourse relations is also important.

The Indo-Aryan (Hindi) and Dravidian languages (Malayalam and Tamil) share certain similarities: they are verb-final, have free word order and are morphologically rich. Due to the influence of Sanskrit on these languages, they are similar at the lexical level, but structurally they are very different. In this work we present an analysis of the cross-linguistic variations in the discourse relations among the three languages Hindi, Malayalam and Tamil. Instead of identifying all possible discourse relations, we consider the analysis of explicit discourse relations and develop an automatic discourse relation identification system. During error analysis, various structural interdependencies were also noted.

Discourse tagging for the Indian languages Hindi, Malayalam and Tamil has been done by Sobha et al. (2014). Other published works on discourse relation annotation in Indian languages are in Hindi (Kolachina et al., 2012; Oza et al., 2009) and Tamil (Rachakonda and Sharma, 2011). Menaka et al. (2011) automatically identified causal relations and described the structural interdependencies that exist between the relations. Similarly, we observed the existence of structural interdependencies between the discourse relations in the three languages, which we explain in detail. From previous works on discourse relation annotation for various Indian languages, we can observe that the study of discourse relations has been carried out for individual Indian languages; hence we attempt to discuss the cross-linguistic variations among Hindi, Tamil and Malayalam.

Researchers have performed identification and extraction of discourse relations using cue-based or statistical methods. The Penn Discourse Tree Bank (PDTB) is a large-scale annotated corpus of linguistic phenomena in English (Prasad et al., 2008).
The PDTB is the first to follow the lexically grounded approach to the annotation of discourse relations. Marcu and Echihabi (2002) focused on the recognition of discourse relations using cue phrases, but not on the extraction of arguments. Wellner and Pustejovsky (2007) considered the problem of automatically identifying the arguments of discourse connectives in the PDTB. They recast the problem to that of identifying the argument heads, instead of identifying the full extents of the arguments as annotated in the PDTB. To address the problem of identifying the arguments of discourse connectives, they incorporated a variety of lexical and syntactic features in a discriminative log-linear re-ranking model to select the best argument pair from a set of N-best argument pairs provided by independent argument models. They obtained 74.2% accuracy for both arguments using a gold-standard parser and 64.6% accuracy using an automatic parser.

Elwell and Baldridge (2008) used models tuned to specific connectives and connective types. Their study showed that using models for specific connectives and types of connectives, and interpolating them with a general model, improves performance. The features used to improve performance include the morphological properties of connectives and their arguments, additional syntactic configuration, and the wider context of preceding and following connectives. The system was developed on the PDTB using a maximum entropy ranker, with models trained for arg1 and arg2 selection separately. They achieved 77.8% accuracy for identifying both arguments of a connective with a gold-standard parser and 73.6% accuracy with an automatic parser.

Ramesh and Yu (2010) developed a system for identification of discourse connectives in the biomedical domain. They developed the system on the BioDRB corpus using the CRFs algorithm. For PDTB data they obtained an F-score of 84%, and an F-score of 69% for BioDRB data. For a PDTB-based classifier applied to BioDRB data, they obtained an F-score of 55%. Versley (2010) presented work on tagging German discourse connectives using a German-English parallel corpus. AlSaif (2012) used machine learning algorithms for automatically identifying explicit discourse connectives and their relations in Arabic. Wang et al. (2012) used sub-trees as features and identified explicit and implicit connectives and their arguments. Zhou et al. (2012) presented the first effort towards cross-lingual identification of the ambiguities of discourse connectives. Faiz et al. (2013) identified explicit discourse connectives in the PDTB and the Biomedical Discourse Relation Bank (BDRB) by combining certain aspects of surface-level and syntactic feature sets.

In this study we develop a discourse parser for all three languages for the identification of connectives and their arguments. The following sections are organized as follows. Corpus collection and annotation is described in Section 2; cross-linguistic variations in discourse relations among the three languages are given in Section 3; the method used for the automatic identification of discourse relations and the results are described in Section 4; and the various structural interdependencies that occur in the three languages are described in Section 5. The paper ends with the conclusion.
Corpus Collection and Annotation
Health-related articles were chosen from the web and, after removing inconsistencies such as hyperlinks, a total corpus of 5000 sentences was obtained. We then annotated the corpus for connectives and their arguments. The discourse relation annotation is purely syntactic. The arguments are labeled arg1 and arg2, with arg2 chosen to follow arg1. When connectives occur as free words, we tag them separately, and the discourse units between which the relation is inferred are marked as arg1 and arg2. When connectives exist as bound morphemes, we keep them along with the word to which they are attached and include them under arg1. The annotated corpus contains 1332 explicit connectives in Hindi, 1853 in Malayalam and 1341 in Tamil. From the data statistics we can observe that Malayalam has more connectives than Tamil and Hindi. The annotated corpus is used to train the system, and models are built for the identification of connectives and arguments.

Cross Linguistic Variations in Discourse Relations
A discourse relation in Indian languages can be expressed in many ways. It can be syntactic (a suffix) or lexical, and it can hold within a clause, inter-clausally or inter-sententially. The various cross-linguistic variations in discourse relations among the three languages are analyzed and described below.

Discourse Connectives
Discourse relations can be inferred using explicit or implicit connectives. Explicit connectives connect two discourse units and trigger a discourse relation. Explicit connectives can be realized in any of the following ways. Subordinators connect the main clause with a subordinate or dependent clause (for example: "agar-to", "jabkI" in Hindi; "appoL", "-aal" in Malayalam; "-aal", "ataal" in Tamil). Coordinators connect two or more items of equal syntactic importance.

Position of Connectives
In our approach we have done syntax-based tagging. In Hindi, Malayalam and Tamil, discourse connectives can occur within a sentence or between sentences. In all three languages, inter-sentential connectives occupy the sentence-initial position. Example 1 shows an inter-sentential discourse relation in Malayalam.

Example 1:
[chila aaLukaL mukhsoundaryam koottaan kreemukaL upayogikkaaruNt.]/arg1 ennaal [athu guNathekkaaLeRe doshamaaN cheyyuka.]/arg2
Gloss: some people facial-beauty increase creams use | but that goodness-more-than harm-is do
(Some people use creams to increase their facial beauty. But that will do more harm than good.)

We found that there is a difference in the position of the conjunct adverb "although" among the three languages. As in Example 2, in Hindi this connective occurs in the sentence-initial position, whereas in Tamil and Malayalam it occurs in the middle position and remains agglutinated with the verb.

Agglutinated and Intra-sentential Connectives
In Malayalam and Tamil, connectives can occur as free words or bound morphemes.
But in Hindi only free-word connectives exist, as in Example 2. (Note: the coordinator class above connects two independent clauses, for example "aur", "lekin" in Hindi; "um", "ennaal" in Malayalam; "anaal", "athanaal" in Tamil. Conjunct adverbs connect two independent clauses and modify the clauses or sentences in which they occur, for example "isliye", "halaanki" in Hindi; "athinaal", "aakayaal" in Malayalam; "enninum", "aakaiyaal" in Tamil. Correlative conjunctions are paired conjunctions that link words or groups of words of equal weight in a sentence, for example "na keval balki" in Hindi, "maathramalla-pakshe" in Malayalam and "mattumalla-aanaal" in Tamil.)

Example 4:
[vayiRRil kutalpun irunthaal]/arg1 [vayiRu valikkum]/arg2.
Gloss: in-stomach ulcer is-there-if | stomach will-pain
(If there is an ulcer in the stomach, the stomach will pain.)

Paired Connectives
In Hindi, some discourse connectives occur as paired connectives. This type of connective is not found in Malayalam and Tamil. In Example 5, "yadhii-to" is the paired connective that occurs at the start of arg1 and arg2, whereas in Tamil and Malayalam a single connective occurs, as in Example 4, agglutinated with the verb.

Arguments of Relations
In our approach, label assignment is syntactic. Sometimes the arguments are in the same sentence as the connective; sometimes one of the preceding sentences acts as an argument; the argument can also be a non-adjacent sentence. In all cases the text span follows the minimality principle. In Example 1, the connective "ennaal" in Malayalam connects two discourse units inter-sententially: the discourse unit that follows the connective is arg2 and the preceding unit is arg1. In Example 4, the arguments of the connective "-aal" in Tamil occur in the same sentence.

Automatic Identification of Discourse Relations
4.1 Method Used
We used the method adopted by Menaka et al. (2011) for the identification of discourse relations. We preprocess the text with morphological analysis (Ram et al., 2010), part-of-speech (PoS) tagging (Sobha et al., 2016), chunking (Sobha and Ram, 2006) and clause tagging (Ram et al., 2012). The implementation is based on the machine learning technique of Conditional Random Fields (CRFs).

Conditional Random Fields
CRFs are an undirected graphical model where the conditional probabilities of the output are maximized for a given input sequence. We chose CRFs because they allow linguistic rules or conditions to be incorporated into the machine learning algorithm. Here, we used CRF++ (Kudo, 2005), an open-source toolkit for linear-chain CRFs.

Features Used
For the identification of connectives, we used PoS tags, morphological suffixes and clause information as features for Malayalam and Tamil. Morphological suffixes such as conditional markers, causal markers, relative participle (RP) markers followed by postpositions (PSP), and coordination markers were used. For connective identification in Hindi, the word, PoS tags and chunk information were used. For argument identification we took PoS tags, chunk information, morphological suffixes, clause information, combinations of PoS and chunk information, and the connectives themselves as features.

Training and Testing
For identifying the discourse connectives, we trained the system using the features for connectives. In the next stage we train the system to identify the arguments and their text spans. Here we built four language models, one for each of the four boundaries (Arg2-START, Arg1-END, Arg1-START and Arg2-END), motivated by the work of Menaka et al. (2011). The system was trained in four phases to develop the four models. We used 4000 sentences from the corpus for training and 1000 sentences for testing. For testing, the sentences are preprocessed in the same way as the training data. The system identifies the discourse markers in stage 1, and this output becomes the input to stage 2. In both stages we used CRFs as the machine learning algorithm. (A small sketch of this stage-wise setup is given below.)
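To make the two-stage setup concrete, the sketch below shows how a stage-1 connective tagger of this kind can be assembled. The paper uses the CRF++ toolkit; this sketch instead uses the sklearn-crfsuite wrapper so that token-level features of the kind named above (word form, PoS, chunk and a crude suffix feature) can be written directly as Python dictionaries. The feature names, window size and toy data are illustrative assumptions, not the authors' exact templates.

```python
# Illustrative stage-1 connective tagger with sklearn-crfsuite (the paper itself uses CRF++).
import sklearn_crfsuite

def token_features(sent, i):
    """Features for token i: word form, PoS, chunk, suffix; plus a +/-1 PoS window."""
    word, pos, chunk = sent[i]
    feats = {
        "word": word.lower(),
        "pos": pos,
        "chunk": chunk,
        "suffix3": word[-3:],  # crude stand-in for morphological suffix features
    }
    if i > 0:
        feats["prev_pos"] = sent[i - 1][1]
    if i < len(sent) - 1:
        feats["next_pos"] = sent[i + 1][1]
    return feats

# Toy training data: one sentence as (word, PoS, chunk) triples; B-CONN marks a connective
# (here the conditional -aal form, as in Example 4).
X = [[token_features(s, i) for i in range(len(s))]
     for s in [[("vayiRRil", "N", "NP"), ("irunthaal", "V", "VP"), ("valikkum", "V", "VP")]]]
y = [["O", "B-CONN", "O"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
print(crf.predict(X))
```

Stage 2 would train four such models over the connective-tagged output, one for each argument boundary.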
The performance of our system is measured in terms of Precision, Recall and F-score. Precision is the number of discourse relations correctly identified by the system divided by the total number of discourse relations the system identified; Recall is the number of discourse relations correctly identified by the system divided by the total number of discourse relations contained in the input text; and F-score is the harmonic mean of Precision and Recall. The results for connective identification are tabulated in Table 1, and the argument identification results are given in Tables 2, 3, 4 and 5.

During error analysis it was noted that a good number of errors are due to structural interdependencies between discourse relations. When such structures occur, there is considerable overlap in the arguments of two discourse relations, leading to improper identification of boundaries by the system. These are discussed in detail in the next section.

Structural Interdependencies between Discourse Relations
Some very distinctive patterns of interdependency were seen between discourse relations in Hindi, Malayalam and Tamil, mainly due to the free word order of these languages. Such patterns are given below.

Embedding within itself
Due to the free word order nature of Indian languages, this type of structure arises. Consider the Malayalam Example 6 given below: here arg1 and the marker are embedded inside arg2.

Between Two Discourse Relations - Containment
One of the most frequently occurring structural dependencies is the embedding or containment of a whole discourse relation within one of the arguments of another discourse relation. Example 7 shows that the arguments of the connective "agar-to" are contained within the arg2 of the connective "aur".

Between Two Discourse Relations - Complete Overlap/Shared Argument
An argument may be shared by two discourse relations in different ways. In Example 8, the arg2 of the first discourse relation is the shared argument of the second discourse relation.

Completely Independent Relations

Example 9:
[poshakaaharam nalki kuttiye paripaalichu.]/arg1_i engilum [kuttiyute arogyathil purogathiyilla.]/arg2_i [atuthaghathathil guLikakaL nalki.]/arg1_j engilum [kuttiyte arogyam athe nilayil thutarnnu.]/arg2_j
Gloss: nourishing-food gave child fostered | but child's health-in no-progress | next-stage-in vitamin-tablets gave | but child's health same condition-in continued
(Nourishing food was given to the child. But the child's health made no progress. In the next stage vitamin tablets were given. But the child's condition remained the same.)

In Example 9 there are two adjacent discourse relations which are independent of each other.

Conclusion
We have presented our work on discourse relation identification for Hindi, Malayalam and Tamil. An analysis of the discourse relations among the three languages was performed, and an automatic identification system for discourse relations was developed. By analyzing the results,
Example 3: [muuttukaLiluLLa kuRuththelumpu vaLaraamal in knee cartilage without theymaanam atainthaalum]/arg1, growing wear if get-and [angkuLLa vazhuvazhuppaana thiravam there smooth fluid kuRainthupoonaalum]/arg2 muuttukaLil uraayvu get less-and knee friction eRpatum. will develop ( .The argument identification results are given inTable 2, Table 3, Table 4 and Table 5.Precision Recall F- score Hindi 96.33 92.3 94.27 Malayalam 96.3 91.6 93.89 Tamil 95.35 94.18 94.76 Table 1: Results for Connective Identification Precision Recall F- score Hindi 76 72.2 74.05 Malayalam 78.5 72 75.1 Tamil 81.53 73.6 77.36 Table 2 : 2Results for ARG1 StartPrecision Recall F- score Hindi 75.9 72.2 74 Malayalam 78.8 72 75.23 Tamil 82 72.6 77 Table 3 : 3Results for ARG1 EndPrecision Recall F- score Hindi 77.4 73.2 75.24 Malayalam 79.2 73 75.97 Tamil 81.5 72.6 76.79 Table 4 : 4Results for ARG2 StartPrecision Recall F- score Hindi 76.3 71.2 73.66 Malayalam 78.7 72.4 75.42 Tamil 82 72.7 77 Table 5 : 5Results for ARG2 End This contract didn't happen, hence many plans failed.)Example 6: [pala padhathikaLum [ee karaaR many plans this contract sambhavikkaathathinaal]/arg1 natakkaathe not-happen-hence failed poyi.]/arg2 ( /arg1 j tho [[ve bhahuth hii is then they very halke hothe haiN]/arg2 i ]/arg2 j light is (Approximately 25 to 50 percent of rubella infection is not known and if its symptoms develop then they are very light.)Example 7: [lagbhag 25 se 50 prathishath roobelaa approximately 25 from 50 percent rubella saMkramaN kaa pathaa nahiM cal paathaa]/arg1 i infection know not get aur [agar[ iske lakshaN paidhaa hothe and if its symptoms develop haiM] Because of using vehicles in modern life style walking is reduced. Because of this, the unwanted calories accumulated in the body is not burnt.)Example 8: naviina vaazhkkai muRaiyil vaakanagkalaip modern life style vehicles payanpatutthuvathaal]/arg1 i [[nataippayiRci use-because walking enpathu kuRainthuvittathu]/arg2 i ]/arg1 j . is reduced ithanaal [utalil cerum Because of this in body accumulate thevaiyaRRa kalorikaL cariyaaka unwanted calories correctly erikkappatuvathillai]/arg2 j . not burnt ( Hu man and auto matic annotation of discourse relations for Arabic. John L Alsaif, University of LeedsPh.D. thesisJohn L AlSaif. 2012. Hu man and auto matic annota- tion of discourse relations for Arabic, Ph.D. thesis, University of Leeds. Automatically Identifying the Argu ments of Discourse Connectives. Ben Wellner, James Pustejovsky, Proceedings of EMNLP-CoNLL. EMNLP-CoNLLPragueBen Wellner and James Pustejovsky. 2007. Automati- cally Identifying the Argu ments of Discourse Connec- tives, Proceedings of EMNLP-CoNLL, Prague, 92- 101. Identifying discourse connectives in bio med ical text. P Balaji, Hong Ramesh, Yu, Proceedings of AMIA Annual Symposium. AMIA Annual SymposiumWashington, DCBalaji P. Ramesh and Hong Yu. 2010. Identifying discourse connectives in bio med ical text, Proceedings of AMIA Annual Symposium, Washington, DC 657- 661. An unsupervised approach to recognizing discourse relations. Daniel Marcu, Abdessamad Ech, Proceedings of 40th Annual Meeting on Association for Computational Linguistics. 40th Annual Meeting on Association for Computational LinguisticsDaniel Marcu and Abdessamad Ech ihabi. 2012. An unsupervised approach to recognizing discourse rela- tions, Proceedings of 40th Annual Meeting on Associ- ation for Computational Linguistics, 368-375. Discourse connective argu ment identification with connective specific rankers. 
16,207,472
Quality Estimation for Synthetic Parallel Data Generation
This paper presents a novel approach for parallel data generation using machine translation and quality estimation. Our study focuses on pivot-based machine translation from English to Croatian through Slovene. We generate an English-Croatian version of the Europarl parallel corpus based on the English-Slovene Europarl corpus and the Apertium rule-based translation system for Slovene-Croatian. These experiments are to be considered as a first step towards the generation of reliable synthetic parallel data for under-resourced languages. We first collect small amounts of aligned parallel data for the Slovene-Croatian language pair in order to build a quality estimation system for sentence-level Translation Edit Rate (TER) estimation. We then infer TER scores on automatically translated Slovene to Croatian sentences and use the best translations to build an English-Croatian statistical MT system. We show significant improvement in terms of automatic metrics obtained on two test sets using our approach compared to a random selection of synthetic parallel data.
[ 773282, 6284099, 5178123, 13282731, 15119437, 38407095, 3708537, 2765530, 6470935 ]
Quality Estimation for Synthetic Parallel Data Generation
Raphael Rubino and Antonio Toral (atoral@computing.dcu.ie), CNGL - School of Computing, Dublin City University, Ireland; Nikola Ljubešić (nljubesi@ffzg.hr), Department of Information and Communication Sciences, University of Zagreb, Croatia; Gema Ramírez-Sánchez, Prompsit Language Engineering, Elche, Spain
Keywords: Under-resourced Languages, Synthetic Corpora, Machine Translation, Quality Estimation

This paper presents a novel approach for parallel data generation using machine translation and quality estimation. Our study focuses on pivot-based machine translation from English to Croatian through Slovene. We generate an English-Croatian version of the Europarl parallel corpus based on the English-Slovene Europarl corpus and the Apertium rule-based translation system for Slovene-Croatian. These experiments are to be considered as a first step towards the generation of reliable synthetic parallel data for under-resourced languages. We first collect small amounts of aligned parallel data for the Slovene-Croatian language pair in order to build a quality estimation system for sentence-level Translation Edit Rate (TER) estimation. We then infer TER scores on automatically translated Slovene to Croatian sentences and use the best translations to build an English-Croatian statistical MT system. We show significant improvement in terms of automatic metrics obtained on two test sets using our approach compared to a random selection of synthetic parallel data.

Introduction
Previous work on synthetic parallel data generation relies on the use of machine translation (MT) to translate source text into the target language for a given language pair, in order to obtain a new parallel corpus. This resource can then be used as training material for SMT, or for any other application that requires parallel data. However, one important limitation of this artificial resource is its translation quality. As translation quality is directly related to the performance of data-driven systems, the need to estimate the translation quality of synthetically built corpora is obvious. This paper applies quality estimation (QE) techniques to the generation of synthetic parallel data. Our case study is the English-Croatian language pair with Slovene as the pivot language. We first train a Slovene-Croatian QE system by collecting limited amounts of parallel data for these languages from diverse sources. The source side of this data is translated using the Apertium rule-based MT (RBMT) system (Forcada et al., 2011), and the translated text is compared to its reference (the target side of the corpus) at the sentence level using TER (Snover et al., 2006). With these scores as labels, a regression model is built on feature vectors representing the sentence pairs (source, translation). Using the regression model, TER scores are inferred on Slovene sentences taken from the English-Slovene Europarl parallel corpus (Koehn, 2005) and automatically translated into Croatian. The best translations are used to build an English-Croatian statistical MT (SMT) system. After giving an overview of previous work in the areas of pivot-based MT and QE in Section 2., we describe how we build and evaluate a QE model for Slovene to Croatian in Section 3. We then present the SMT setup for translating from English to Croatian and the results obtained using synthetic data in Section 4. Finally, we conclude and give details about future work in Section 5.
Previous Work

Synthetic Data for Pivot-based MT
Pivot-based MT refers to the use of an intermediate language, called the pivot language (PL), to translate from the source language (SL) to the target language (TL). Unlike typical MT systems, which translate directly from SL to TL, pivot-based systems translate sequentially from SL to PL and then from PL to TL. The main motivation for building pivot-based MT systems is the lack of language resources for a language pair SL-TL, in contrast with the availability of such resources for both language pairs SL-PL and PL-TL. This is our case, as our aim is to translate from English to Croatian, but to do so we use Slovene as a pivot. Our bilingual resources are for the English-Slovene language pair (the Europarl parallel corpus) and for Slovene-Croatian (an RBMT system). Pivot-based strategies in MT can be classified into three categories (Wu and Wang, 2009): phrase table multiplication (also known as triangulation), transfer (also referred to as cascade), and synthetic corpus. The synthetic corpus approach (Gispert and Mariño, 2006; Bertoldi et al., 2008; Utiyama et al., 2008) is the one we build upon. In this method an SL-TL corpus is obtained using the SL-PL or the PL-TL corpora. One way to do this is to translate the PL sentences in the SL-PL corpus into TL with the PL-TL system. Another possibility is to translate the PL sentences in the PL-TL corpus into SL with the SL-PL system. Obviously, both methods could be applied and the two resulting synthetic corpora merged into a single SL-TL corpus. In this paper we extend the synthetic corpus approach to pivot-based MT by filtering the resulting synthetic corpus with QE.
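The first of these two strategies, which is the one used in the remainder of this paper, amounts to a one-line transformation of the SL-PL corpus; a minimal sketch, where translate_pl_to_tl stands in for the PL-TL system (Apertium in our case):

```python
# Minimal sketch of the synthetic corpus approach: the pivot side of an
# SL-PL parallel corpus is machine-translated into the target language,
# yielding a synthetic SL-TL corpus. `translate_pl_to_tl` is a placeholder
# for the PL-TL MT system.
def build_synthetic_corpus(sl_pl_corpus, translate_pl_to_tl):
    # sl_pl_corpus: iterable of (sl_sentence, pl_sentence) pairs
    return [(sl, translate_pl_to_tl(pl)) for sl, pl in sl_pl_corpus]
```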
Quality Estimation and Applications
Estimating the quality of MT output is the ability to judge the correctness of a translation without any reference translation. Since the first work conducted on QE for MT at the word and sentence levels (Ueffing et al., 2003; Blatz et al., 2003), this task has grown in interest and performance over the past few years. Recent evaluation campaigns have helped define QE baselines and state-of-the-art systems, based on supervised learning over vectorial representations of source sentences and their translations associated with quality scores or labels (Callison-Burch et al., 2012; Bojar et al., 2013). The usefulness of feature types is directly related to the QE task itself, i.e. it varies according to the quality scores or labels to estimate (Shah et al., 2013). However, it has been shown that the 17 baseline features suggested by the WMT12 QE shared task organisers perform well on several QE tasks (Callison-Burch et al., 2012; Rubino et al., 2013). In the work presented in this paper, the QE baseline is inspired by this tried-and-tested feature set. The type of machine learning algorithm used to train QE models is also well studied in the literature. For instance, Quirk (2004) reports good results using linear regression, while partial least squares and decision trees show the best performance in the studies conducted by Specia et al. (2009) and Soricut et al. (2012) respectively. Amongst all the studies on QE for MT published so far, support vector machines (SVMs) appear to be the most popular machine learning approach, which is why we settled on SVMs in the work presented in this paper. The current performance of QE for MT allows researchers to integrate this technique into the MT pipeline, for instance as a way to rank or combine the outputs of several MT systems (Sánchez Martínez, 2011; Okita et al., 2012; Avramidis, 2013) or to improve SMT performance in specific domains (Banerjee et al., 2013).

Quality Estimation for Slovene-Croatian
The QE setup designed for our experiments on synthetic parallel data generation is presented in this section. We first introduce the data and tools required to build and evaluate the QE models in Subsection 3.1., followed by the feature sets extracted from the text data in Subsection 3.2. Finally, the QE model evaluation results are detailed in Subsection 3.3.

Dataset and Tools
In order to build and evaluate QE models for the Slovene-Croatian language pair, we collect three parallel corpora for these languages:
• the EAC Translation Memory (noted EAC) 1, containing 573 translation units,
• the EU Bookshop parallel corpus (noted EUb) 2, containing 4,222 sentence pairs,
• a small Slovene-Croatian parallel corpus obtained from a translation agency 3 (noted slhr), containing 2,286 sentence pairs.
We first consider these corpora individually to build and evaluate three QE models, before concatenating the data (noted all) into one corpus and building our final QE model. In this way, four QE models are trained and evaluated on four test sets. We present the four corpora used in our QE experiments in Table 1. For each of the parallel corpora, the source sentences are translated from Slovene to Croatian using the Apertium RBMT system for this language direction. 4 Source sentences, their translations and the references are then tokenised and lowercased using the tools provided with the Moses MT system (Koehn et al., 2007). The TERCOM tool 5 provides us with sentence-level TER scores, which are used as labels to train and evaluate our QE models. Finally, the sentence triplets are randomised and the corpus is split into two parts: a training set and a test set.
Based on the source sentences, their translations and the corresponding sentence-level TER scores, we train regression models that aim to predict sentence-level TER scores on unseen data. However, using words directly, or n-grams, as features for QE usually leads to large and sparse vectors, which complicates the supervised learning step. In order to generalise well and avoid overfitting the training data, we extract a tried-and-tested set of features, described in Section 3.2., using an in-house feature extraction tool-kit. We consider this first set of 15 features as our baseline, and then extend it in order to improve QE performance and measure the impact on synthetic parallel data selection. Regression models are trained using the ε-SVR implementation available in the LibSVM toolkit (Chang and Lin, 2011). The SVM parameters, namely c, γ and ε, are optimised with a 5-fold cross-validation approach on the training set. The best parameter triplet is chosen according to several metrics: Mean Average Error (MAE), Root Mean Square Error (RMSE), Pearson's correlation coefficient (r) and the total number of support vectors. In our experiments, minimising the MAE and RMSE is not as crucial as maximising Pearson's correlation coefficient, as the aim of our work is to predict TER scores that follow a distribution similar to that of the reference.
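The model selection just described can be sketched as follows, using scikit-learn's SVR (which wraps LibSVM) in place of direct LibSVM calls; the grid values and the feature matrix X / label vector y are illustrative placeholders:

```python
# Minimal sketch of the epsilon-SVR training step with 5-fold cross-validated
# grid search over the (C, gamma, epsilon) triplet, selected by Pearson's r.
from scipy.stats import pearsonr
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

def pearson_r(y_true, y_pred):
    # Pearson's correlation coefficient, the model-selection criterion
    return pearsonr(y_true, y_pred)[0]

def train_qe_model(X, y):
    # grid values are illustrative, not the ones used in the experiments
    grid = {
        "C": [0.1, 1, 10, 100],
        "gamma": [1e-3, 1e-2, 1e-1],
        "epsilon": [0.01, 0.1, 0.2],
    }
    search = GridSearchCV(
        SVR(kernel="rbf"),
        grid,
        scoring=make_scorer(pearson_r),
        cv=5,
    )
    search.fit(X, y)
    return search.best_estimator_
```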
Quality Estimation Features
The features extracted from each sentence pair, i.e. the source sentence and its translation, are inspired by the baseline feature set suggested by the WMT12 QE shared task organisers (Callison-Burch et al., 2012). The full baseline set initially contains 17 features, 2 of which depend on the MT system. As Moses was the MT system used by the shared task organisers, and these 2 system-dependent features are extracted from GIZA word-alignment tables, we exclude them from our feature set in order to keep a baseline as independent as possible from the MT system used. The 15 baseline features are described below:
- 6 Surface Features: source and target segment lengths, numbers of punctuation marks, average source word length and average target word occurrence.
- 2 Language Model Features: 3-gram log-probabilities of the source and target segments according to Kneser-Ney-discounted LMs built with the SRILM toolkit (Stolcke et al., 2011) on the slWaC 6 and hrWaC 7 monolingual corpora (Ljubešić and Erjavec, 2011) for the source and target LMs respectively.
- 7 n-gram Frequency Features: the number of source segment unigrams seen in a reference corpus (slWaC), plus 6 features based on the quartiles of the most and least frequent source n-grams (n ∈ [1; 3]). The reference corpus is the one used to build the LM features.
In order to improve QE performance and measure its impact on synthetic-data-based SMT, we extend the baseline feature set to 189 features, including the baseline ones. This extended set contains:
- 36 Surface Features: ratio of uppercased to lowercased letters, untokenised items, special characters, and ratios of source to target features.
- 90 Language Model Features: source and target 1- to 5-gram perplexities and log-probabilities according to LMs and backward LMs (based on Raybaud et al. (2011)), as well as ratios of source to target features.
- 63 n-gram Frequency Features: source and target unigrams seen in a reference corpus (slWaC and hrWaC respectively), plus 1- to 5-gram frequencies in each of the frequency quartiles, as well as ratios of source to target features.
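To make the feature types concrete, the sketch below computes a handful of the baseline surface features; tokenisation here is naive whitespace splitting, whereas the experiments rely on the Moses tokeniser:

```python
# Minimal sketch of a few of the baseline surface features described above;
# the LM and n-gram frequency features are omitted for brevity.
from collections import Counter

PUNCT = set(".,;:!?'\"()-")

def surface_features(src, tgt):
    src_toks, tgt_toks = src.split(), tgt.split()
    tgt_counts = Counter(tgt_toks)
    return {
        "src_len": len(src_toks),
        "tgt_len": len(tgt_toks),
        "src_punct": sum(tok in PUNCT for tok in src_toks),
        "tgt_punct": sum(tok in PUNCT for tok in tgt_toks),
        # average source word length, in characters
        "src_avg_word_len": sum(map(len, src_toks)) / max(len(src_toks), 1),
        # average number of occurrences per distinct target word
        "tgt_avg_word_occ": len(tgt_toks) / max(len(tgt_counts), 1),
    }
```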
Quality Estimation Evaluation
To evaluate the regression models, we infer TER scores at the sentence level for each pair of the test set. The evaluation metrics are MAE, RMSE and Pearson's r, but only the correlation coefficient is presented in this paper (Table 2). While MAE and RMSE are error measures (the lower the better) and thus indicate how far on average the predicted scores are from the reference ones, Pearson's r is a correlation measure (the higher the better) and allows us to see whether the prediction follows a distribution similar to that of the reference. This latter score is the most interesting for us, and we select the best QE models based on this measure. We build a regression model using each of the training corpora and evaluate them on the different test sets. This evaluation method aims to indicate which training corpus performs best on its corresponding test set, but also which corpus leads to a more generalised QE model.

Table 2: Pearson's r obtained on the three corpora and the data concatenation using the baseline and the extended feature sets. The highest correlation between prediction and reference amongst training corpora for a given test set is marked in bold in the original.
              Baseline Feature Set                    Extended Feature Set
Train/Test    EAC      EUb      slhr     all          EAC     EUb     slhr    all
EAC           0.2779  -0.0659   0.0227  -0.0081       0.4361  0.0881  0.0514  0.1844
EUb          -0.0949   0.2333   0.0801   0.1847       0.2595  0.2373  0.2497  0.2235
slhr          0.0160  -0.0790   0.4021  -0.0459       0.3053  0.1941  0.6646  0.2297
all           0.0198   0.1210   0.2221   0.2024       0.3964  0.1280  0.5237  0.3127

Pearson's r results show that each training set performs best on its corresponding test set, while the slhr corpus leads to the best r score overall, on its corresponding test set with the QE model trained on the extended feature set. The data concatenation (noted all) yields a higher correlation score on the mixed test set and thus indicates better generalisation over the training data. This motivates our choice of this QE model for filtering translated monolingual data and generating a synthetic parallel corpus. When comparing the baseline and the extended feature sets, we observe fluctuating improvements in Pearson's correlation coefficient depending on the training and testing corpora. Five data configurations lead to negative correlations when using the baseline features, while this is not the case with the extended set. Using the EUb corpus for training and testing the QE model, extending the feature set does not lead to a significant improvement (at p ≤ 0.01, using the bootstrap resampling method). For the other corpora, the extended feature set improves over the baseline set when the train and test sets are taken from the same corpus. Figure 1 shows the distributions of TER scores for the reference, the baseline and the extended QE setups, with the concatenated training and testing datasets (noted all). Better predictions are made by the extended QE model when the TER reference scores are low, while the baseline QE model tends to predict scores around the reference average. We keep two QE models for the rest of our experiments, one using the baseline feature set and one using the extended set, both trained on the concatenated corpora.

Synthetic-data-based SMT
For the remaining experiments presented in this paper, the QE models are used individually to estimate sentence-level TER scores in order to filter translations produced by an RBMT system. The translations are then ranked according to their estimated TER scores, and subsets of this corpus are extracted to train SMT systems. These translation systems are finally evaluated with three of the most popular automatic metrics on two test sets. Subsection 4.1. presents the dataset used to train and evaluate the SMT systems, followed by the evaluation results in Subsection 4.2.

Dataset
The synthetic parallel corpus is generated by translating the target side of the Europarl English-Slovene parallel corpus into Croatian using the Apertium RBMT system. The resulting English-Croatian parallel corpus is used to train a phrase-based SMT system with the Moses tool-kit. We do not run any tuning algorithm on the different SMT systems built, and thus do not need a development set, in order to strictly evaluate the effect of QE-based synthetic data generation. To evaluate the SMT systems, we use two different test sets: Newstest2013 (a subset of the WMT'13 test set manually translated into Croatian) and SETimes 8. Details about the training and testing datasets are presented in Table 3.

SMT Systems Evaluation
Based on the QE models presented in Section 3., we infer sentence-level TER scores for each translated sentence of the parallel training data presented in Table 3. Translations are then ranked from the lowest to the highest TER score and we extract four subsets of this corpus, keeping 10, 20, 40 and 80% of the overall number of words in the parallel corpus. To compare our approach to a baseline, we randomly select subsets of the translated corpus with similar amounts of words. We repeat the random-based experiments three times and average the obtained results. The SMT systems are then evaluated on the translated test sets, scored with BLEU (Papineni et al., 2002) version 13a, TER (using TERCOM) and METEOR (Lavie and Denkowski, 2009).
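A minimal sketch of this ranking-and-extraction step; the data layout, and counting the word budget on the English side, are assumptions made for illustration:

```python
# Minimal sketch of the QE-based subset extraction: synthetic sentence pairs
# are sorted by predicted TER (lower is better) and kept until a word budget
# is reached. `pairs` holds (english, croatian, predicted_ter) triples.
def select_subset(pairs, fraction):
    total_words = sum(len(en.split()) for en, _, _ in pairs)
    budget = fraction * total_words
    selected, used = [], 0
    for en, hr, _ in sorted(pairs, key=lambda p: p[2]):
        selected.append((en, hr))
        used += len(en.split())
        if used >= budget:
            break
    return selected

# The four training subsets used in the experiments:
# subsets = {f: select_subset(pairs, f) for f in (0.1, 0.2, 0.4, 0.8)}
```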
The BLEU scores are presented in Figure 2 and show that the extended QE model leads to the highest scores for 10 and 20% of the training data. For the SETimes test set, the extended QE model also leads to the highest score with 40% of the training data, while the baseline QE model is better for this subset size on the Newstest2013 test set. This particular result can be explained by the fact that only a few subset sizes are evaluated, and the maximum BLEU score obtainable by the extended QE model may be higher than the one obtained by the baseline QE model. Overall, the two QE setups show better results than the random setup for smaller amounts of training data. These results are explained by the ability of the QE-based approach to select the best translations provided by the RBMT system first, compared to a random selection of translations. The TER scores are presented in Figure 3 and are consistent with the BLEU scores described previously. With 10 and 20% of the training data, the extended QE model leads to lower TER scores than the baseline QE model and the random approach. For Newstest2013, the lowest TER score is obtained by the extended QE model with 40% of the training data, while 80% of the training data is necessary to obtain the lowest TER score on SETimes with the QE models. For this latter test set, the TER results are similar to the BLEU ones, where 80% of the training data appears to lead to the best score, once again explainable by the limited number of evaluated subset sizes. The METEOR scores are presented in Figure 4. For the Newstest2013 test set, the best METEOR score is obtained by the extended QE model using 20% of the training data. Increasing the training data subset size does not improve this result, which indicates that no useful parallel data is found beyond 20% of the training data. For the SETimes test set, the best METEOR score is obtained by the extended QE model using 10% of the training data. With 20 and 40% of the training data, the extended QE model still leads to the highest METEOR score compared to the baseline QE model and the random approach, while the baseline QE model is better than the two other systems at 80% of the training data, similarly to the results obtained on the Newstest2013 test set. As shown by this evaluation with three automatic metrics, the QE-based approach leads to better results with smaller amounts of training data than the random selection of synthetic parallel instances. In order to validate these results, we perform statistical significance tests on BLEU between the random and the QE-based systems, using the paired bootstrap resampling method suggested by Koehn (2004). We use the toolkit provided by CMU 9, which is based on the script mteval-v13a released by NIST 10. We compare the extended QE-based approach with the three random systems individually (which were averaged previously to compute the automatic metrics), considering two significance levels (p-values): 0.05 and 0.01.

Table 4: Significance levels when comparing BLEU scores obtained by the extended QE-based system and the random systems. The p-values are calculated when the QE-based system reaches higher BLEU scores than the random systems.
Subset   Random1    Random2    Random3
Newstest2013
10%      p ≤ 0.01   p ≤ 0.01   p ≤ 0.01
20%      p ≤ 0.05   p ≤ 0.01   p ≤ 0.01
40%      -          p ≤ 0.05   -
80%      -          p ≤ 0.01   -
SETimes
10%      p ≤ 0.01   p ≤ 0.01   p ≤ 0.01
20%      p ≤ 0.01   p ≤ 0.01   p ≤ 0.01
40%      -          -          p ≤ 0.01
80%      p ≤ 0.01   -          -
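The resampling test itself is easy to reproduce; the sketch below uses sacrebleu for scoring rather than the CMU/NIST tooling used in the experiments:

```python
# Minimal sketch of paired bootstrap resampling for BLEU (Koehn, 2004).
import random
import sacrebleu

def paired_bootstrap(sys_a, sys_b, refs, n_samples=1000, seed=0):
    # sys_a, sys_b and refs are equal-length lists of sentence strings
    rng = random.Random(seed)
    indices = range(len(refs))
    wins = 0
    for _ in range(n_samples):
        sample = [rng.choice(indices) for _ in indices]  # resample with replacement
        bleu_a = sacrebleu.corpus_bleu([sys_a[i] for i in sample],
                                       [[refs[i] for i in sample]]).score
        bleu_b = sacrebleu.corpus_bleu([sys_b[i] for i in sample],
                                       [[refs[i] for i in sample]]).score
        if bleu_a > bleu_b:
            wins += 1
    # empirical p-value for "system A outperforms system B"
    return 1.0 - wins / n_samples
```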
The results are presented in Table 4 and confirm our statement that the QE-based approach leads to better translations according to BLEU, compared to selecting random training instances, when the size of the training subset is below 40% of the synthetic training corpus. As the amount of synthetic training data increases, the performances of the random and QE-based systems become non-significantly different, and the QE-based systems never significantly outperform the system trained on the full synthetic parallel corpus. It appears that the QE-based systems do not benefit from the remaining 80% of the QE-ranked parallel corpus. In order to verify that BLEU really reflects translation quality, a native Croatian evaluator reviewed the Newstest2013 corpus and assigned two scores at the sentence level. The evaluator had access to the English source sentence and its translation produced by three SMT systems: the random and extended QE-based systems trained on 40% of the parallel data, as well as the system trained on the full synthetic corpus. Each translation is evaluated on a 1 to 10 scale according to the fluency and adequacy criteria. The scores given by the human evaluator confirm what is observed using BLEU, and only a few instances of Newstest2013 are better translated using the QE-based approach compared to the full system. Four examples where the QE-based system outperforms the two other systems are presented in Table 5, along with their fluency and adequacy scores. The first example shows an almost perfect translation obtained with the QE-based system; only turn-out is not translated from English to Croatian. In the second example, the translation obtained with the full system is the worst, while the QE-based one is slightly better than the random one. The third and fourth examples show how the QE-based approach generates better translations than the random system with the same amount of data. We assume that the full system is not significantly different from the QE one because our approach quickly reaches a plateau, most of the good-quality synthetic data being used within the first 20%.

Conclusion
This paper has presented a first step towards the generation of synthetic parallel data for under-resourced languages using QE. We departed from the synthetic corpus approach to pivot-based MT and extended it by filtering the resulting corpus with QE. The case study presented deals with translation from English to Croatian through Slovene. We have built a synthetic English-Croatian parallel corpus using an English-Slovene parallel corpus and a Slovene-Croatian RBMT system. A QE system has been used to filter the resulting synthetic corpus. To that end, we have built a QE system for Slovene→Croatian that estimates sentence-level TER scores. The sentence pairs of the English-Croatian synthetic corpus are then ranked according to the scores estimated by the QE model, and variable-size subsets are used to train SMT systems. We show a significant improvement of the translation quality at p ≤ 0.01 using the QE-based approach compared to a random selection of training instances. However, the difference between these two setups becomes statistically insignificant when the synthetic training data subset exceeds 20% of the available parallel data. Also, the QE-based approach does not significantly outperform an SMT system trained on the full synthetic corpus.
We assume that further improvements of the QE system, based on the extraction of a larger diversity of features and on automatic feature selection, could lead to further improvements of the SMT system. Improving the translation quality of the Slovene-Croatian RBMT output or using a larger English-Slovene parallel corpus would also impact the results obtained in this study, and more experiments are required to establish the robustness of our approach. As future work, we would like to investigate the use of a more diverse feature set containing linguistic information, such as part-of-speech and syntax, which has been shown to perform well in recent QE studies. Several aspects of the QE setup are still unclear, for instance the performance of individual features or feature subsets. It is possible that some features are noisy or redundant, which motivates an automatic feature evaluation and selection approach.

Figure 1: Smoothed distributions of reference and predicted TER scores with the concatenated data setup (all) using the two feature sets.
Figure 2: BLEU scores obtained by the random and the two QE setups on the two test sets, depending on training data subset sizes.
Figure 3: TER scores obtained by the random and the two QE setups on the two test sets, depending on training data subset sizes.
Figure 4: METEOR scores obtained by the random and the two QE setups on the two test sets, depending on training data subset sizes.

Table 1: Number of sentences in each configuration for the three different corpora used in our experiments. The column all is the concatenation of the three other corpora.

Table 3: Number of sentences and words in the training and testing data used for the SMT system.
                            Sentences  Words
Train  English              621k       16.5M
       Slovene              621k       14.2M
Test   Newstest2013 (source)  1k       19.4k
       SETimes (source)       2k       51.5k

Table 5: Examples of source sentences and their translations obtained with the systems trained on the full synthetic corpus (noted Full), on 20% of the synthetic data extracted randomly (noted Random) and with the extended QE approach (noted QE). Adequacy (A) and fluency (F) scores are given per translation.

Source: one thing is certain : these new provisions will have a negative impact on voter turn-out .
  Random (A=5, F=3): jedno je izvjesno : tim novim odredbama će imati okrnitve udeleženost na izborima .
  QE (A=8, F=4): jedno je izvjesno : tim novim odredbama će imati negativan utjecaj na glasačko turn-out .
  Full (A=5, F=3): jedno je izvjesno : tim novim odredbama će imati okrnitve udeleženost na izborima .

Source: cigarettes are linked to 85 % of lung cancer cases .
  Random (A=5, F=4): cigarete su povezane sa 85 % pljučnega rakavih slučaja .
  QE (A=7, F=7): cigarete su povezane sa 85 % pljučnega slučaja raka .
  Full (A=3, F=2): cigaretami navezujeta do 85 % pljučnega rakavih nepooblaščenega .

Source: however , in this vital area , much remains to be done .
  Random (A=4, F=3): ali , u tom vitalnem cromane još učiniti .
  QE (A=7, F=5): ali , u tom ključnom području , što još treba učiniti .
  Full (A=6, F=5): ali , u tom ključnom području , dosta postoriti .

Source: i am a hero of the last century , when culture meant something .
  Random (A=5, F=6): ja sam junak iz posljednjih stoljeća , kad je kultura u mislima .
  QE (A=8, F=7): ja sam junak iz prošloga stoljeća , kad je kulturu značio nešto .
  Full (A=6, F=8): ja sam junak iz prošloga stoljeća , kad je kultura u mislima .
1 http://ipsc.jrc.ec.europa.eu/index.php?id=784
2 http://bookshop.europa.eu
3 http://www.ciklopea.com
4 https://svn.code.sf.net/p/apertium/svn/trunk/apertium-hbs-slv/
5 TER COMpute (TERCOM) Java code, version 0.7.25
8 http://nlp.ffzg.hr/resources/corpora/setimes-hr
9 http://www.ark.cs.cmu.edu/MT/
10 http://www.itl.nist.gov/iad/mig/tests/mt/2009/

Acknowledgements
The research leading to these results has received fund-

References
Avramidis, E. (2013). Sentence-level ranking with quality estimation. Machine Translation, 27(3-4):239-256.
Banerjee, P., Rubino, R., Roturier, J., and van Genabith, J. (2013). Quality estimation-guided data selection for domain adaptation of SMT. In Machine Translation Summit XIV, pages 101-108.
Bertoldi, N., Barbaiani, M., Federico, M., and Cattoni, R. (2008). Phrase-Based Statistical Machine Translation with Pivot Languages. In Proc. of the International Workshop on Spoken Language Translation, pages 143-149, Hawaii, USA.
Blatz, J., Fitzgerald, E., Foster, G., Gandrabur, S., Goutte, C., Kulesza, A., Sanchis, A., and Ueffing, N. (2003). Confidence Estimation for Machine Translation. In JHU/CLSP Summer Workshop Final Report.
Bojar, O., Buck, C., Callison-Burch, C., Federmann, C., Haddow, B., Koehn, P., Monz, C., Post, M., Soricut, R., and Specia, L. (2013). Findings of the 2013 Workshop on Statistical Machine Translation. In WMT, pages 1-44.
Callison-Burch, C., Koehn, P., Monz, C., Post, M., Soricut, R., and Specia, L. (2012). Findings of the 2012 workshop on statistical machine translation. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 10-51.
Chang, C.-C. and Lin, C.-J. (2011). LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):27.
Forcada, M. L., Ginestí-Rosell, M., Nordfalk, J., O'Regan, J., Ortiz-Rojas, S., Pérez-Ortiz, J. A., Sánchez-Martínez, F., Ramírez-Sánchez, G., and Tyers, F. M. (2011). Apertium: a free/open-source platform for rule-based machine translation. Machine Translation, 25(2):127-144.
Gandrabur, S. and Foster, G. (2003). Confidence Estimation for Translation Prediction. In CoNLL, pages 95-102.
Gispert, A. D. and Mariño, J. B. (2006). Statistical machine translation without parallel corpus: bridging through Spanish. In Proceedings of the LREC 5th Workshop on Strategies for Developing Machine Translation for Minority Languages, pages 65-68.
Koehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico, M., Bertoldi, N., Cowan, B., Shen, W., Moran, C., Zens, R., et al. (2007). Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, pages 177-180.
Koehn, P. (2004). Statistical significance tests for machine translation evaluation. In EMNLP, pages 388-395.
Koehn, P. (2005). Europarl: A parallel corpus for statistical machine translation. In MT Summit, volume 5.
Lavie, A. and Denkowski, M. J. (2009). The METEOR metric for automatic evaluation of machine translation. Machine Translation, 23(2-3):105-115.
Ljubešić, N. and Erjavec, T. (2011). hrWaC and slWaC: Compiling web corpora for Croatian and Slovene. In Habernal, I. and Matousek, V., editors, Text, Speech and Dialogue - 14th International Conference, TSD 2011, Pilsen, Czech Republic, Lecture Notes in Computer Science, pages 395-402. Springer.
Okita, T., Rubino, R., and van Genabith, J. (2012). Sentence-Level Quality Estimation for MT System Combination. In ML4HMT-12 Workshop, page 55.
Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. (2002). BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, pages 311-318, Stroudsburg, PA, USA. Association for Computational Linguistics.
Quirk, C. (2004). Training a Sentence-Level Machine Translation Confidence Measure. In LREC, pages 825-828.
Raybaud, S., Langlois, D., and Smaïli, K. (2011). "This Sentence is Wrong." Detecting Errors in Machine-Translated Sentences. Machine Translation, pages 1-34.
Rubino, R., Foster, J., Kaljahi, R. S. Z., Roturier, J., and Hollowood, F. (2013). Estimating the quality of translated user-generated content. In 6th International Joint Conference on Natural Language Processing (IJCNLP), pages 1167-1173.
Sánchez Martínez, F. (2011). Choosing the Best Machine Translation System to Translate a Sentence by Using Only Source-language Information. In European Association for Machine Translation, pages 97-104.
Shah, K., Cohn, T., and Specia, L. (2013). An investigation on the effectiveness of features for translation quality estimation. In Machine Translation Summit XIV, pages 167-174.
Snover, M., Dorr, B., Schwartz, R., Micciulla, L., and Makhoul, J. (2006). A study of translation edit rate with targeted human annotation. In Proceedings of the Association for Machine Translation in the Americas, pages 223-231.
Soricut, R., Bach, N., and Wang, Z. (2012). The SDL Language Weaver Systems in the WMT12 Quality Estimation Shared Task. In WMT, pages 145-151.
Specia, L., Turchi, M., Cancedda, N., Dymetman, M., and Cristianini, N. (2009). Estimating the Sentence-Level Quality of Machine Translation Systems. In EAMT, pages 28-35.
Stolcke, A., Zheng, J., Wang, W., and Abrash, V. (2011). SRILM at sixteen: Update and outlook. In Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop, page 5.
Ueffing, N., Macherey, K., and Ney, H. (2003). Confidence Measures for Statistical Machine Translation. In MT Summit.
Utiyama, M., Finch, A., Okuma, H., Paul, M., Cao, H., Yamamoto, H., Yasuda, K., and Sumita, E. (2008). The NICT/ATR Speech Translation System for IWSLT 2008. In Proc. of the International Workshop on Spoken Language Translation, pages 77-84, Hawaii, USA.
Wu, H. and Wang, H. (2009). Revisiting pivot language approach for machine translation. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 154-162, Suntec, Singapore. Association for Computational Linguistics.
8,201,148
An Entity-centric Approach for Overcoming Knowledge Graph Sparsity
Automatic construction of knowledge graphs (KGs) from unstructured text has received considerable attention in recent research, resulting in the construction of several KGs with millions of entities (nodes) and facts (edges) among them. Unfortunately, such KGs tend to be severely sparse in terms of number of facts known for a given entity, i.e., have low knowledge density. For example, the NELL KG consists of only 1.34 facts per entity. Unfortunately, such low knowledge density makes it challenging to use such KGs in real-world applications. In contrast to best-effort extraction paradigms followed in the construction of such KGs, in this paper we argue in favor of ENTIty Centric Expansion (ENTICE), an entity-centric KG population framework, to alleviate the low knowledge density problem in existing KGs. By using ENTICE, we are able to increase NELL's knowledge density by a factor of 7.7 at 75.5% accuracy. Additionally, we are also able to extend the ontology discovering new relations and entities.
[ 1455080, 74065, 14068874, 10318045 ]
An Entity-centric Approach for Overcoming Knowledge Graph Sparsity
Manjunath Hegde (manjunath@ssl.serc.iisc.in) and Partha Talukdar, Indian Institute of Science
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, September 2015. Copyright Association for Computational Linguistics.

This paper describes ENTICE, an entity-centric framework for populating knowledge graphs. Automatic construction of knowledge graphs (KGs) from unstructured text has received considerable attention in recent research, resulting in the construction of several KGs with millions of entities (nodes) and facts (edges) among them. Unfortunately, such KGs tend to be severely sparse in terms of the number of facts known for a given entity, i.e., they have low knowledge density. For example, the NELL KG consists of only 1.34 facts per entity. Such low knowledge density makes it challenging to use these KGs in real-world applications. In contrast to the best-effort extraction paradigms followed in the construction of such KGs, in this paper we argue in favor of ENTIty Centric Expansion (ENTICE), an entity-centric KG population framework, to alleviate the low knowledge density problem in existing KGs. By using ENTICE, we are able to increase NELL's knowledge density by a factor of 7.7 at 75.5% accuracy. Additionally, we are also able to extend the ontology, discovering new relations and entities.

Introduction
Over the last few years, automatic construction of knowledge graphs (KGs) from web-scale text data has received considerable attention, resulting in the construction of several large KGs such as NELL (Mitchell et al., 2015) and Google's Knowledge Vault (Dong et al., 2014). These KGs consist of millions of entities and facts involving them. While measuring the size of a KG in terms of the number of entities and facts is helpful, these counts don't readily capture the volume of knowledge needed in real-world applications. When such a KG is used in an application, one is often interested in the known facts for a given entity, and not necessarily in the overall size of the KG. In particular, knowing the average number of facts per entity is quite informative. We shall refer to this as the knowledge density of the KG.

Table 1: Any new fact involving a source entity from a Knowledge Graph (i.e., facts of the form entity1-relation-entity2 where entity1 is already in the KG) can be classified into one of the four extraction classes shown above. Most KG population techniques tend to focus on extracting facts of the KR-KE class. ENTICE, the entity-centric approach proposed in this paper, is able to extract facts of all four classes.

Low knowledge density (or high sparsity) in automatically constructed KGs has been recognized in recent research (West et al., 2014). For example, the NELL KG has a knowledge density of 1.34. Such low knowledge density puts significant limitations on the utility of these KGs. Construction of such KGs tends to follow a batch paradigm: the knowledge extraction system makes a full pass over the text corpus, extracting whatever knowledge it finds, and finally aggregates all extractions into a graph. Clearly, such a best-effort extraction paradigm has proved to be inadequate to address the low knowledge density issue mentioned above.
We refer to such a paradigm as best-effort since its attention is divided equally among all possible entities. Recently, a few entity-centric methods have been proposed to increase knowledge density in KGs (Gardner et al., 2013; Gardner et al., 2014). In contrast to the best-effort approaches mentioned above, these entity-centric approaches aim at increasing knowledge density for a given entity. A new fact involving the given entity can belong to one of the four types shown in Table 1. Unfortunately, these densifying techniques only aim at identifying instances of known relations among entities already present in the KG, i.e., they fall into the KR-KE type of Table 1. In this paper we propose ENTIty Centric Expansion (ENTICE), an entity-centric knowledge densifying framework which, given an entity, is capable of extracting facts belonging to all four types shown in Table 1. By using ENTICE, we are able to increase NELL's knowledge density by a factor of 7.7 1, while achieving 75.5% accuracy. Our goal here is to draw attention to the effectiveness of entity-centric approaches with bigger scope (i.e., covering all four extraction classes in Table 1) towards improving knowledge density, and to show that even relatively straightforward techniques can go a long way in alleviating low knowledge density in existing state-of-the-art KGs. ENTICE code is available at: https://github.com/malllabiisc/entity-centrickb-pop

Related Work
Open Information Extraction (OIE) systems (Yates et al., 2007; Fader et al., 2011; Schmitz et al., 2012) aim at extracting textual triples of the form noun phrase-predicate-noun phrase. While such systems aim for extraction coverage, and because they operate in an ontology-free setting, they don't directly address the problem of improving knowledge density in ontological KGs such as NELL. However, OIE extractions provide a suitable starting point, which is exploited by ENTICE. Galárraga et al. (2014) address the problem of normalizing (or canonicalizing) OIE extractions, which can be considered one of the components of ENTICE (see Section 3.3). As previously mentioned, recent proposals for improving the density of KGs, such as those reported in (Gardner et al., 2013; Gardner et al., 2014), focus on extracting facts of only one of the four extraction classes mentioned in Table 1, viz., KR-KE. The KBP challenge (Surdeanu, 2013) also focuses on extracting facts while keeping the relation set fixed, i.e., it addresses the KR-KE and KR-NE extraction classes. A method to improve knowledge density in KGs by using search engine query logs and a question answering system is presented in (West et al., 2014). The proprietary nature of the datasets and tools used in this approach limits its applicability in our setting. ENTICE aims to improve knowledge density by extracting facts from all four extraction classes, i.e., for a given entity, it extracts facts involving known relations, identifies potentially new relations that might be relevant for this entity, and establishes such relations between the given entity and other known as well as new entities, all in a single system. While various parts of this problem have been studied in isolation in the past, ENTICE is, to the best of our knowledge, the first system that addresses the complete problem in a single framework.

ENTIty Centric Expansion (ENTICE)
The overall architecture and dataflow within ENTICE are shown in Figure 1. We describe each of the components in the sections below.
Data Preprocessing
Given the source entity, documents relevant to it are downloaded by issuing queries against Google. In order to make the query specific, especially in the case of ambiguous entities, a few keywords are also added to the query. In the experiments in this paper, the category is used as the keyword. For example, for the entity Albert Einstein from the scientist category, the query is "Albert Einstein scientist". The top 20 documents returned by the search engine are downloaded and processed further. Text is extracted from the raw downloaded documents using regex patterns, HTML tag matching, and the Boilerpipe tool 2.

Triple Extraction
The text of each document obtained in the previous step is processed with the Stanford CoreNLP toolkit (Manning et al., 2014) for tokenization, coreference resolution, and dependency parsing. Tokenized and coreference-resolved sentences are then passed through the OpenIE v4 system 3 to extract (noun phrase, predicate, noun phrase) triples. Multiple and overlapping triples from the same sentence are permitted. A length filter is then applied, eliminating triples whose predicate is longer than 6 tokens or whose noun phrases are longer than 7 tokens.

Noun and Relation Phrase Normalization
Noun phrases (NPs) and relation phrases obtained from the previous step are normalized (or canonicalized) in this step. The canopy clustering technique proposed in (Galárraga et al., 2014) is used for both noun phrase and relation phrase clustering. Initial clustering is done over the unlinked noun phrases in the triples. Note that since we work in an entity-centric manner, one of the two NPs in each triple is already connected to the knowledge graph, and hence is considered linked. To cluster noun phrases, we first construct a canopy for each word appearing in a noun phrase. For example, for the noun phrase Albert Einstein, we create two canopies, viz., a canopy for Albert and another for Einstein, and add Albert Einstein to both. Grouping the noun phrases inside each canopy is the next step of the clustering phase. Noun phrase similarity is calculated from the similarity of the words in the noun phrases, where word similarity is either direct string matching or the Gensim similarity score 4, which internally uses word2vec embeddings (Mikolov et al., 2013). After calculating the pairwise similarities of the noun phrases, hierarchical clustering is carried out to group the noun phrases inside each canopy, with a threshold score as the stopping criterion. At the end of this process, we have canopies and groups of noun phrases inside them. Since a noun phrase can belong to more than one canopy, groups across canopies are merged if their similarity exceeds a certain threshold. After this, each group contains facts which have similar noun phrases and different (or the same) relation phrases. The facts are then clustered again based on the similarity of the relation phrase; the relation phrase similarity calculation resembles the one used for noun phrases, as described above.
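A minimal sketch of the canopy construction and within-canopy grouping just described; plain word overlap stands in for the Gensim word2vec similarity, greedy single-link merging stands in for hierarchical clustering, and the threshold value is illustrative:

```python
# Minimal sketch of noun phrase canopy clustering.
from collections import defaultdict
from itertools import combinations

def build_canopies(noun_phrases):
    # one canopy per word; an NP joins the canopy of every word it contains
    canopies = defaultdict(set)
    for np in noun_phrases:
        for word in np.lower().split():
            canopies[word].add(np)
    return canopies

def np_similarity(a, b):
    # fraction of shared words (exact string match); a word2vec-based
    # similarity could be substituted here for soft matching
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def group_canopy(canopy, threshold=0.5):
    # greedy single-link grouping of the NPs inside one canopy
    groups = [{np} for np in canopy]
    merged = True
    while merged:
        merged = False
        for g1, g2 in combinations(groups, 2):
            if any(np_similarity(a, b) >= threshold for a in g1 for b in g2):
                groups.remove(g1); groups.remove(g2); groups.append(g1 | g2)
                merged = True
                break
    return groups
```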
After this, each group will contain facts which have similar noun phrases and different (or the same) relation phrases. The facts are then clustered again based on the similarity of the relation phrase; the relation phrase similarity calculation resembles the one used for noun phrases, as described above. After this triple clustering step, the best representative triple for each cluster is selected based on a few rules. We consider the structure of the POS tags in the noun phrases of a triple as one of the criteria. Secondly, if both noun phrases in the triple are linked to the knowledge graph, then the triple is more likely to become the representative tuple of the cluster. Also, if the NPs present in the triple are frequent in the cluster, then the corresponding triple is more likely to become the representative.

Integrating with Knowledge Graph

The set of normalized triples from the previous step is linked with the Knowledge Graph, whenever possible, in this step. For a given normalized triple, the following steps are performed as part of linking. First, the category of each noun phrase in the triple is obtained based on string matching. In case of no match, refinements such as dropping adjectives or matching on only part of the noun phrase are applied for rematching. Next, the relation phrase is mapped to an existing predicate in the KG based on the extraction patterns in the metadata of the target relation (e.g., NELL and many other KGs have such metadata available). Candidate predicates are chosen from the above mapped predicates based on the category signature of the two noun phrases (i.e., entity1 and entity2). This is possible since all the predicates in NELL have a type signature defined in the metadata. The frequency of the relation phrase in the metadata is used as a criterion to select a candidate from multiple predicates. If such category-signature-based mapping is not possible, then the predicate is listed as a new relation, and the corresponding triple is marked as belonging to either the NR-KE or NR-NE extraction class, depending on whether the target entity is already present in the KG or not.

Experiments

In order to evaluate the effectiveness of ENTICE, we apply it to increase knowledge density for 100 randomly selected entities from each of the following five NELL categories: Scientist, Universities, Books, Birds, and Cars. For each category, a random subset of extractions in that category was evaluated using Mechanical Turk. To improve the reliability of the evaluation, each fact was evaluated by 3 workers. Workers were asked to classify each fact as correct, incorrect, or can't say. Only those facts classified as correct by 2 or more evaluators were considered correct facts.

Main Result: Experimental results comparing knowledge densities in NELL and after application of ENTICE, along with the accuracy of extractions, are presented in Table 2. From this, we observe that ENTICE is able to improve knowledge density in NELL by a factor of 7.7 while maintaining 75.5% accuracy. Sample extraction examples and accuracy per extraction class are presented in Table 3 and Table 4, respectively.

Noun and Relation Phrase Normalization: We didn't perform any intrinsic evaluation of the entity and relation normalization step. However, in this section, we provide a few anecdotal examples to give a sense of the output quality from this step. We observe that the canopy clustering algorithm for entity and relation normalization is able to cluster together facts with somewhat different surface representations. For example, the algorithm came up with the following cluster with two facts: {(J. Willard Milnor, was awarded, 2011 Abel Prize); (John Milnor, received, Abel Prize)}. It is encouraging to see that the system is able to put J. Willard Milnor and John Milnor together, even though they have somewhat different surface forms (only one word of overlap). Similarly, the relation phrases was awarded and received are also considered to be equivalent in the context of these beliefs.

Integrating with Knowledge Graph: Based on evaluation over a random sample, we find that entity linking in ENTICE is 92% accurate, while relation linking is about 70% accurate.
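As a rough illustration of the signature-based relation linking described above, the sketch below matches a relation phrase against extraction patterns, filters candidates by the category signature of the two entities, and breaks ties by pattern frequency. The metadata structure, predicate names, and counts are all invented for the example; NELL's actual metadata format differs.

```python
# Hypothetical NELL-style relation metadata: each predicate carries surface
# extraction patterns (with frequency counts) and a (domain, range) category
# signature. The entries and numbers below are invented.
RELATION_METADATA = {
    "atDate": {
        "patterns": {"was born in": 300, "happened in": 150},
        "signature": ("everything", "date"),
    },
    "hasSpouse": {
        "patterns": {"married": 80},
        "signature": ("person", "person"),
    },
}

def link_relation(rel_phrase, cat1, cat2):
    """Map a relation phrase to a KG predicate: (1) match the phrase against
    extraction patterns, (2) keep predicates whose category signature fits
    the categories of the two noun phrases, (3) break ties by frequency."""
    candidates = []
    for pred, meta in RELATION_METADATA.items():
        if rel_phrase in meta["patterns"]:
            dom, rng = meta["signature"]
            if dom in (cat1, "everything") and rng in (cat2, "everything"):
                candidates.append((meta["patterns"][rel_phrase], pred))
    if not candidates:
        return None  # a potentially new relation (NR-KE / NR-NE classes)
    return max(candidates)[1]

print(link_relation("was born in", "person", "date"))  # -> atDate
```

Returning None corresponds to the case where the predicate is listed as a new relation, as in the NR-KE and NR-NE classes above.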
In the entity linking stage, adjectives present in a noun phrase (NP) were ignored while matching the noun phrase to entities in the knowledge graph (the NELL KB in this case). In case the whole NP didn't find any match, part of the NP was used to retrieve its category, if any. For example, in (Georg Waldemar Cantor, was born in, 1854), the NP Georg Waldemar Cantor was mapped to the category person using his last name, and 1854 to the category date. The relation phrase "was born in" maps to many predicates in NELL's relational metadata. The NELL predicate AtDate was selected based on the rule that the category signature of the predicate must match the categories of the noun phrases present in the triple. It also has the highest frequency count for the relational phrase in the metadata. We observed that relation mapping has lower accuracy for two reasons: first, errors in determining the right categories of the NPs present in a triple; and second, the higher ambiguity involving relation phrases in general, i.e., a single relation phrase usually matches many relation predicates in the ontology.

Conclusion

This paper presents ENTICE, a simple but effective entity-centric framework for increasing knowledge densities in automatically constructed knowledge graphs. We find that ENTICE is able to significantly increase NELL's knowledge density by a factor of 7.7 at 75.5% accuracy. In addition to extracting new facts, ENTICE is also able to extend the ontology. Our goal in this paper is twofold: (1) to draw attention to the effectiveness of entity-centric approaches with bigger scope (i.e., covering all four extraction classes in Table 1) towards improving knowledge density; and (2) to demonstrate that even relatively straightforward techniques can go a long way in alleviating low knowledge density in existing state-of-the-art KGs. While these initial results are encouraging, we hope to apply ENTICE to other knowledge graphs, and also to experiment with other normalization and entity linking algorithms as part of future work.

Figure 1: Dataflow and architecture of ENTICE. See Section 3 for details.

Table 2: Knowledge densities of five categories in NELL and after application of ENTICE, along with resulting accuracy. We observe that overall, ENTICE is able to increase knowledge density by a factor of 7.7 at 75.5% accuracy. This is our main result.

Table 3: Facts corresponding to an entity from the scientists domain in NELL as well as those extracted by ENTICE. While NELL contained only one fact for this entity, ENTICE was able to extract 15 facts for this entity, only 3 of which are shown here.

Entity Name | All facts in NELL | Sample facts extracted by ENTICE | Extraction Class
George Paget Thomson | (George Paget Thomson, isInstanceOf, scientist) | (Sir George Thomson, isFellowOf, Royal Society) | NR-KE
 | | (George Thomson, hasSpouse, Kathleen Buchanan Smith) | KR-NE
 | | (George Paget Thomson, diedOn, September 10) | KR-KE

Table 4: Accuracy breakdown over ENTICE extractions for each of the four extraction classes in Table 1. For each class, the cells give correct facts / wrong facts / accuracy (%).

Category | KR-KE | KR-NE | NR-KE | NR-NE
Scientists | 57 / 10 / 85.07 | 61 / 8 / 88.40 | 14 / 3 / 82.35 | 9 / 2 / 81.81
Cars | 68 / 35 / 66.01 | 58 / 21 / 73.41 | 9 / 5 / 64.28 | 5 / 0 / 100
Universities | 52 / 30 / 63.41 | 68 / 20 / 77.27 | 9 / 2 / 81.81 | 12 / 4 / 75
Books | 78 / 24 / 76.47 | 79 / 12 / 86.81 | 2 / 0 / 100 | 6 / 1 / 85.71
Birds | 67 / 29 / 69.79 | 46 / 19 / 70.76 | 15 / 4 / 78.94 | 8 / 6 / 57.14
Overall | 322 / 128 / 71.55 | 312 / 80 / 79.59 | 49 / 14 / 77.77 | 40 / 13 / 75.47
For each category, approximately 200 extractions were evaluated using Mechanical Turk.
1 Measured with respect to the five categories experimented with in the paper. See Section 4 for details.
2 Boilerpipe: http://code.google.com/p/boilerpipe
3 OpenIEv4: http://knowitall.github.io/openie/
4 Gensim: https://github.com/piskvorky/gensim/

Acknowledgment

This work is supported in part by a gift from Google. Thanks to Uday Saini for carefully reading a draft of the paper.

References

Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: A web-scale approach to probabilistic knowledge fusion. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.

Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1535-1545. Association for Computational Linguistics.

Luis Galárraga, Geremy Heitz, Kevin Murphy, and Fabian M. Suchanek. 2014. Canonicalizing open knowledge bases. In Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, pages 1679-1688. ACM.

Matt Gardner, Partha Pratim Talukdar, Bryan Kisiel, and Tom Mitchell. 2013. Improving learning and inference in a large knowledge-base using latent syntactic cues.

Matt Gardner, Partha Pratim Talukdar, Jayant Krishnamurthy, and Tom Mitchell. 2014. Incorporating vector space similarity in random walk inference over knowledge bases.

Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111-3119.

T. Mitchell, W. Cohen, E. Hruschka, P. Talukdar, J. Betteridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, J. Krishnamurthy, et al. 2015. Never-ending learning. In Proceedings of AAAI.

Michael Schmitz, Robert Bart, Stephen Soderland, Oren Etzioni, et al. 2012. Open language learning for information extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 523-534. Association for Computational Linguistics.

Mihai Surdeanu. 2013. Overview of the TAC2013 knowledge base population evaluation: English slot filling and temporal slot filling. In Proceedings of the Sixth Text Analysis Conference (TAC 2013).

Robert West, Evgeniy Gabrilovich, Kevin Murphy, Shaohua Sun, Rahul Gupta, and Dekang Lin. 2014. Knowledge base completion via search-based question answering. In Proceedings of the 23rd International Conference on World Wide Web.

Alexander Yates, Michael Cafarella, Michele Banko, Oren Etzioni, Matthew Broadhead, and Stephen Soderland. 2007. TextRunner: open information extraction on the web. In Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 25-26. Association for Computational Linguistics.
14,701,636
GLR PARSER WITH CONDITIONAL ACTION MODEL USING SURFACE PHRASAL TYPES FOR KOREAN
In this paper, we propose a new probabilistic GLR parsing method that can solve the problems of conventional methods. Our proposed Conditional Action Model uses Surface Phrasal Types (SPTs), which encode the functional word sequences of the sub-trees, to describe structural characteristics of the partial parse. The proposed GLR model outperforms previous methods by about 6-8%.
[ 2220955, 6112493 ]
GLR PARSER WITH CONDITIONAL ACTION MODEL USING SURFACE PHRASAL TYPES FOR KOREAN

Yong-Jae Kwak yjkwak@nlp.korea.ac.kr NLP Lab, Dept. of CSE, Korea University, Seoul, Korea
So-Young Park NLP Lab, Dept. of CSE, Korea University, Seoul, Korea
Hae-Chang Rim rim@nlp.korea.ac.kr NLP Lab, Dept. of CSE, Korea University, Seoul, Korea

GLR PARSER WITH CONDITIONAL ACTION MODEL USING SURFACE PHRASAL TYPES FOR KOREAN

In this paper, we propose a new probabilistic GLR parsing method that can solve the problems of conventional methods. Our proposed Conditional Action Model uses Surface Phrasal Types (SPTs), which encode the functional word sequences of the sub-trees, to describe structural characteristics of the partial parse. The proposed GLR model outperforms previous methods by about 6-8%.

Introduction

Since the first approach [Wright and Wrigley 1991] of combining a probabilistic method with the GLR technique was published, several probabilistic GLR parsers have been implemented in which probabilities are assigned to the actions of LR parsing tables by using lookaheads or LR states as simple context information [Briscoe and Carroll 1993; Kentaro et al. 1998; Ruland 2000]. These parsers do not use the stack information of the GLR parser effectively, because of the highly complex internal GLR stack. As a result, they have used relatively limited contextual information for disambiguation. [Kwak et al. 2001] have proposed a conditional action model that uses the partially constructed parse represented by the graph-structured stack as additional context. However, this method defined the sub-tree structure inappropriately. Our proposed model uses Surface Phrasal Types, which represent the structural characteristics of the sub-trees, as its additional contextual information.

Conditional Action Model (CAM) using Surface Phrasal Type (SPT)

CAM is devised based on the hypothesis that this model can actively use the rich information provided by the partially constructed parse built on the graph-structured stack, and thus estimate the probability of the shift/reduce actions more precisely [Kwak et al. 2001]. A Surface Phrasal Type (SPT) is represented by a sequence of primitive mnemonics which describes the specific types of phrases based on their terminal nodes. In this work, we use functional words for the mnemonics in SPTs. In Korean, the functional word system is highly developed at the morpheme level. Therefore, this kind of phrasal description is a meaningful way of representing the parse structure without considering the internal relations of the parse forest. Moreover, this scheme can avoid the overhead of taking care of packed nodes with local ambiguity. We represent SPTs as the corresponding mnemonic sequence (in backward order) as shown in Figure 1. We have defined mnemonic sets of SPT combinations for the production of noun phrases and verb phrases, respectively. Example mnemonic sets for both production forms are shown in Table 1. Elements in each mnemonic set consist of representatives of parts of speech (POSs) with the same syntactic (structural) function.

For the probabilistic model, we define the entire parse of the given input sentence as the sequence of actions taken until the parser reaches the accept state. Thus, the probability of the i-th action and the probability of a parse tree T are calculated as

P(a_i | SPT_{i-1}, s_{i-1}, l_i),    P(T) = ∏_{i=1}^{n} P(a_i | SPT_{i-1}, s_{i-1}, l_i)

where SPT_{i-1} represents the SPTs of the sub-trees along the reduce route, s_{i-1} indicates the number of the state nodes at the top of the stack, l_i is the lookahead symbol (POS) read by the parser, and a_i represents the i-th action. The probability of a parse tree is thus the product of all action probabilities.
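The SPT encoding itself can be illustrated with a short sketch. The POS-to-mnemonic table below is a hypothetical fragment loosely modeled on Table 1, and the string format is our own; the repetition and length caps anticipate the empirical limits of 1 and 3 reported in the Experimental Results section. This is an illustration, not the authors' code.

```python
# Hypothetical fragment of the POS-tag-to-mnemonic table, loosely modeled
# on Table 1 (e.g., the adnominal endings EFD/EFN and the genitive PD).
POS_TO_MNEMONIC = {"EFD": "ED", "EFN": "EN", "PD": "PD"}

def encode_spt(functional_pos_seq, max_repeat=1, max_len=3):
    """Encode a sub-tree's functional-word POS sequence as an SPT:
    mnemonics in backward order, with at most `max_repeat` consecutive
    copies of the same mnemonic and at most `max_len` mnemonics overall."""
    mnemonics = []
    for pos in reversed(functional_pos_seq):
        m = POS_TO_MNEMONIC.get(pos)
        if m is None:
            continue  # not a functional word we track
        if mnemonics[-max_repeat:] == [m] * max_repeat:
            continue  # cap consecutive repetitions of the same mnemonic
        mnemonics.append(m)
        if len(mnemonics) == max_len:
            break
    return "+".join(mnemonics)

# Functional words of a partial parse, in surface order:
print(encode_spt(["PD", "PD", "EFN", "EFD"]))  # -> "ED+EN+PD"
```

Each sub-tree on the reduce route would be encoded this way, and the resulting SPT strings used as conditioning context when estimating action probabilities.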
To cope with the sparse data problem when using our probabilistic model, we use a deleted interpolation method with a backing-off strategy similar to [Collins 1999].

Experimental Results

We have experimented on a Korean treebank which consists of 12,084 sentences tagged with the Korean grammar scheme of [Park et al. 1999], which has 56 CFG rules. The distribution of sentence length over the corpus is shown in Table 3. We have used 10,906 sentences for training and 1,178 sentences for testing. The average morpheme length is 22.5. For CAM, because of the sparse data problem, we have restricted the maximum continuous repetition count of the same mnemonic and the maximum length of one SPT to 1 and 3, respectively (the empirically optimal values). Our GLR parser uses the canonical SLR(1) parsing table constructed from the binary CFG entries provided by the CFG grammar. As shown in the experimental results of Table 2, our proposed model outperforms the previous models by about 6-8% (upper and lower rows show the results for training data and test data, respectively). Furthermore, the performance of our parser could be improved if it were integrated with properly lexicalized information. The results show that the functional category is an effective way of describing structural aspects of a phrase and can be used as contextual information in GLR parsing.

Figure 1: Representations of SPTs. Functional words are underlined.

Table 1: SPT mnemonic codes (partial) for NP

code | property of the produced NP | syntactic structure | POS
ED | modified by clause | verb+ED+noun | EFD
EN | transformed by ending | verb+EN | EFN
PD | genitive noun | noun+PD+noun | PD
... | ... | ... | ...

Table 2: Parsing Accuracy (%). Upper and lower rows show results on training and test data, respectively.

 | B&C 1993 | Kentaro 1998 | Kwak 2001 | Proposed Model
LR (training) | 72.02 | 74.29 | 77.23 | 83.64
LR (test) | 71.22 | 74.27 | 76.01 | 82.18
EM (training) | 2.13 | 3.81 | 6.99 | 12.94
EM (test) | 1.70 | 3.77 | 6.04 | 10.36

References

[Briscoe and Carroll 1993] Ted Briscoe and John Carroll. 1993. Generalized Probabilistic LR Parsing of Natural Language (Corpora) with Unification-Based Grammars. Computational Linguistics, 19(1), pages 25-59.

[Collins 1999] Michael Collins. 1999. Head-Driven Models for Natural Language Parsing. Ph.D. thesis, Dept. of Computer and Information Science, University of Pennsylvania.

[Kentaro et al. 1998] Inui Kentaro, Virach Sornlertlamvanich, Tanaka Hozumi and Tokunaga Takenobu. 1998. Probabilistic GLR parsing: a new formalization and its impact on parsing performance. Journal of Natural Language Processing, 5(3), pages 33-52.

[Kwak et al. 2001] Yong-Jae Kwak, So-Young Park, Hoojung Chung, Young-Sook Hwang, Sang-Zoo Lee, and Hae-Chang Rim. 2001. GLR Parser with Conditional Action Model (CAM).
In Proceedings of the 6th Natural Language Processing Pacific Rim Symposium, pages 359-366.

[Park et al. 1999] So-Young Park, Young-Sook Hwang, Hoojung Chung, Yong-Jae Kwak, and Hae-Chang Rim. 1999. A Feature-based Grammar for Korean Parsing. In Proceedings of the 5th Natural Language Processing Pacific Rim Symposium, pages 167-171.

[Ruland 2000] Tobias Ruland. 2000. A Context-Sensitive Model for Probabilistic LR Parsing of Spoken Language with Transformation-Based Postprocessing. In Proceedings of the 18th International Conference on Computational Linguistics, pages 677-683.

[Wright and Wrigley 1991] J. H. Wright and E. N. Wrigley. 1991. GLR Parsing with Probability. In Generalized LR Parsing. Kluwer Academic Publishers.
5,313,088
LUKE: AN EXPERIMENT IN THE EARLY INTEGRATION OF NATURAL LANGUAGE PROCESSING
Luke is a knowledge editor designed to support two tasks: the first is editing the classes and relations in a knowledge base. The second is editing and maintaining the semantic mapping knowledge necessary to allow a natural language interface to understand sentences with respect to that knowledge base. In order to emphasize design decisions shared between the two tasks, Luke provides facilities to concurrently debug the application and the natural language interface. Luke also makes natural language available in its own user interface. This makes it possible for a knowledge base builder to exploit natural language both as a way of locating desired concepts within the knowledge base and as a way of doing consistency checking on the knowledge base as it is being built.
[ 7697328 ]
LUKE: AN EXPERIMENT IN THE EARLY INTEGRATION OF NATURAL LANGUAGE PROCESSING

David A. Wroblewski, MCC Human Interface Laboratory, 3500 West Balcones Center Drive, Austin, Texas 78759
Elaine A. Rich, MCC Human Interface Laboratory, 3500 West Balcones Center Drive, Austin, Texas 78759

LUKE: AN EXPERIMENT IN THE EARLY INTEGRATION OF NATURAL LANGUAGE PROCESSING

Luke is a knowledge editor designed to support two tasks: the first is editing the classes and relations in a knowledge base. The second is editing and maintaining the semantic mapping knowledge necessary to allow a natural language interface to understand sentences with respect to that knowledge base. In order to emphasize design decisions shared between the two tasks, Luke provides facilities to concurrently debug the application and the natural language interface. Luke also makes natural language available in its own user interface. This makes it possible for a knowledge base builder to exploit natural language both as a way of locating desired concepts within the knowledge base and as a way of doing consistency checking on the knowledge base as it is being built.

Introduction

Luke is a knowledge base editor that has been enhanced to support entering and maintaining the semantic mappings needed by a natural language interface to a knowledge base. Thus Luke supports a team of system builders who are simultaneously building a knowledge-based program and building an associated natural language interface. It makes sense for a single tool to support both of these efforts because the efforts themselves are logically intertwined in two important ways, both of which result from the fact that the application program and its NL interface must share a single knowledge base. (This sharing is necessary because otherwise the NL system will not be able to communicate with the application.) The first way in which the two efforts Luke supports are related is that, although they produce two systems that are different and may thus place different demands on their associated knowledge bases, both must share a single such knowledge base. By supporting the early integration of the application program and the NL interface as this single knowledge base is being built, Luke helps to ensure that it will be adequate, with respect to both its content and its structure, to support both these target tasks. The second way in which the two system building tasks are related is that one can support the other. By associating natural language with concepts as they are entered into a knowledge base, Luke makes natural language available in its own interface. This makes it possible for the knowledge base builder to exploit natural language both as a way of referring to objects in the knowledge base and as a way of doing consistency checking on the objects themselves. In this paper, we will describe both what Luke does and how doing that supports this productive view of the interaction between building a knowledge-based system and building an associated natural language interface.

Background And Motivation

A Model Of Semantic Analysis

All of the following discussion is based on a model of semantic analysis similar to that proposed in (Hobbs, 1985). Under this model, syntactic and semantic analysis are done as separate operations.
The first stage of semantic analysis is a conversion to initial logical form, in which the surface content of the sentence is encoded in a set of expressions that look like logical terms, but whose predicates are taken directly from the words used in the sentence. Initial logical form captures the predicational structure of the sentence, without expressing it in terms of the knowledge base. Once produced, the expressions in initial logical form are individually translated into final logical form, which is a set of first-order terms whose predicates are those used in the application's knowledge base. The translation from initial logical form to final logical form is done via a set of rules known as semantic mappings, and it is the acquisition of these semantic mappings that is the subject of this paper 1.

1 In reality, we further subdivide the semantic mappings into mappings and compoundings. Mappings are as described above. Compoundings are rules that specify how two nouns can be compounded.

The control and exact details of semantic mappings are irrelevant for this discussion; it is enough to know that semantic mappings roughly translate from the surface form of the English input to expressions built in terms of the target knowledge base. The general form of a semantic mapping is shown below, along with several examples. A semantic mapping is a rule for translating one initial logical form into zero or more final logical forms. A semantic lexicon is then a collection of semantic mappings that specify translations for the words in the syntactic lexicon.

Generally: ilf --> flf_1, ..., flf_n

Examples:
(dog ?x) --> (canine ?x)    (1)
(make ?i ?x ?y) -->    (2)
  (creating ?i) (agent ?i ?x) (object ?i ?y) (graphic-obj ?y)

A mapping for the noun "dog" is shown in (1). This rule states that the referent of a noun phrase whose head is "dog" must be a member of the class canine. Mapping (2) shows that sortal restrictions can be included in the mapping, in this case restricting the direct object of the verb "make" to be a member of the class graphic-obj. An ILF may match the left hand side of many semantic mappings, and so ambiguity is captured in the semantic lexicon. In our model of semantic analysis, these semantic mappings are used to build a picture of what was said in the sentence by posting constraints. In fact, each semantic mapping exploits two kinds of constraints. Lexical constraints define the applicability of a mapping as a function of the words that appear in a sentence. These constraints always appear on the left hand side of a semantic mapping. Knowledge-base constraints define the applicability of a mapping as a function of the meanings of the current word, as well as the other words in a sentence. These constraints always appear on the right hand side of a semantic mapping. Viewed this way, mapping (1) constrains the referent of "a dog" (or "the dog" or any noun phrase with "dog" as its head) to be a member of the class canine, but does not specify what (if any) specialization of canine the referent might refer to. For example, it does not commit to the class schnauzer versus the class dachshund.
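A minimal sketch of how such mappings could be applied is given below, using the two example rules from the text. The tuple-based term representation and the matching procedure are our own illustration, not Lucy's implementation.

```python
# Each rule rewrites one initial-logical-form (ILF) term into a set of
# final-logical-form (FLF) terms over knowledge-base predicates. Terms
# are tuples; elements that start with "?" are variables.
MAPPINGS = [
    # (dog ?x) --> (canine ?x)                                   (rule 1)
    (("dog", "?x"), [("canine", "?x")]),
    # (make ?i ?x ?y) --> (creating ?i) (agent ?i ?x) ...        (rule 2)
    (("make", "?i", "?x", "?y"),
     [("creating", "?i"), ("agent", "?i", "?x"),
      ("object", "?i", "?y"), ("graphic-obj", "?y")]),
]

def match(pattern, term):
    # Bind pattern variables against a concrete ILF term, or return None.
    if len(pattern) != len(term) or pattern[0] != term[0]:
        return None
    bindings = {}
    for p, t in zip(pattern[1:], term[1:]):
        if p.startswith("?"):
            bindings[p] = t
        elif p != t:
            return None
    return bindings

def apply_mappings(ilf_term):
    # Every matching rule yields one translation; several matches would
    # surface as the semantic ambiguity the text mentions.
    results = []
    for pattern, outputs in MAPPINGS:
        b = match(pattern, ilf_term)
        if b is not None:
            results.append([tuple(b.get(e, e) for e in out) for out in outputs])
    return results

print(apply_mappings(("make", "e1", "user1", "circle3")))
```

Running the example produces the constraint set (creating e1), (agent e1 user1), (object e1 circle3), (graphic-obj circle3), which is exactly the kind of knowledge-base constraint posting described above.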
Past Experience

Our early attempts at porting our natural language understanding system, Lucy (Rich, 1987), consisted of "hand-crafting" a set of semantic mappings for an existing knowledge base. The application program was an intelligent advice system (Miller, 1987) that would accept questions from a user about operating a statistical analysis program and try to provide advice based on its knowledge of the program's interface and internal structure. Creating the semantic mappings was a long and tedious chore. Starting with a mostly-complete knowledge base, finding the correct semantic mappings was a matter of knowledge acquisition, in which we asked the knowledge base designers what knowledge structure a particular word might map onto. Many times this was almost as difficult for the knowledge base designers as it was for the "semanticians", since the knowledge base was quite large, and developed by several people. Often, the knowledge base designer being interviewed was not familiar with the area of the knowledge base being mapped into, and thus could not accurately answer questions, especially with respect to completeness (i.e., "this is the only class that the word could map into."). Furthermore, defining the semantic mappings often uncovered inconsistencies in the knowledge base. When this happened, it was not always immediately clear what the correct action was; we could either fix the knowledge base or live with the inconsistencies (which usually meant semantic ambiguity where none was really necessary). Even worse, there were many cases where defining any semantic mapping was problematic. In these cases, representational decisions that had already been made either precluded or made very difficult any principled mapping of English expressions into the knowledge base. This happened when information was needed to analyze a syntactic constituent (perhaps a noun phrase like "the mouse") but the referent of the constituent (the mouse icon on the screen) was not represented in the knowledge base. Thus, no semantic mapping could be written. The problem could be solved by simply introducing the relevant knowledge, but sometimes a better solution would have involved redesigning a portion of the knowledge base to represent more clearly important features of the domain. Usually this was too costly an option to consider. Finally, we quickly discovered that the dream of establishing the semantic mappings once and for all was a fallacy. Any significant knowledge base is "under construction" for a long period of time; introducing semantic mappings before the knowledge base is totally done necessarily implies maintenance of the semantic mappings in the face of a changing knowledge base. This is a paradox: on the one hand, it would be best to have a completed knowledge base before doing any semantic mapping. On the other hand, to avoid problematic semantic mappings it would be best to introduce semantic mappings and "debug" them as early as possible in the development of the knowledge base.

The Dual-Application Development Model

In order to avoid the problems mentioned in the last section, Luke endorses a dual-application model of the development process. Under such a model, there are two related applications being developed.
One is a natural language interface (NLI), responsible for forming a syntactic, semantic, and pragmatic analysis of user input, and passing the interpreted input to the domain application. The domain application, of course, could be anything. We focused on knowledge-based applications so that we could assume that a knowledge base was a part of the domain application. We assume that the natural language understanding component and the domain component both have access to the knowledge base, and that semantic analysis should be done with respect to that knowledge base. The dual-application model highlights the design interplay between the domain application and the interface. In particular, knowledge base design decisions motivated exclusively by the domain application or the NLI, without regard for the other application, are likely to be inadequate in the final, integrated system. Such ill-informed decisions might be avoided in a development environment that allows the earliest possible integration of the applications. Luke is our first attempt to provide such an environment, and is built to support the work done during early prototyping and full-scale development of an application.

Luke's Architecture

Luke is an enhanced version of a simple knowledge editor, as illustrated in Figure 1. In the discussion that follows, we will refer to this as the base editor, because it forms the foundation upon which Luke is built. All operations performed at the editor interface are translated into a series of function calls via a well-defined functional interface to the knowledge representation system. The base editor is a complete system: it can be run independently of any of the extensions described hereafter. The base editor knows nothing of the Lucy natural language understanding system.

Figure 1: Luke's Architecture

The base editor allows two types of commands: terminological and assertional commands 2. These terms are taken from (Brachman, 1983), which defines a knowledge base as consisting of two "boxes". The Tbox holds the terminological information of the knowledge base, information that defines what symbols are valid class identifiers, and what the names, arities, domains and ranges of those relations are. Brachman and Levesque liken the terminological knowledge to the "noun phrases" of the knowledge base.

2 Actually, there is at least one other type of command: management. Management commands handle such prosaic issues as saving and loading knowledge bases. While these commands will not be described in detail in this paper, the reader should be aware that a significant effort was also required to upgrade these to handle managing both the knowledge base and the semantic lexicon.

Table 1: Knowledge Editing Operations and Their Effects

Operation | Semantic Lexicon Effect
Create Class, Create Slot | New mappings possible. Old mappings may have to be refined.
Delete Class | Existing mappings may be invalid because they refer to a now nonexistent class.
Delete Slot | Some existing mappings may be invalid because they refer to a now nonexistent slot.
Attach Superclass, Detach Superclass | Some existing mappings may be invalid because inheritance paths have changed.
Rename (anything) | Existing mappings may be invalid due to renaming.

The Abox holds assertional information, described by using logical connectives such as "and", "or" and "not" and the predicates defined in the Tbox to form logical sentences. While the terminological component describes what it is possible to say, the assertional component holds a theory of the world: a set of axioms describing the valid inferences in the knowledge base. As shown in Figure 1, Luke extends the base editor by additionally maintaining a semantic lexicon. Each time an operation is performed on the knowledge base, Luke must update the semantic lexicon so that the set of semantic mappings it contains remains consistent with the updated knowledge base.
Table 1 shows some operations and their effect on the semantic lexicon. As can be seen from this table, operations that change the terminological content of the knowledge base (such as Create Class or Create Slot) may change the number or structure of the semantic mappings known. For example, consider the case of the Create Class command. By adding a new class to the knowledge base, we have extended the Tbox; since the knowledge base is now able to describe something it could not describe before, some English noun phrases that were previously uninterpretable can now be mapped into this class. Existing mappings may have to be changed, since the act of adding a class may constitute a refinement of an existing class and its associated mappings. For instance, one might add a set of subclasses of canine where none used to exist. If the current set of semantic mappings map "poodle" and "doberman" into canine, then these rules may have to be refined to map into the correct subclass. Extending the terminological component of the knowledge base extends the range of, or precision with which, syntactic constituents may be semantically analyzed. Operations that alter the Abox have less well-defined effects on the semantic lexicon. For instance, without detailed knowledge of the domain application and the domain itself, the addition of an inference rule to the knowledge base implies nothing about the possible semantic mappings or the validity of current mappings. In general, it is very difficult to use the assertional component of a knowledge base during semantic processing; for this reason, we will concentrate on terminological operations for the remainder of this paper. Luke, then, is a "base editor" extended to account for the semantic mapping side effects of knowledge editing operations. Luke reacts in predictable ways to each editing operation, based on the information shown in Table 1 (a sketch of this reaction follows below):

• New mappings possible: Luke reacts to this condition by conducting an "interview" with the user. Each interview is designed to collect the minimum information necessary to infer any new semantic mappings. In a word, the response to possible new mappings is "acquisition".

• Old mappings possibly invalid: Luke reacts to this condition by trying to identify the affected mappings and requesting that the user verify their correctness. In a word, the response to possibly invalid mappings is "verification".

Base Editor Facilities: Windows and Agendas

The Luke Window

Figure 2 shows the screen as it might typically appear during an editing session with Luke. The user is provided with a suite of inspectors to display the class hierarchy or view individual frames in detail. Each inspector provides an iconic menu of operations that can be performed on it or its contents. Components of frames in the inspectors, such as the names of slots, are mouse-sensitive and provide the main mechanism for editing the frames themselves. Also provided is an agenda of tasks to be performed. A user may manually queue up tasks to perform as reminders, annotate tasks, or refer tasks to other members of the development team. Tasks may be scheduled automatically as a side effect of various editing commands. There are two main types of tasks: verification tasks and acquisition tasks. Verification tasks are reminders to inspect some part of the knowledge base to ensure its consistency. Acquisition tasks are (typically) interviews that Luke has requested with the user.
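A rough sketch of how editing operations could drive this task scheduling, following Table 1, is given below. The operation names, the agenda representation, and the lexicon representation are all invented for the illustration; Luke's actual agenda is a frame in the knowledge base.

```python
# Hypothetical dispatch from terminological editing operations to agenda
# tasks: operations that make new mappings possible schedule an acquisition
# interview; operations that may invalidate old mappings schedule
# verification tasks, per Table 1.
ACQUISITION_OPS = {"create-class", "create-slot"}
VERIFICATION_OPS = {"delete-class", "delete-slot",
                    "attach-superclass", "detach-superclass", "rename"}

def react_to_edit(operation, target, agenda, lexicon):
    if operation in ACQUISITION_OPS:
        # "acquisition": interview the user about the new term
        agenda.append(("interview", target))
    if operation in VERIFICATION_OPS:
        # "verification": flag every mapping that mentions the edited term
        for mapping in lexicon:
            if target in mapping:
                agenda.append(("verify", mapping))

agenda = []
lexicon = [("dog", "canine"), ("puppy", "canine")]
react_to_edit("delete-class", "canine", agenda, lexicon)
print(agenda)  # [('verify', ('dog', 'canine')), ('verify', ('puppy', 'canine'))]
```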
The base editor also provides a method of delaying tasks. Some tasks, such as acquisition tasks, are started at a default time, usually immediately after the action that inspired them. The user has the option, at any point during the task, of pressing the delay key, causing the task to be stopped and an agenda item to be created for it if none already exists. Through this delaying mechanism, the user has control of when tasks are executed. The agenda is shown in the upper right inspector in Figure 2. It is implemented as a frame (an instance of the built-in class agenda), and may be inspected via the normal editing commands of the base editor. Each task is represented as an instance of the class task, and includes a description of the event that inspired it. Although the base editor makes very little use of the agenda mechanism, Luke schedules a large number of interviews and verification tasks through the agenda.

User Tasks, User Models

Luke is different from most other tools of its kind for three reasons. It provides support for both the acquisition and maintenance of semantic mappings. Because it then knows those semantic mappings, it makes natural language available in its own interface. And in order to do these things, it must assume more sophistication on the part of its users. The intended users of Luke are members of a knowledge engineering team. These people are assumed to be familiar with the content and structure of the knowledge base, or to be capable of discovering what they need to know by inspecting the knowledge base. Although they are not assumed to have an extensive linguistics background or extensive familiarity with the implementation of the semantic processing algorithms of Lucy, they are assumed to have a "qualitative model" of semantic processing (as presented earlier). Moreover, since we assume that a team of engineers will be building the applications, some with special interests or talents, tasks that might require greater linguistic sophistication may be delayed until the "linguistics specialist" can be brought in. Luke provides tools for the acquisition of semantic mappings and the maintenance of those mappings once collected. Although traditionally little attention has been paid to the latter task, we believe that it may prove to be the more important of the two; once a large base of mappings has been established, it is only practical to maintain them with tools specifically designed for that task. The next part of this section will describe the tools provided by Luke for both tasks. Then the remainder of the section will show how these mappings can be used to enhance the user interface of Luke itself.

Acquiring Semantic Mappings

The Luke acquisition modules are built with the following design guidelines:

1. Perform acquisition tasks temporally near the event that causes them.
2. Allow the user to delay acquisition at will.
3. Allow the user to specify the minimum information from which semantic mappings can be deduced.
4. Remember that people are better at verifying a proposed structure than they are at creating correct structures from scratch.
5. Try to repay the user for the work expended in the interviews by using the semantic mappings for knowledge base debugging, navigation, and consistency checking.
6. Project a correct model of semantic processing to the user throughout the acquisition process.

In the Luke environment, acquiring semantic mappings turns out to be quite simple. The scheme we use in Luke involves a three-stage process. In the first stage, Luke collects associations.
Simply put, an association is a triple of the form <word, part-of-speech, structure>. In the second stage, a set of heuristics inspects the associations and compiles them into semantic mappings. For instance, the association <"dog", noun, canine> might be built during acquisition to indicate that some noun sense of the word "dog" maps into the class canine. In the final stage, the mapping rule deduced from the association is built, presented to the user for refinement via a special mapping editor, and entered into the semantic lexicon. Occasionally, Luke uses the new mapping to inspire other mappings, such as the nominalizations of a verb. In this case, once a verb mapping is known, nominalizations of it are collected and created in the same manner, and heuristics take advantage of the fact that the new nouns are nominalizations of a verb whose mapping is known. Thus the constraints on the complements of the verb are used to generate mappings for prepositions that can be used to specify the complements of the nominalization of that verb. Although the basic acquisition technique is simple, obeying guideline 6 can be tricky. For instance, in an early version of Luke we temporally separated the interviews from the heuristic construction of associations. Further, we did not submit the mappings to the user when they were guessed. The mappings were guessed later, in a background process, usually invisible to the Luke user. Yet semantic analyses often succeeded, giving users the impression that the associations were driving the semantic analysis routines, not the semantic mappings deduced from them. With such a model of the process, the user was confused and unprepared when semantic mappings ("where did they come from?") were incorrect and had to be inspected, debugged, and edited. In the current version of Luke, the semantic mappings are presented to the user at the end of the interview, to be reviewed and edited immediately. Connecting the process of associating with the mapping creation process leads to much less confusion.
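Concretely, the second-stage compilation of associations into mapping rules might look roughly like the following sketch. Only the noun case is given in the text; the verb rule shape is our guess at the kind of rule such heuristics would emit, and the rule representation matches the illustration given earlier.

```python
# Illustrative sketch of the second acquisition stage: heuristics that
# compile <word, part-of-speech, structure> associations into mapping
# rules of the kind shown earlier. The verb heuristic is hypothetical.
def compile_association(word, pos, structure):
    if pos == "noun":
        # <"dog", noun, canine>  =>  (dog ?x) --> (canine ?x)
        return ((word, "?x"), [(structure, "?x")])
    if pos == "verb":
        # Map a verb onto an event class plus agent/object role terms.
        return ((word, "?i", "?x", "?y"),
                [(structure, "?i"),
                 ("agent", "?i", "?x"),
                 ("object", "?i", "?y")])
    raise ValueError(f"no heuristic for part of speech: {pos}")

print(compile_association("dog", "noun", "canine"))
```

In Luke, the output of such a heuristic would then be shown to the user in the mapping editor for refinement before entering the semantic lexicon.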
Managing the Semantic Lexicon

Once a semantic lexicon exists, maintaining it becomes a significant chore. During routine knowledge base editing a user may change the terminological content in such a way that existing semantic mappings become invalid. Deleting a class, for example, clearly makes any semantic mappings that mention it incorrect. If a large semantic lexicon exists, changing the terminological content of the knowledge base may entail editing a very large number of semantic mappings. Luke provides a number of tools to help manage the semantic lexicon. These tools fall roughly into two categories: those that support editing and those that aid in consistency checking. The editing tools allow a user to request all the mappings that target a specific frame, or all the mappings that map from a given surface form, via a special mappings browser. Users may edit semantic mappings at any time using the ordinary editing tools of the base editor, because semantic mappings themselves are stored as frames in the knowledge base. The biggest maintenance service Luke provides is consistency checking. When a frame is deleted, entered, or specialized in the knowledge base, or after any terminological editing operation, Luke collects all of the semantic mappings that might be affected and creates a set of tasks to verify their continuing correctness. As always, the user can choose to handle such tasks immediately, or delay them for later consideration.

Exploiting Natural Language in Luke Itself

The overall goal in building Luke is to provide a set of "power tools" (Sheils, 1983) that support the dual-application model, and Luke is our first step in that direction. One potential problem in Luke's design is increasing the overhead of building a knowledge base, since various tasks are continually scheduled for the user. This fear is mitigated by the following observations. First, the added overhead doesn't represent extra work to be done by the user, only a different time for the user to do it. If there is to be an NLI for the application, then the developer is in a "pay me now or pay me later" bind, where late payment can be very costly. Viewed this way, Luke is helping the user trade a short-term loss (interviews and verification tasks during editing) for a long-term gain (a smaller NLI development effort after the domain application is finished). Second, with the additional information provided by concurrently developing the NLI and the domain knowledge base, Luke can "pay back" the user at editing time by strategically using this information to support both extending and debugging a knowledge base. In the rest of this section we describe some of the ways in which this is done. Luke provides the Search For command, which accepts a noun phrase as its argument. Search For converts that noun phrase into a knowledge base query by using the Lucy natural language understanding system. The noun phrase is parsed and semantically analyzed using any known semantic mappings. When the resulting query is executed, the matching frames are stored into a response frame, along with information concerning which mappings were used in the interpretation process. Then the user is presented the frames in a menu. Thus, Search For provides both a way of exercising the semantic mappings and a way of retrieving frames from the knowledge base during normal editing. Note that such "retrieval by description" facilities are not usually provided in knowledge editors because they would require a sophisticated query language allowing abstraction and arbitrary user extensions. Because Luke already has access to a natural language analysis component, providing this service to the user is straightforward. Also note that such a service is vital to editing and maintaining large knowledge bases: finding a frame using just graphical displays of the class hierarchy and detailed single-frame displays does not provide any sort of "random access" capability, and finding a specific frame using only such tools can be very difficult.
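A toy sketch of the Search For flow follows. The one-word "parse", the mapping table, and the knowledge base below are hypothetical stand-ins for Lucy and the base editor's knowledge representation system, and the dictionary returned is only a cartoon of what the text calls a response frame.

```python
# Toy stand-ins: frame -> classes it belongs to, and noun -> target class.
KB = {"FIDO": {"canine"}, "BUTTON-3": {"graphic-obj"}}
MAPPINGS = {"dog": "canine", "button": "graphic-obj"}

def search_for(noun_phrase):
    """Convert a noun phrase into a KB query via the semantic mappings,
    run it, and record both the answers and the mappings used."""
    head = noun_phrase.split()[-1]        # stand-in for a real parse
    target = MAPPINGS.get(head)
    if target is None:
        return {"answers": [], "mappings-used": []}
    answers = [frame for frame, classes in KB.items() if target in classes]
    return {"answers": answers, "mappings-used": [(head, target)]}

print(search_for("the dog"))
# {'answers': ['FIDO'], 'mappings-used': [('dog', 'canine')]}
```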
Luke also provides a method of testing the analysis of entire sentences. The developer can submit a sentence for analysis to the NLI processing algorithms. The analysis of the sentence is returned as a frame in the knowledge base, recording the interpretations found and a record of the mappings used to get the interpretations. This can be further processed by a "default command loop" used to simulate the behaviour of the application program. Using this facility, it is easy for the application developer to place her/himself in the place of the application program, and to envision the sorts of responses necessary. Furthermore, the process of interviewing is a form of documentation. During an editing session, the user leaves throughout the knowledge base a "trail" of semantic hints that various customized commands can take advantage of. For instance, the Show Associated Nouns command pops up a quick menu of words associated with the frame in question, providing a handy documentation function. Finally, Luke can catch several knowledge editing mistakes that the base editor cannot. One of the most common is class duplication: unwittingly creating a class intended to represent the same set of entities as an already-existing class. Often this happens when the knowledge base is being built by a team of people or because it has grown too complex for an individual to visualize. Luke helps solve the problem using the existing semantic mappings. After associating a noun with a class, Luke warns the user of the total number of mappings for that noun and gives some indication of the frames it might map into. This simple mechanism detects many cases of class duplication.

Future Plans

At present, Luke is a useful, competent knowledge editor and provides a substrate of tools for concurrently managing the development of an application knowledge base and the NLI that will ultimately operate with it. Ultimately, we hope to make Luke itself a knowledge-based program, adding to it the heuristics that an "expert NLI engineer" might have, and expanding its role to that of an intelligent assistant. The groundwork is laid for such a step; Luke is already driven by a model of itself, the knowledge base, Lucy's algorithms, and its users. In the near term we plan to expand and refine the role that such knowledge plays in Luke's operation.

Comparison To Other Work

Luke appears to be different from previous systems of its ilk in a number of ways. Most importantly, Luke is built to support the dual-application model of development. Systems such as TEAM (Grosz, 1987), TELI (Ballard, 1986), and to a lesser degree, IRACQ (Ayuso, 1987), all aim for portability between existing, untouchable applications (usually DBMSs). These tools have generally emphasized building a database schema in order to supply the (missing) terminological component of the database. We have rejected such an approach on the grounds that it is only useful for building sentence-to-single-command translators, not for wholesale integration of an NLI with an application. Luke is an attempt to help design in the natural language interface from the start. Because of this basic assumption, Luke is more oriented toward users as sophisticated system builders than as linguistically naive end-users or "database experts". Luke users will understand some linguistics, either by educational background, hands-on experience, or special primers and training. Finally, Luke is designed to support a team of users, not a single user. Luke provides a flexible agenda and task management system that allows users to handle tasks for reviewing existing mappings, investigating potential conflicts in the semantic lexicon, and creating new mappings for new objects in the knowledge base. Such tasks can be operated on in a variety of ways, including scheduling, executing, annotating, or referring them between members of the development team.

Figure 2: The screen during a typical editing session with Luke, showing the inspectors, the agenda, and the result of a Search For query on the noun phrase "the current user" (answer: SUSAN-THE-CURRENT-USER).
Acknowledgments

The ideas in this paper are the product of the entire LINGO team. Mike Barnett designed and implemented the agenda facility described herein, and Kevin Knight designed some of the semantic debugging aids. Additionally, most of the ideas about the way this version of Luke operates sprang from a working group including Mike Barnett, Jim Barnett, Kevin Knight, and the authors.

References

Damaris M. Ayuso, Varda Shaked and Ralph M. Weischedel. (July 1987). An Environment for Acquiring Semantic Information. Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics.

B.W. Ballard and D.E. Stumberger. (1986). Semantic Acquisition in TELI: A Transportable, User-Customized Natural Language Processor. Proceedings of the 24th Annual Meeting of the Association for Computational Linguistics.

R.J. Brachman, R.E. Fikes and H.J. Levesque. (October 1983). Krypton: A Functional Approach to Knowledge Representation. IEEE Computer, Special Issue on Knowledge Representation, pp. 67-73.

B.J. Grosz, D.E. Appelt, P.A. Martin and F.C.N. Pereira. (May 1987). TEAM: An Experiment in the Design of Transportable English Interfaces. Artificial Intelligence, 32(2), 173-244.

J. Hobbs. (1985). Ontological Promiscuity. Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics.

J.R. Miller, W.C. Hill, J. McKendree, M.E.J. Masson, B. Blumenthal, L. Terveen and J. Zaback. (1987). The role of the system image in intelligent user assistance. Proceedings of INTERACT'87, Stuttgart.

E.A. Rich, J. Barnett, K. Wittenburg and D. Wroblewski. (July 1987). Ambiguity Procrastination. Proceedings of AAAI-87.

B. Sheils. (1983). Power Tools for Programmers. Datamation, pp. 131-144.
227,231,238
Cross-lingual annotation: a road map for low- and no-resource languages
This paper presents a "road map" for the annotation of semantic categories in typologically diverse languages, with potentially few linguistic resources, and often no existing computational resources. Past semantic annotation efforts have focused largely on high-resource languages, or relatively low-resource languages with a large number of native speakers. However, there are certain typological traits, namely the synthesis of multiple concepts into a single word, that are more common in languages with a smaller speech community. For example, what is expressed as a sentence in a more analytic language like English may be expressed as a single word in a more synthetic language like Arapaho. This paper proposes solutions for annotating analytic and synthetic languages in a comparable way based on existing typological research, and introduces a road map for the annotation of languages with a dearth of resources.
[ 2486369, 32260611, 7771402, 961966, 10914266 ]
Cross-lingual annotation: a road map for low- and no-resource languages

Meagan Vigus mvigus@unm.edu Department of Linguistics, University of New Mexico
Jens E. L. Van Gysel jelvangysel@unm.edu Department of Linguistics, University of New Mexico
Tim O'Gorman togorman@cs.umass.edu College of Information and Computer Sciences, University of Massachusetts Amherst
Andrew Cowell james.cowell@colorado.edu Department of Linguistics, University of Colorado Boulder
Rosa Vallejos rvallejos@unm.edu Department of Linguistics, University of New Mexico
William Croft wcroft@unm.edu Department of Linguistics, University of New Mexico

Proceedings of the 2nd International Workshop on Designing Meaning Representations, Barcelona, Spain, December 13, 2020.

This paper presents a "road map" for the annotation of semantic categories in typologically diverse languages, with potentially few linguistic resources, and often no existing computational resources. Past semantic annotation efforts have focused largely on high-resource languages, or relatively low-resource languages with a large number of native speakers. However, there are certain typological traits, namely the synthesis of multiple concepts into a single word, that are more common in languages with a smaller speech community. For example, what is expressed as a sentence in a more analytic language like English may be expressed as a single word in a more synthetic language like Arapaho. This paper proposes solutions for annotating analytic and synthetic languages in a comparable way based on existing typological research, and introduces a road map for the annotation of languages with a dearth of resources. This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/.

Introduction: Cross-linguistically informed semantic annotation

In recent years, there has been a surge of interest in annotation schemes that allow natural language texts to be parsed into semantic representations usable for information extraction, machine translation, and other downstream purposes. Good results have been achieved in automatically parsing natural language texts into Abstract Meaning Representations (Banarescu et al., 2013, AMR) and Discourse Representation Structures (Kamp and Reyle, 2013; Bos et al., 2017, DRS), among others, as demonstrated in various shared annotation tasks (Abzianidze et al., 2020; May and Priyadarshi, 2017). However, most efforts in developing and testing such annotation schemes have focused on a restricted set of (typically Indo-European) languages with large native speaker populations. For example, the shared annotation tasks reported on in Abzianidze et al. (2020) and May and Priyadarshi (2017) were all based on English. Large AMR corpora exist, to our knowledge, only for English and Mandarin, both morphologically isolating languages with comparatively little inflectional and derivational morphology. PropBank (Palmer et al., 2005) has been extended to large-scale languages with derivational morphology, such as Hindi and Arabic. But the annotation of derivational morphology relies on a thorough documentation of its role in the language, which often isn't available for low-resource languages.
For morphosyntactic annotation, this bias is less apparent: the Universal Dependencies project (de Marneffe et al., 2014) has annotated treebanks from 96 languages, representing 20 linguistic families. However, Indo-European languages are disproportionately represented (53/96 languages). In terms of native speaker populations, 62 out of the 96 UD languages have more than 1 million native speakers, belonging to the largest 6% of languages in the world (Eberhard et al., 2020). Only 24 UD languages have relatively small native speaker populations (10 UD languages are ancient languages).

This apparent bias in languages represented in computational linguistic work likely has consequences for the structure of annotation schemes. The World Atlas of Language Structures chapter on the morphological structure of verbs (Bickel and Nichols, 2013) looks at a sample of 145 languages and finds that, on average, languages express 5.52 inflectional categories within the verb. The 19 UD languages with more than 10 million native speakers that are also included in Bickel and Nichols' (2013) sample express only an average of 4 inflectional categories within the verb, exemplifying a known correlation between morphological complexity and demographic factors (Lupyan and Dale, 2010). The annotation schemes developed in the context of high-population, typically Indo-European languages may therefore not carry over well to smaller-scale, often more morphologically complex languages. Many smaller-scale languages do not have a long history of linguistic analysis, and therefore understanding of their structure may be progressing in tandem with annotation efforts.

Expanding annotation efforts to such morphosyntactically diverse languages may, apart from simply expanding typologically sound coverage of annotation efforts, improve the overall utility of annotation schemes. Cross-linguistic annotation schemes must incorporate a certain amount of flexibility in order to deal with differences in conventionalized semantic distinctions, and the morphosyntactic expression of these distinctions. This flexibility in design can also benefit monolingual annotation, by allowing for flexibility with annotators who have different levels of linguistic training. For that reason, this paper proposes solutions for extending AMR to as many languages as possible, including a "road map" for languages with few existing resources. This provides a starting point for flexible but consistent annotation of a number of semantic categories. It also describes how the cross-linguistic diversity in morphosyntax, paired with pre-existing linguistic analyses and resources, can inform the design of a flexible annotation process. This road map provides steps towards more detailed semantic annotation, as the linguistic analysis of the language progresses and computational resources are created, in order to eventually arrive at the same level of specificity in annotations as in high-resource languages. The creation of a comparable cross-linguistic semantic annotation scheme is of course a larger topic than can be covered in a single paper; this paper sets forth a general approach for dealing with differences in linguistic properties and resource availability (the road map), and specific annotation solutions for certain semantic categories and morphosyntactic phenomena (e.g., synthesis). In this paper, examples are drawn from three no- or low-resource languages: Sanapaná, Kukama, and Arapaho.
Sanapaná (Enlhet-Enenlhet) has about 1000 native speakers living in Paraguay. Aside from an ongoing documentation project (Van Gysel, 2020), there are only exploratory analyses of the morphology (Gomes, 2013; Van Gysel, 2017). Kukama (Tupian) has about 1000 native speakers living in Peru. Existing linguistic resources include a descriptive grammar (Vallejos, 2016), a Kukama/Spanish bilingual dictionary (Vallejos and Amías, 2015), translated and morphologically analyzed texts (Vallejos, 2014), and some pedagogical materials. Arapaho (Algonquian) is spoken by two communities in the United States, the Northern Arapaho in Wyoming and the Southern Arapaho in Oklahoma. Among the Northern Arapaho, there are around a hundred native speakers, and several hundred with passive knowledge of the language. Linguistic resources include a grammar (Cowell and Moss Sr, 2008), an online lexical database with detailed part-of-speech labelling and argument structure information, and an annotated text database of nearly 100,000 sentences with accompanying audio and/or video.

Cross-linguistic annotation: Typological issues

Certain typological issues arise when constructing a semantic annotation scheme that can, in theory, be applied to any language. Three general types of issues are described here.

First, some types of morphosyntactic differences do not hinder the annotation of semantic information, and can therefore largely be ignored in a semantic annotation scheme. For example, languages may indicate grammatical roles via constituent order or case affixation of argument phrases, but argument phrases in both types of languages can be annotated for their semantic roles in the same way.

Next, there are major typological differences in the conventionalized semantic distinctions that languages make in their grammar, i.e. how languages 'carve up' conceptual space. For example, some languages distinguish only SINGULAR from NON-SINGULAR nominal number, other languages distinguish SINGULAR, DUAL, TRIAL, and PLURAL (more than three), still other languages have a FEW (including singular) vs. MANY nominal number system (Corbett, 2000, chapter 2). For these types of semantic differences, the use of lattices of category values has been proposed to allow flexible but consistent annotation (Van Gysel et al., 2019); we adopt this approach and incorporate it into the road map in §3.

Finally, languages differ in terms of how concepts are packaged into words and sentences. As discussed in §1, languages that are more synthetic, packaging many morphemes/concepts into a single 'multiconcept' word, have not been well-represented in past annotation efforts. This also presents a practical issue: for languages at an earlier stage of documentation, it may not be possible for annotators to morphologically decompose multiconcept words. Therefore, the issue of how to maintain consistent annotation across both more analytic and more synthetic languages will be the main focus of this section.

Even 'word' does not have a consistent definition across languages. Most languages have a language-internal concept of 'word', at least as a cognitively salient unit of the language (Bolinger, 1963). But these units do not share consistent linguistic traits across languages, nor is there a widely-accepted definition of what should constitute a word across languages (Dixon et al., 2002; but see Zingler, 2020).
One predicate instead of two

In many languages, a single verb with derivational morphology may express what is expressed by two verbal words (e.g., main verb, complement, auxiliary) in English and other analytic languages. In general, we treat derivational morphology as a single predicate along with the verb to which it attaches. Derivational morphology may express phasal aspect, as in (1) from Arapaho.¹ The aspectual marking, whether an affix or a separate word, is not annotated as a separate predicate, since it selects a phase of the event.²

(1) ceesisnoo'oebiicitiit.
ceesis-noo'oe-biicitii-t

Derivational morphology may also express an external causing event, shown in (2) from Kukama. For causatives, either a single event with causative semantics is identified or two events are identified, one for the causing event and one for the caused event. This is based on whether negation can apply to the causing event and caused event separately. For derivational morphology, as in Kukama, negation would scope over both events, meaning that it is construed as a single event and annotated as such. In English, causative auxiliaries can be negated separately from the caused event (e.g., Grandmother didn't make the kid drink / Grandmother made the kid not drink); therefore, two events are identified.

For modality, as shown in (3) from Arapaho, we apply semantic criteria to determine whether a single predicate or multiple predicates are identified; see §3.3 for a discussion of the modal annotation. If the modal can itself be modalized (i.e., appear under the scope of another modal), then it is annotated as its own predicate. Since English allows this (e.g., they might want to take it...), want is annotated as a predicate. But in the Arapaho example, this is not possible, and therefore the modal is annotated in the same predicate as the main verb. While this criterion generally correlates with the expression of the modal as a complement-taking predicate versus an affix on the verb, it relies on semantic criteria that can be applied across languages. In both cases, the modal informs the modal strength annotation of the verb.

(3) xonouu niibeetwon3eiinein.
xonouu nii-beet-won-3eiin-ein
immediately IMPERF-want.to-ALLAT-put.inside.a.place-3S/2S
'Right away he wants to go and put you in jail.'

(n / beetwon3eiin 'want to go and put s.t. inside a place'
  :Actor (a / '3S')
  :Theme (t / '2S')
  :aspect Habitual
  :modstr Neut)

(w / want
  :Experiencer (h / he)
  :Stimulus (g / go :Actor (h) :aspect Habitual)
  :Stimulus (p / put :Actor (h) :Theme (y / you) :Goal (j / jail) :aspect Habitual)
  :aspect Habitual
  :modstr Aff
  :modal g
  :modal p)

Associated motion is treated similarly. Whether or not motion events are considered a single predicate with the verb or a separate predicate depends on whether locative or directional expressions that occur in the clause correspond to arguments of the motion event (as opposed to arguments or circumstantial locatives modifying the main event). When they are arguments of the motion event, it is identified as a separate predicate; when they are not, it is considered a single predicate with the verb. In the Sanapaná example in (4), no arrive predicate is identified in the annotation. The associated motion morphology -angv-akm indicates that the seeing event occurs after arriving at a location other than the deictic center. A locative expression can occur with this construction, but there is no evidence that this is an argument of the motion event rather than a circumstantial locative of the see predicate. Therefore, only a single predicate is annotated in Sanapaná. In English, this location can be expressed as an unambiguous argument of the motion event (e.g., we arrived home and saw...), so a separate arrive predicate is annotated.

(4) ... el-vet-angv-ay-akm-e' ... ang-kelvana
2/3M-DISTR-see-LOC-PST/HAB-APPRX-V1 ... 2/3F-woman
'Afterwards, they arrived and saw a person, a woman.'

(v / engvetangvayam 'arrive and see'
  :Experiencer (a1 / apk-el- '3PL.M')
  :Stimulus (n / nenhlet 'person'
    :mod (a2 / angkelvana 'woman')
    :quant 1)
  :aspect State
  :modstr Aff)

(a / arrive
  :Actor (t / they)
  :aspect Performance
  :modstr Aff)

(s / see
  :Experiencer (t)
  :Stimulus (p / person :mod (w / woman) :quant 1)
  :aspect State
  :modstr Aff)

One word containing predicate and arguments

Languages can also package together concepts that cut across the event-participant distinction that is fundamental to semantic annotation schemes that rely on predicate-argument structure, such as AMR.
For these types of multiconcept words, namely pronominal indexation and noun incorporation, both a predicate and an argument are identified at all stages of the road map.

In many languages, participants are indexed on the verb; this is often called agreement or pronominal affixation. In certain constructions, participants are signalled only through indexation and not expressed elsewhere in the clause. We treat the indexed participants as pronouns and identify both a predicate and an argument (or arguments) for a single word. This can be seen in examples (1), (3), and (4) above.

Noun incorporation involves a word that expresses both a predicate and an argument. Mithun (1984) identifies four types based on their structure and function across languages. These types of noun incorporation exist on a grammaticalization cline, with languages that exhibit the more grammaticalized types also exhibiting the less grammaticalized types. Example (5) shows Type I incorporation, the least grammaticalized, and (6) shows Type IV incorporation, the most grammaticalized, both from Arapaho. Type I noun incorporation doesn't allow the addition of a syntactic argument that corresponds to the incorporated noun. Type IV noun incorporation, often called classificatory constructions, incorporates a more general noun into the verb, whose referent can be made more specific by the addition of a syntactic argument in the clause. In the less grammaticalized types of noun incorporation (Types I-III), both a predicate and arguments are identified, as in (5). The more grammaticalized types of noun incorporation, as in (6), are treated like derivational morphology and only a predicate is identified.³

(5) he'ih'iixooxookbixoh'oekoohuutoono'
He'ih'ii-xoo-xook-bixoh'oekoohuutoo-no'
NARR.PST.IMPERF-REDUP-through-act.so.that.hand.appears.quickly-PL
'they were sticking their hands right through them [the ghosts] to the other side'

(b / bixoh'oekoohuutoo 'stick hands through'
  :Actor (a1 / '3PL')
  :Theme (t / 'hands')
  :Undergoer (g / '[ghosts]')
  :aspect Endeavor
  :modstr Aff)

(6) hoono' nuhu' tihciinii'eihiinit, he'ih'etoocein nuhu' hitiine' nuhu' hoote.
hoono' nuhu' tih-cii-nii'eihiini-t he'ih-'etoocein nuhu' hi-tiin-e' nuhu' hoote
not.yet this when.PST-NEG-be.eagle-3.S NARR.PST-pull.rope-like.thing.out this 3S-mouth-LOC this sinew
'At the [time] when he wasn't yet an eagle, he took [it] out of his mouth, the sinew.'

(e / 'etoocein 'pull rope-like thing out'
  :Actor (a / '3S')
  :Theme (h1 / hoote 'sinew')
  :Material/Source (h2 / hi-tiin-e' 'his mouth' :part-of a)
  :Temporal (h3 / have-role-91
    :ARG0 (a)
    :ARG1 (n / nii'eihiini 'be eagle')
    :aspect State
    :modstr Neg)
  :aspect Performance
  :modstr Aff)

Nonverbal clauses: Different packaging of "predicate" and arguments

Nonverbal clauses, such as locative, possessive, object, and property predication, and equational clauses, vary across languages in terms of how concepts are packaged into words (Stassen, 1997; Stassen, 2009). There are three nonverbal clause strategies, two of which are problematic for the predicate-argument structure of AMR.⁴ These strategies are shown in (7) and (8) from Kukama. In (7), the theme participant and the noun 'shaman' each correspond to a single word, but the predication does not map to a specific word, though it is inherent in the construction. This poses a problem in annotating the "predicate" of the clause. In (8), the possessum and the predication correspond to the same word; that is, an "argument" is predicativized. Like participant indexation and noun incorporation, these types of constructions pose a problem for the annotation of predicate-argument structure. From a semantic perspective, it's important that the different strategies receive comparable annotations, since they have the same meaning.

These two different problematic strategies require different solutions. In the case of predicativized arguments as in (8), we use the same solution as for pronominal affixes and less-grammaticalized noun incorporation: both a nonverbal clause function and argument are identified and annotated separately. When there is no predicate, as in (7), then we assume that the annotator is able to recognize the type of nonverbal clause function, and use an abstract predicate in the annotation. Some of the nonverbal clause functions have specialized predicates in AMR, but not all; we propose additional predicates for those functions (see Table 1; ARG0 is always an argument, but ARG1 may be predicativized). The first four types in Table 1 describe possession and location.
Possession and location may be predicated of the possession and the spatial figure, as in This bicycle belongs to my brother and The bicycle is in the garage. However, possession and location may be used in a context in which the information is presented as 'thetic' or 'all-new' in the terms of Lambrecht's (1994) theory of information structure (cf. the contrast between 'have' and 'belong' possession in Heine (1997)). One common thetic function is presentational, as in I have one brother or In the garage was a single bicycle. AMR has predicates for thetic possession (HAVE-03) and predicative location (HAVE-LOCATION-91); we add predicates for thetic location (EXIST-91) and predicative possession (BELONG-01).

The predication of properties (Susan is smart) and object categories (Susan is a professor) can be distinguished straightforwardly. AMR uses HAVE-MOD-91 for property predication and some types of object predication; we propose to restrict it to property predication. Other types of object predication are expressed in AMR with HAVE-REL-ROLE-91 or HAVE-ORG-ROLE-91; we propose a superordinate predicate HAVE-ROLE-91 that covers all object predication clauses. Finally, equational sentences (He is the father of the bride), corresponding to Lambrecht's identificational information structure, are challenging to distinguish from object predication in context (see Stassen (1997, 106-111)). Where this can be done, we propose to use the predicate IDENTITY-91.

The road map

Section 2 covered solutions to typological issues that are raised by the inclusion of low- and no-resource languages in semantic annotation efforts. This section puts forth a "road map" approach to annotation, which synthesizes the typological solutions with practical solutions for the inclusion of languages with few existing computational or documentary resources. The road map approach both ensures comparability across diverse languages, and allows for flexibility in the annotation of any one language. The road map specifies a starting point for languages with few resources (Stage 0), the end point for fully specified annotation (Stage 1), and a process for moving between these, defined for each annotation category. These are not discrete annotation stages, and languages will move gradually from the Stage 0 to Stage 1 annotation. Where a language begins on the road map for each annotation category is determined by the typological features of its grammar, its state of documentation, and the computational resources developed thus far.

The road map allows for flexibility across languages and annotation categories. Languages with a paucity of linguistic or computational resources can still begin annotation efforts. Languages with typological features that complicate the annotation of certain semantic categories can still be annotated for those categories, albeit at a less detailed level. Within a language, different annotation categories may be annotated at different stages, depending on the language's typological features and existing resources.

The road map approach also ensures comparability across languages, even when languages are at different stages, because annotation values retain their meaning across the road map stages. This also ensures that different-stage annotations for the same language are compatible. As annotation and documentation efforts continue, the annotation of a language may progress along the road map. But the annotations done at the beginning stages are still accessible and comparable to the later stage annotations.
For languages that have limited resources in terms of time investment by speakers and/or field linguists, having this type of compatibility built into the annotation scheme is critical.

The remainder of this section will demonstrate how the road map approach functions with regard to a number of annotation categories: annotation targets, participant roles, aspect, and modal strength and polarity. The road map for these categories is summarized in Table 2.

Annotation targets

The main cross-linguistic issues with the identification of annotation targets (i.e., predicates and arguments) are the types of multiconcept words covered in §2. The annotation of multiconcept words is the same throughout the stages of the road map; however, their representation in the lexicon builds up in complexity. For example, verbs with derivational morphology are first treated as different words than their non-derived counterparts in the lexicon. As the understanding of the language progresses, multiconcept words are morphologically decomposed and morphological relations are added to the lexicon. The identification of a span of text for each annotation target is determined by the language experts for each language, since what is considered the 'citation form' of a word differs across languages. Fusional morphology, such as that for pronominal indexation in Arapaho (see (3), (5), (6) above), cannot be split apart at any stage of the road map and therefore a span of text is not indicated for those arguments.

Aspect

Aside from the multiconcept word issues with regard to aspectual morphology discussed above, the main issue with aspect annotations cross-linguistically is that languages differ widely in terms of which aspectual distinctions are conventionalized in their grammar. In order to resolve these differences, we utilize the aspectual lattice from Van Gysel et al. (2019), shown in the supplementary material. It ranges from the most coarse-grained categories of IMPERFECTIVE and PROGRESSIVE, to ATELIC PROCESS and PERFECTIVE, to the 'basic' level of STATE, ACTIVITY, ENDEAVOR, and PERFORMANCE, and finally, very fine-grained categories, such as POINT STATE or DIRECTED IRREVERSIBLE ACHIEVEMENT.

For a language at an earlier stage of linguistic analysis, it may not be clear to the annotator which of the more fine-grained aspect values should apply. Therefore, annotators may select a more coarse-grained category on the lattice. For example, the linguistic analysis of aspect in Sanapaná, in (9), is still under way. The aspectual implications of the suffixal morphology (specifically, the passive -akp, which also functions as a reciprocal, and the subjunctive -o) are not yet fully understood. Therefore, the more coarse-grained ATELIC PROCESS value is used, instead of an ACTIVITY or ENDEAVOR value. Stage 1 of the aspect annotation uses the more fine-grained categories on the aspect lattice. Example (5) above from Arapaho expresses an event that is aspectually similar to (9). Since Arapaho has a longer history of linguistic study, the more fine-grained annotation of ENDEAVOR can be applied.

Modal strength and polarity

We follow Vigus et al. (2019) in representing modal strength and polarity as a dependency structure. The nodes are events or conceivers (i.e., a source, an entity whose perspective on an event is modeled in the text). The edges in the dependency structure correspond to epistemic strength and polarity values; event nodes are the children of either conceivers or other events on whom they depend for their modal value.
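Concretely, this kind of modal dependency can be held in a very small data structure. The Python sketch below is purely illustrative: the class and value names are our own invention rather than part of the published scheme, and it encodes the rough shape of the structure for the English translation of (3), with an author-level conceiver on top; the NEUT edges anticipate the lexicon entry for want discussed in the next paragraphs.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    name: str
    kind: str                       # "event" or "conceiver"
    parent: Optional["Node"] = None
    edge: Optional[str] = None      # strength/polarity label on the edge to the parent

# Hypothetical encoding for the English translation of (3):
auth = Node("AUTH", "conceiver")                        # author-level conceiver
want = Node("want", "event", parent=auth, edge="AFF")   # full affirmative
go = Node("go", "event", parent=want, edge="NEUT")      # strength imparted by 'want'
put = Node("put", "event", parent=want, edge="NEUT")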
Like aspect, languages differ in the modal strength distinctions that are conventionalized in their grammar, and therefore we use a typological lattice, shown in the supplementary materials. This lattice is based around a FULL vs. PARTIAL vs. NEUTRAL modal strength distinction; the coarse-grained categories are NON-FULL and NON-NEUTRAL; the finer-grained categories include WEAK PARTIAL, STRONG NEUTRAL, etc. These combine with an AFFIRMATIVE/NEGATIVE polarity distinction.⁵

The Stage 0 annotation involves the underspecification of some parts of the modal dependency structure. Events are annotated for their modal strength (MODSTR) using the lattice, but conceivers are unspecified. Some event types receive special annotations; two of these are events under the scope of a modal predicate, and events under the scope of a reporting/speech predicate.⁶ A placeholder MODAL value is used for modal predicates, and a QUOT value is used for reporting predicates. Events under the scope of modals don't receive a MODSTR value; reported events do receive a MODSTR value in the same way as other predicates. This way, events under the scope of other predicates in the modal dependency receive a consistent annotation, while annotators avoid the complexity of annotating the full dependency structure. This annotation for modal predicates is shown above in the English translation of (3).

The MODAL and QUOT values can be automatically converted into an underspecified dependency structure; the participant role annotation can also be leveraged to specify conceivers (e.g., the EXPERIENCER of a modal predicate is its conceiver). The modal strength imparted by modal predicates is unspecified in the dependency structure at Stage 0. Stage 1 involves adding this information to the lexicon entries for modal predicates (e.g., want imparts a NEUT strength on its complement) and filling in other unspecified values to reach a fully specified modal dependency structure.

Participant roles

Semantic role annotation is one category where issues related to typological differences and resource disparities intersect. As has been noted in the verbal semantic literature (Croft, 2012; Hartmann et al., 2014), semantic roles such as AGENT or PATIENT are difficult to apply consistently across languages.⁷ Therefore, both typological research (Hartmann, 2013; Malchukov and Comrie, 2015) and semantic annotation (e.g., PropBank) have moved away from general semantic roles and towards microroles, or lexicalized semantic roles. Roles are defined for each verb (e.g., eat has an EATER and FOOD); this allows for valid cross-linguistic comparison in typology and consistency in semantic annotation.

The major drawback of this approach is that it requires an existing lexicon complete with lexicalized roles for the verbs in a language. For languages that do not have this, the creation of such a resource is a rather large hurdle to overcome in order to begin annotation. Therefore, the road map moves from more general semantic roles at Stage 0 to lexicalized microroles at Stage 1. For languages that have PropBank-style frame files created, annotation can begin at Stage 1. For languages that do not, annotation begins with general semantic roles at Stage 0, while simultaneously building up a lexicon of frame files.

Stage 0 of the road map involves selecting a label for each participant from a set of general (i.e., non-lexicalized) semantic roles, shown in Table 3. This inventory is largely an extension of the AMR inventory of non-core roles, with roles added for core arguments such as STIMULUS.
These additions are based upon the cross-linguistic argument realization patterns in the ValPaL database (Hartmann, 2013); this ensures that the labels reflect distinctions that are common in the grammatical systems of the world's languages. At Stage 0, implicit (i.e., unexpressed) participants are not annotated; this is shown in the Arapaho example in (3) above, where the goal participant is not annotated, as it is not overtly expressed.

In order for a language to progress along the road map with regard to participant roles, Stage 0 also involves beginning to set up a lexicon with frame files. Within each frame file, the mapping between a lexicalized semantic role and its non-lexicalized counterpart is indicated. This way, annotations at different stages of the road map will be comparable with each other. As frame files are created, annotators use the lexicalized roles for verbs that have them; for other verbs, the general semantic roles are used.

At Stage 1, the lexicalized roles are used; this is shown below for example (3) in §2.1. In Arapaho, the existing lexical description with argument structure information can be leveraged in annotation to create frame files like the one shown below. Stage 1 also involves the annotation of implicit roles, based on the frame files; therefore, the goal (ARG2) participant for example (3) is annotated.

predicate: BEETWON3EIIN
arguments:
  ARG0: putter → ACTOR
  ARG1: put thing → THEME
  ARG2: putting goal → GOAL

(n / beetwon3eiin 'want to go and put s.t. inside a place'
  :ARG0 (a / '3S')
  :ARG1 (t / '2S')
  :ARG2 (g / 'jail')
  :aspect Habitual
  :modstr Neut)

Conclusion

This paper recognizes issues not previously dealt with in the annotation of cross-linguistic semantic information: multiconcept words and no-resource languages. As multiconcept words are more common in languages with a smaller speech community, they have not been dealt with in past annotation schemes. We present solutions for extending AMR across languages, including the annotation of multiconcept words; these depend on the semantic category of the concept. We have also outlined a road map approach to beginning annotation on very low- or no-resource languages, ensuring that the annotation is truly cross-linguistic in terms not only of typological diversity but of resource availability as well.

Credits

We gratefully acknowledge the support of the National Science Foundation Award Nos. 1764091 to the University of New Mexico and 1764048 to the University of Colorado (Collaborative Research: Building a Uniform Meaning Representation for Natural Language Processing).
Table 1: Nonverbal clause predicates

Clause type | Predicate | ARG0 | ARG1
thetic/presentational possession | have-03 | possessor | possession
predicative possession | belong-01 | possession | possessor
thetic/presentational location | exist-91 | location | theme
predicative location | have-location-91 | theme | location
property predication | have-mod-91 | theme | property
object predication | have-role-91 | theme | object category
equational | identity-91 | theme | equated referent

Table 2: Road map annotation stages

Table 3: UMR non-lexicalized roles

Central roles: Actor, Undergoer, Theme, Recipient, Force, Causer, Experiencer, Stimulus
Peripheral roles: Instrument, Companion, Material/Source, Place, Start, Goal, Affectee
Roles for entities and events: Cause, Manner, Reason, Purpose, Temporal, Extent

¹ We present examples with annotations for predicate-argument structure, modal strength and polarity, and aspectual structure; temporal annotations have been omitted. The annotations make use of the general 'Stage 0' participant roles; §3 explains the relevant annotation categories in more detail. Abbreviations used in glosses are the following: 2 = second person; 3 = third person; ALLAT = allative; APPL = applicative; APPRX = approximative; CER = certainty; CAU = causative; DEF = definite; DISTR = distributive; IC = initial change; IMPERF = imperfective; INF = inferred; INS = instrumental; LOC = locative; M = masculine; NARR = narrative; PAS = passive; PL = plural; PST = past; REDUP = reduplication; S = singular; SBJ = subjunctive.

² The aspect indicated by the morphology is reflected in the aspect annotation. Inceptive phasal aspect is annotated as ACTIVITY to reflect that the event may be ongoing.

³ Due to space limitations, the English translation annotations for these examples are included in the supplementary material.

⁴ The third strategy is the use of a verb separate from either participant, such as have in the English translation of (8), or the copula in the translation of (7).
⁵ In this paper, we use the default level annotations to yield six modal strength values: full affirmative AFF, partial affirmative PARTAFF, neutral affirmative NEUTAFF, neutral negative NEUTNEG, partial negative PARTNEG, and full negative NEG.

⁶ Conditionals and purpose clauses also receive special placeholder annotation values, COND and PURP respectively.

⁷ For example, transfer constructions can realize either the giver as subject (I gave the cat some wet food), or the recipient as subject (the cat received her wet food). This varies both within and across languages, making it unclear which participant should receive the AGENT semantic role.

References

Lasha Abzianidze, Rik van Noord, Hessel Haagsma, and Johan Bos. 2020. The first shared task on discourse representation structure parsing. arXiv preprint arXiv:2005.13399.

Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178-186, Sofia, Bulgaria. Association for Computational Linguistics.

Balthasar Bickel and Johanna Nichols. 2013. Inflectional synthesis of the verb. In Matthew S. Dryer and Martin Haspelmath, editors, The World Atlas of Language Structures Online. Max Planck Institute for Evolutionary Anthropology, Leipzig.

Dwight Bolinger. 1963. The uniqueness of the word. Lingua, 12(2):113-136.

Johan Bos, Valerio Basile, Kilian Evang, Noortje J. Venhuizen, and Johannes Bjerva. 2017. The Groningen Meaning Bank. In Handbook of Linguistic Annotation, pages 463-496. Springer.

Greville G. Corbett. 2000. Number. Cambridge University Press.

Andrew Cowell and Alonzo Moss Sr. 2008. The Arapaho Language. University Press of Colorado.

William Croft. 2012. Verbs: Aspect and Causal Structure. Oxford University Press, Oxford.

Marie-Catherine de Marneffe, Timothy Dozat, Natalia Silveira, Katri Haverinen, Filip Ginter, Joakim Nivre, and Christopher D. Manning. 2014. Universal Stanford dependencies: A cross-linguistic typology. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 4585-4592. European Language Resources Association (ELRA).
Robert M. W. Dixon and Alexandra Y. Aikhenvald. 2002. Word: a typological framework. In Word: A Cross-linguistic Typology, pages 1-41.

David M. Eberhard, Gary F. Simons, and Charles D. Fennig. 2020. Ethnologue: Languages of the World, twenty-third edition. http://www.ethnologue.com.

Antonio Almir Silva Gomes. 2013. Sanapaná uma lingua maskoy: Aspectos gramaticais. Ph.D. thesis, Universidade Estadual de Campinas.

Iren Hartmann, Martin Haspelmath, and Michael Cysouw. 2014. Identifying semantic role clusters and alignment types via microrole coexpression tendencies. Studies in Language, 38(3):463-484.

Iren Hartmann, Martin Haspelmath, and Bradley Taylor, editors. 2013. Valency Patterns Leipzig. Max Planck Institute for Evolutionary Anthropology, Leipzig.

Bernd Heine. 1997. Possession: Cognitive Sources, Forces, and Grammaticalization. Cambridge Studies in Linguistics 83. Cambridge University Press, Cambridge.

Hans Kamp and Uwe Reyle. 2013. From Discourse to Logic: Introduction to Modeltheoretic Semantics of Natural Language, Formal Logic and Discourse Representation Theory, volume 42. Springer Science & Business Media.

Knud Lambrecht. 1994. Information Structure and Sentence Form: Topic, Focus, and the Mental Representations of Discourse Referents. Cambridge Studies in Linguistics 71. Cambridge University Press.

Gary Lupyan and Rick Dale. 2010. Language structure is partly determined by social structure. PLoS ONE, 5(1):e8559.

Andrej Malchukov and Bernard Comrie. 2015. Valency Classes in the World's Languages. Walter de Gruyter, Berlin/Boston.
Jonathan May and Jay Priyadarshi. 2017. SemEval-2017 Task 9: Abstract Meaning Representation parsing and generation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 536-545.

Marianne Mithun. 1984. The evolution of noun incorporation. Language, 60:847-894.

Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71-106.

Leon Stassen. 1997. Intransitive Predication. Clarendon Press, Oxford.

Leon Stassen. 2009. Predicative Possession. Oxford University Press, Oxford, UK.

Rosa Vallejos and Rosa Amías. 2015. Diccionario kukama-kukamiria castellano. Iquitos: AIDESEP: ISEPL: FORMABIAP.

Rosa Vallejos. 2014. The Kukama-Kukamiria documentation project. https://elar.soas.ac.uk/Collection/MPI971108 (accessed: 4 August 2020).

Rosa Vallejos. 2016. A Grammar of Kukama-Kukamiria: A Language from the Amazon. Brill.

Jens E. L. Van Gysel, Meagan Vigus, Pavlina Kalm, Sook-kyung Lee, Michael Regan, and William Croft. 2019. Cross-linguistic semantic annotation: Reconciling the language-specific and the universal. In Proceedings of the First International Workshop on Designing Meaning Representations, pages 1-14, Florence, Italy. Association for Computational Linguistics.

Jens E. L. Van Gysel. 2017. Temporal predicative particles in Sanapaná and the Enlhet-Enenlhet language family (Paraguay): A descriptive and comparative study. MA thesis, Universiteit Leiden.

Jens E. L. Van Gysel. 2020. A documentation of historical narratives amongst the Sanapaná (Enlhet-Enenlhet) of the Paraguayan Chaco. https://elar.soas.ac.uk/Collection/MPI1234837 (accessed: 31 October 2020).

Meagan Vigus, Jens E. L. Van Gysel, and William Croft. 2019. A dependency structure annotation for modality. In Proceedings of the First International Workshop on Designing Meaning Representations, pages 182-198.
Tim Zingler. 2020. Wordhood issues: typology and grammaticalization. Ph.D. thesis, University of New Mexico.
220,446,097
[]
Interface Web pour l'annotation morpho-syntaxique de textes (A Web interface for the morpho-syntactic annotation of texts)

Thierry Hamon hamon@limsi.fr CNRS, Université Paris-Saclay, Bat 508, rue John von Neumann, Campus Universitaire, 91405 Orsay, France; Université Paris 13 - Sorbonne Paris Cité, 99 avenue J.B. Clément, 93430 Villetaneuse, France

Abstract. We present a Web interface for visualizing and annotating texts with POS tags and lemmas. This interface is currently used for the annotation of Ukrainian texts with the Multext-East POS tagset. Users can quickly view the annotations associated with the words of a text, modify existing annotations, or add new ones. Annotations can be loaded and exported in the TEI XML format, as well as in a tab-separated format. Scripts for format conversion and for loading the data into a database are also provided.

Keywords: Morpho-syntactic annotation, Lemmatization, Multext-East, Ukrainian.

Introduction

Developing NLP methods for under-resourced languages requires the construction of manually annotated corpora. Thus, in the context of developing tools for Ukrainian, in particular a morphosyntactic tagger, we want to have texts morphosyntactically annotated with the Multext-East tagset (Erjavec, 2012).
We have therefore developed a Web interface for annotating the words of a text with morphosyntactic tags and lemmas, and also for correcting this information when the text is pre-annotated. Our goal is, on the one hand, to make the Multext-East tagset easier to use and, on the other hand, to limit the user's actions in order to reduce annotation time. This morphosyntactic tagset is indeed complex: it offers 12 grammatical categories, as well as up to 10 morphological features for adjectives and 11 possible feature values for describing pronoun types. The interface therefore presents only the morphological features and values relevant to a given category. The user must also have a synthetic view of the annotations, and the display of the annotations associated with a word must be fast.

The annotation interface

To meet the objectives presented above, we developed an annotation interface based on Web technologies (XHTML, PHP, AJAX). A database is also used to store the annotations and the description of the tagset. Adapting the interface to another tagset therefore only requires modifying the tagset description. Texts in tabular format or in TEI-compliant XML¹ (Wittern et al., 2009), whether pre-annotated or not, can be loaded into the database using Perl scripts. Annotated texts can be exported in the same formats.

Figure 1 shows a screenshot of the annotation interface. The document to be annotated is displayed only once in the interface, and the annotations already associated with the words appear dynamically. A synthetic view of the annotations is offered when the user hovers over a word with the mouse (for example, використані in Figure 1). Annotations can be modified or added by clicking on the word concerned (for example, приймаєте, visible on the right in Figure 1). The interface can be downloaded at the following address: https://perso.limsi.fr/hamon/Rada/index.php.

Acknowledgements

This work was funded by the LIMSI-CNRS incentive action Outiller l'Ukrainien. We also thank Natalia Grabar and Anastasiia Kuznietsova for their comments on the first versions of the interface.

Figure 1: Example of the display of a document being annotated.

¹ http://www.tei-c.org

References

T. Erjavec. 2012. Multext-East: Morphosyntactic resources for central and eastern European languages. Language Resources and Evaluation, 46(1), 131-142.

C. Wittern, A. Ciula, and C. Tupman. 2009. The making of TEI P5. Literary and Linguistic Computing, 24(3), 281-296.

Actes de la conférence conjointe JEP-TALN-RECITAL 2016, volume 5 : Démos.
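To give a concrete picture of the loading step described above: the interface's scripts convert between a tab-separated token/tag/lemma format and TEI XML. The actual scripts are written in Perl; the following Python sketch is only an illustration of the kind of conversion involved, and the three-column input layout, the example tag, and the exact TEI element names are our own assumptions.

# Illustrative tabular-to-TEI conversion (assumed input: one token per
# line as "form<TAB>MSD-tag<TAB>lemma"; the real tool ships Perl scripts).
import xml.etree.ElementTree as ET

def tabular_to_tei(lines):
    text = ET.Element("text")
    s = ET.SubElement(text, "s")          # a single sentence element, for brevity
    for line in lines:
        form, tag, lemma = line.rstrip("\n").split("\t")
        w = ET.SubElement(s, "w", ana=tag, lemma=lemma)  # word with MSD tag and lemma
        w.text = form
    return ET.tostring(text, encoding="unicode")

# Hypothetical Ukrainian token with a Multext-East-style tag:
print(tabular_to_tei(["використані\tVmps\tвикористати"]))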
258,486,868
The identification of Verbal Multiword Expressions (VMWEs) presents a greater challenge compared to non-verbal MWEs due to their higher surface variability. VMWEs are linguistic units that exhibit varying levels of semantic opaqueness and pose difficulties for computational models in terms of both their identification and the degree of compositionality. In this study, a new approach to predicting the compositional nature of VMWEs in Persian is presented. The method begins with an automatic identification of VMWEs in Persian sentences, which is approached as a sequence labeling problem for recognizing the components of VMWEs. The method then creates word embeddings that better capture the semantic properties of VMWEs and uses them to determine the degree of compositionality through multiple criteria. The study compares two neural architectures for identification, BiLSTM and ParsBERT, and shows that a fine-tuned BERT model surpasses the BiLSTM model in evaluation metrics with an F1 score of 89%. Next, a word2vec embedding model is trained to capture the semantics of identified VMWEs and is used to estimate their compositionality, resulting in an accuracy of 70.9% as demonstrated by experiments on a collected dataset of expert-annotated compositional and non-compositional VMWEs.
[ 6622179, 220048355, 207556454, 203279, 14182801, 216923442, 56595638, 67864863 ]
Proceedings of the 19th Workshop on Multiword Expressions (MWE 2023), May 6, 2023.

* These two authors contributed equally to this work.

The identification of Verbal Multiword Expressions (VMWEs) presents a greater challenge compared to non-verbal MWEs due to their higher surface variability. VMWEs are linguistic units that exhibit varying levels of semantic opaqueness and pose difficulties for computational models in terms of both their identification and the degree of compositionality. In this study, a new approach to predicting the compositional nature of VMWEs in Persian is presented. The method begins with an automatic identification of VMWEs in Persian sentences, which is approached as a sequence labeling problem for recognizing the components of VMWEs. The method then creates word embeddings that better capture the semantic properties of VMWEs and uses them to determine the degree of compositionality through multiple criteria. The study compares two neural architectures for identification, BiLSTM and ParsBERT, and shows that a fine-tuned BERT model surpasses the BiLSTM model in evaluation metrics with an F1 score of 89%. Next, a word2vec embedding model is trained to capture the semantics of identified VMWEs and is used to estimate their compositionality, resulting in an accuracy of 70.9% as demonstrated by experiments on a collected dataset of expert-annotated compositional and non-compositional VMWEs.

Introduction

In today's world, multiword expression detection and embedding are trending topics, particularly among the research conducted on natural language processing. Multiword expressions (MWEs) are word combinations that display some form of idiomaticity, in which the semantics of some of the MWEs cannot be predicted from the semantics of their components. These expressions are comprised of at least two words, including a headword and syntactically related words that display some degree of lexical, morphological, syntactic, and/or semantic idiosyncrasy (Sag et al., 2002). In this paper, we focus on verbal MWEs (VMWEs): a VMWE is a multiword expression whose syntactic head is a verb and whose other components are directly dependent on the verb (Sag et al., 2002).

Identifying a VMWE in a Persian sentence poses many challenges, like in other languages (Constant et al., 2017). One of the primary ones is the violation of the compositionality principle, leading to the inability to deduce the semantic meaning of the VMWE from the meanings of its individual components, as shown in (1).

(1) دست روی دست گذاشتن
lit. put hand on hand
doing nothing

Discontiguous VMWEs pose an extra challenge, as shown in example (2).

(2) او اقدام به خودکشی کرد
lit. he attempt to suicide did
he attempted suicide

In (2), identifying the compound verb "اقدام کرد" (attempt did => attempted suicide) becomes challenging through traditional approaches.

Finally, the assignment of grammatical roles to certain word sequences can be entirely dependent on the sense of the words and the context in which they are used. In (3) and (4), although the sense of the word "بلند" (tall) is the same in both examples, the expression "بلند کرد" (lit. did tall) has a different meaning depending on the context (raised and stole, respectively).
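Before turning to the embedding challenges, it may help to see how identification is cast as sequence labeling (as stated in the abstract): each token of a sentence receives a tag marking whether it belongs to a VMWE. The sketch below encodes the discontiguous example (2); the BIO-style label names are our own illustrative choice, not necessarily the paper's exact label set.

# Token-level encoding of example (2), "he attempt to suicide did",
# where the discontiguous VMWE is formed by the 2nd and 5th tokens.
tokens = ["او", "اقدام", "به", "خودکشی", "کرد"]
labels = ["O", "B-VMWE", "O", "O", "I-VMWE"]   # gap tokens stay "O"

for token, label in zip(tokens, labels):
    print(token, label, sep="\t")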
Furthermore, representing VMWEs as unified units in embeddings is challenging: traditional static embeddings generate one embedding per token, while VMWEs consist of multiple tokens, so alternative representation methods need to be explored. Additionally, as previously mentioned, VMWEs can possess both idiomatic and literal meanings, leading to syntactic ambiguity. This creates a problem for the generation of embedding vectors that accurately capture the semantic meaning of such expressions.

Contribution: The contributions of this paper are two-fold. First, we propose non-contextual and contextual methods to identify VMWEs. For the non-contextual strategy, we use a VMWE dataset based on the Persian WordNet, while LSTM and BERT models are used as the contextual methods. Though the BERT model uses a contextual embedding for each word, our LSTM model has a non-contextual embedding layer in its network. Second, we aim to measure the degree of compositionality of a VMWE by analyzing the semantic similarity between its components and the expression as a whole. To do this, we utilize a word-level and a character-level embedding method, word2vec and fasttext respectively, which capture the semantic meaning of the VMWEs by concatenating detected VMWEs in the training corpus. We then determine the compositionality of a VMWE using six different metrics. Finally, we have gathered a dataset of 55 VMWEs, tagged as either compositional or non-compositional, to evaluate the accuracy of our predictions.

The remainder of the paper is organized as follows. Section 2 reviews existing methods. The proposed algorithms for identification and compositionality prediction are detailed in Sections 3 and 4, respectively. The effectiveness of the introduced approaches is assessed through experiments whose results are presented in Section 5. Finally, Section 6 discusses the results and draws concluding remarks.

Related Work

VMWE identification: There are generally two types of methods to identify VMWEs in a sentence: language-dependent and language-independent methods. Among language-dependent methods, Chaghari and Shamsfard (2013) introduced an unsupervised method to identify Persian VMWEs by defining a set of linguistic rules. Saljoughi Badlou (2016) also introduced a language-dependent method to identify Persian MWEs by creating regular expressions based on Persian linguistic rules and searching for extracted MWEs in Wikipedia article titles and FarsNet (Shamsfard, 2007). Moreover, Salehi et al. (2012) introduced a method that utilized a bilingual parallel corpus and evaluated the efficacy of seven linguistically-informed features in automatically detecting Persian light verb constructions (LVCs) with the aid of two classifiers.

In recent years, deep learning has demonstrated remarkable success in sequence tagging tasks, including MWE identification (Ramisch et al., 2018; Taslimipoor and Rohanian, 2018). RNNs and ConvNets have shown significant progress in this area. Gharbieh et al. (2017) achieved their best results on the DiMSUM dataset (Schneider et al., 2016) using a ConvNet architecture to identify MWEs. Taslimipoor and Rohanian (2018) proposed a language-independent LSTM architecture to identify VMWEs, which includes both convolutional and recurrent layers and an optional high-level CRF layer.
Additionally, Rohanian et al. (2020) focused on using MWEs to identify verbal metaphors and proposed a deep learning model based on attention-guided GCNs, which incorporates both syntactic dependencies and information about VMWEs. Supervised techniques like deep learning require vast amounts of labeled data; the fine-tuning step of the BERT model can mitigate this issue, making it a powerful tool. ParsBERT (Farahani et al., 2021) is a monolingual Persian language model based on Google's BERT architecture that utilizes the same BERT-Base settings. It was trained on over 2 million diverse documents, allowing it to perform various tasks, including sentiment analysis, text classification, and named entity recognition.

VMWE compositionality prediction: Compositionality prediction of MWEs has garnered considerable attention in recent years. One popular method for measuring the compositionality of MWEs is the use of word embeddings. Salehi et al. (2015) were among the first to explore this approach by comparing the performance of two embedding models, word2vec and MSSG, in predicting the degree of compositionality of MWEs in English and German datasets. Their hypothesis was that the similarity between the embedding vectors of MWEs and those of their component words would be indicative of the MWEs' compositionality. They further found that combining string similarity with the word embedding approach was comparable to existing state-of-the-art methods (Salehi and Cook, 2013). A study by Nandakumar et al. (2018) provides a similar examination, using word-level, character-level, and document-level embeddings to calculate the compositionality of MWEs in English. Their results suggest that the word2vec model (Mikolov et al., 2013), followed by fasttext (Bojanowski et al., 2017) and infersent (Conneau et al., 2017), outperformed other embedding models. Cordeiro et al. (2019) improved on that method and proposed that MWEs be preprocessed into a single unit prior to model training. This has the drawback that a comprehensive list of MWEs must be available beforehand to accurately identify and consolidate them into a single token; additionally, any alteration to the set of MWEs mandates retraining of the model. Consequently, this study aims to determine the degree of compositionality of each VMWE by first identifying the VMWEs and then training an embedding model to capture their semantic information. The resulting embedding vectors are then utilized to predict the compositionality of each VMWE.

Despite numerous studies on predicting MWE compositionality, much of the research has concentrated on English and European language corpora. To the best of our knowledge, there has been no investigation of compositionality prediction of VMWEs in Persian, which is a low-resource language. Thus, in this work, we aim to address these two issues by leveraging the methods established in previous MWE studies.

VMWE Identification

In this section, we first present the datasets utilized in the proposed approach for VMWE identification, followed by a detailed description of the methods and models employed for this task. To detect VMWEs, a combination of a non-contextual method and two deep learning models is employed. The deep learning models treat VMWE detection as a sequence labeling problem, where the goal is to assign a relevant tag to each token in the sequence. To accomplish this, an IOB-like labeling format is used to tag the VMWEs in sentences: the beginning component of an expression is tagged 'B', its other components are tagged 'I', and words in the sentence that do not belong to any VMWE receive an 'O' tag. Additionally, sentences containing two VMWEs with mixed components were removed for simplicity (e.g. (5)).
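To make the tagging scheme concrete, here is a minimal Python sketch of the IOB encoding (our illustration, not the authors' code); the sentence and the VMWE positions follow example (2):

# A minimal sketch of the IOB encoding described above. `tokens` is a
# tokenized Persian sentence and `vmwe_positions` marks the token
# indices that belong to the expression.

def to_iob(tokens, vmwe_positions):
    """Tag the first VMWE component 'B', later components 'I', the rest 'O'."""
    tags = []
    seen_first = False
    for i in range(len(tokens)):
        if i in vmwe_positions:
            tags.append("I" if seen_first else "B")
            seen_first = True
        else:
            tags.append("O")
    return tags

# Example (2): "he attempt to suicide did"; the discontiguous VMWE
# occupies positions 1 and 4.
tokens = ["او", "اقدام", "به", "خودکشی", "کرد"]
print(to_iob(tokens, {1, 4}))  # ['O', 'B', 'O', 'O', 'I']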
The two deep learning models used are an LSTM-based architecture and a BERT-based model.

Dataset for the identification of VMWEs: In terms of datasets, the Parseme corpus (Savary et al., 2017) serves as the annotated corpus of tagged VMWEs, comprising 3226 sentences. The VMWEs in this corpus were manually annotated by a single annotator per file. Every verb-particle construction (VPC) that is fully non-compositional, i.e. where the particle modifies the meaning of the verb, is tagged, and an index binds the components of each VMWE together. Additionally, the Persian Dependency Treebank (PerDT) contains 30 thousand tagged sentences (Rasooli et al., 2013). PerDT was tagged using both rule-based and manual strategies. The first strategy utilized the dependency tree to identify the components of VMWEs by extracting words with LVP (light verb particle), NVE (non-verbal element), and VPRT (verb-particle construction) tags together with their connected verbs, resulting in the detection of 32056 VMWEs in the training set of the corpus. A manual annotation of VMWEs was also performed on 1000 sentences of the corpus. Although this strategy resulted in fewer tagged sentences, it was more accurate and reliable than the rule-based one. We evaluated our non-contextual method on the Parseme corpus and trained the neural networks on both corpora.

Non-contextual method: The first strategy for identifying VMWEs involves a straightforward lookup approach. To achieve this, a dataset of VMWEs was created by collecting all compound verbs in FarsNet, the Persian wordnet of 100,000 words developed by the natural language processing laboratory at Shahid Beheshti University; 21462 VMWEs were extracted from FarsNet. To identify VMWEs in a sentence, the n-grams (for n = 2, 3, 4) are extracted and searched for the presence of all components of a multi-word verb within the n-gram. Not all cases that are found are VMWEs, and not all VMWEs can be found in this way, especially when there are intermediate words; however, this approach can help identify potential VMWEs. Its effectiveness is evaluated in the evaluation section.

Long Short-Term Memory (LSTM): A neural network architecture comprising a convolutional network and an LSTM network was utilized. The network was designed with an embedding layer as its initial component, which was demonstrated to produce better results than utilizing a standalone embedding model. To enhance the accuracy of predictions, the inputs to the network were augmented with POS tags. The architecture of the layers is illustrated in Figure 1. The first layer produces token vectors derived from the embedding layer with 50-dimensional features and a dropout rate of 0.2. The POS tag of each word is then concatenated as a numerical code at the end of its embedding vector, and the result is fed into a ConvNet layer containing 200 neurons with a filter size of 1. No dropout was applied to the ConvNet layer, and the activation function used was the Rectified Linear Unit (ReLU). The output of the convolutional layer is then fed into a bi-directional LSTM network with 100 neurons and a recurrent dropout rate of 0.5.
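As a rough illustration of this architecture, the following Keras sketch wires up the layers as described above; the vocabulary size, sequence length and the final per-token softmax layer are our assumptions, while the embedding width (50), the Conv1D settings (200 filters, width 1, ReLU) and the BiLSTM settings (100 units, recurrent dropout 0.5) follow the text:

# A minimal Keras sketch of the ConvNet + BiLSTM tagger described above.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, MAXLEN, N_TAGS = 30000, 60, 3   # assumed sizes; tags are B/I/O

tokens = layers.Input(shape=(MAXLEN,), dtype="int32", name="tokens")
pos    = layers.Input(shape=(MAXLEN, 1), name="pos_codes")  # POS as numeric code

x = layers.Embedding(VOCAB, 50)(tokens)       # 50-dim token features
x = layers.Dropout(0.2)(x)
x = layers.Concatenate()([x, pos])            # append the POS code to each vector
x = layers.Conv1D(200, 1, activation="relu")(x)
x = layers.Bidirectional(layers.LSTM(100, recurrent_dropout=0.5,
                                     return_sequences=True))(x)
out = layers.Dense(N_TAGS, activation="softmax")(x)  # per-token IOB prediction

model = tf.keras.Model([tokens, pos], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")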
BERT: BERT (Bidirectional Encoder Representations from Transformers) is a pre-trained neural model based on self-attention blocks. It has achieved state-of-the-art results on various natural language processing tasks, such as question answering (Devlin et al., 2018) and multi-genre natural language inference (Nangia et al., 2017); because it embeds each token in a sentence contextually, it can capture the meaning of each token within its context. The advantage of BERT is that it is a general architecture that can be applied to multiple problems, and its pre-training on raw, unlabeled texts minimizes the need for labeled data. Additionally, a multilingual variant of BERT has been pre-trained on 104 languages, including Persian. In this study, we utilize the ParsBERT model, pre-trained on Persian text, to identify VMWEs in Persian sentences. The ParsBERT model is fine-tuned on the datasets specifically for the task of tagging tokens that are part of a VMWE.

Predicting the Compositionality of VMWEs

The primary objective of this paper is to predict the compositionality of VMWEs. Our assumption is that the degree of compositionality of a multiword expression can be determined by evaluating the semantic similarity between its constituent components and the expression itself. This evaluation is conducted by comparing the similarity of the embedding vectors of the corresponding word tokens. To accomplish this, we follow the studies of Salehi et al. (2015) and Nandakumar et al. (2018) and investigate six metrics to determine the compositionality of VMWEs. In this section, the criteria for the task and a description of the datasets are presented.

Methodology: One of the defining challenges of VMWEs is their compositional nature, where the semantic meaning of a VMWE can be dissimilar from the meanings of its individual components. The objective of this research is therefore to determine the degree of compositionality by analyzing the embedding vectors of both the VMWEs and their components. We begin with the preparation of four different corpora for training embedding models. The detected VMWEs are pre-processed by removing all spaces and semi-spaces (in Persian typography, a semi-space is a zero-width character that separates two sides without leaving any space between them) and replacing them with an underscore symbol, so that each VMWE is considered a single word. A word-level and a character-level embedding model, namely word2vec and fasttext, are then trained on the processed corpora. To assess the compositionality of the VMWEs, six different criteria are leveraged, based on the generated VMWE-specific embedding vectors. It is assumed that the compositionality of an MWE can be captured by computing the relative similarity between the MWE's component embedding vectors and the embedding vector of the MWE itself; consequently, the majority of the proposed metrics focus on calculating this similarity, followed by the determination of a threshold that indicates whether a VMWE is compositional or not based on the computed metric value. We compare the performance of the different criteria in distinguishing compositional and non-compositional VMWEs. All similarity calculations between two vectors are performed using cosine similarity. Additionally, the embedding models are trained on the original corpora to obtain the embedding vectors of all VMWE components. In this study, the overall compositionality of VMWEs is computed using six metrics.
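A minimal sketch of this preprocessing and training step, under our assumptions about the data layout (token lists per sentence plus the VMWE spans produced by the identification step), might look as follows with gensim; the hyperparameters shown are illustrative, not the paper's:

# Merge each detected VMWE into a single underscore-joined token before
# word2vec training, so the expression receives its own vector.
from gensim.models import Word2Vec

def merge_vmwe(tokens, span):
    """Replace the tokens at the index list `span` (e.g. [1, 4]) with one token."""
    joined = "_".join(tokens[i] for i in span)
    out = [t for i, t in enumerate(tokens) if i not in span]
    out.insert(span[0], joined)
    return out

corpus_sentences = [["او", "اقدام", "به", "خودکشی", "کرد"]]  # placeholder corpus
detected_spans = [[1, 4]]                                    # from the tagger

processed = [merge_vmwe(toks, span) if span else toks
             for toks, span in zip(corpus_sentences, detected_spans)]
# min_count=1 only so the toy corpus trains; a real corpus would use a higher cutoff
model = Word2Vec(processed, vector_size=100, window=5, min_count=1, sg=1)
model.save("vmwe_word2vec.model")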
In order to evaluate the resulting embedding vectors, we introduce a new metric called Syn_sim, in addition to two previously introduced metrics, Direct_pre and Direct_post, by Salehi et al. (2015) and Nandakumar et al. (2018). Furthermore, Rossyaykin and Loukachevitch (2019) and Loukachevitch and Parkhomenko (2018) proposed DFsing and DFsum, while Loukachevitch and Parkhomenko (2018) suggested DFcomp. These criteria are explained in more detail below.

Syn_sim: Intuitively, an embedding effectively captures the semantic meaning of a VMWE if it is similar to the embedding vector of that VMWE's synonymous simple verb, which is extracted from FarsNet. We directly compare two different similarity measures: (1) the similarity between the VMWE's embedding vector and that of the synonymous simple verb; and (2) the similarity between the synonymous verb and the 'combined' vector, computed as an element-wise sum over the embedding vectors of the VMWE's components. These two similarities are calculated using the following formulas:

$\mathrm{combined} = \sum_{i=1}^{n} w_i$  (1)
$\mathrm{sim\_syn\_vmwe} = \cos(\mathrm{vmwe}, \mathrm{syn\_verb}_1)$  (2)
$\mathrm{sim\_syn\_combined} = \cos(\mathrm{combined}, \mathrm{syn\_verb}_1)$  (3)

where vmwe, w_i, and syn_verb_1 are the embeddings of the VMWE, the i-th component of the VMWE, and the synonymous simple verb, respectively. In all cases, if sim_syn_vmwe is greater than sim_syn_combined, the constructed VMWE vector provides a better representation than the combined vector; thus, the use of the introduced embedding model leads to a better result, as it produces a more semantically aware representation of VMWEs.

Direct_pre: Assuming that compositional VMWEs tend to share context with their components, we compare the embedding vector of the VMWE with the 'combined' vector of its components by calculating the cosine similarity between them. Formally:

$\mathrm{Direct\_pre} = \cos(\mathrm{vmwe}, \mathrm{combined})$  (4)

Direct_post: The similarity between the embedding vector of a VMWE and each of its components is first measured; the overall compositionality of the VMWE is then computed by combining the similarity scores:

$\mathrm{Direct\_post} = \alpha \cdot \cos(\mathrm{vmwe}, w_1) + (1 - \alpha) \cdot \cos(\mathrm{vmwe}, w_2)$  (5)

where w_1 and w_2 denote the embeddings of the first and second components of the VMWE and \alpha is a weighting parameter. Here we assume that the VMWE consists of two components, as most Persian VMWEs are light verb constructions (LVCs), but the formula can easily be generalized to consider more than two components.

DFsum: The similarity between the embedding vector of a VMWE and the element-wise sum of the normalized vectors of its components is computed. Formally:

$\mathrm{norm\_comb\_sum} = \sum_{i=1}^{n} \frac{w_i}{|w_i|}$  (6)
$\mathrm{DFsum} = \cos(\mathrm{vmwe}, \mathrm{norm\_comb\_sum})$  (7)

DFcomp: The similarity between the word vectors of the VMWE's components is computed. Formally:

$\mathrm{DFcomp} = \cos(w_1, w_2)$  (8)

DFsing: The similarity between the embedding vector of a VMWE and the vector of its most similar single word (sim_word) is calculated as below:

$\mathrm{DFsing} = \cos(\mathrm{vmwe}, \mathrm{sim\_word})$  (9)
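The criteria in Equations (1) and (4)-(8) translate directly into a few lines of numpy; the sketch below is our illustration (the name alpha for the weight in Equation (5) is our choice), operating on toy vectors standing in for word2vec lookups. DFsing is omitted since it additionally requires a nearest-neighbour search over the vocabulary.

import numpy as np

def cos(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def compositionality_scores(vmwe_vec, comp_vecs, alpha=0.5):
    combined = np.sum(comp_vecs, axis=0)                                   # Eq. (1)
    norm_sum = np.sum([c / np.linalg.norm(c) for c in comp_vecs], axis=0)  # Eq. (6)
    w1, w2 = comp_vecs[0], comp_vecs[1]
    return {
        "Direct_pre":  cos(vmwe_vec, combined),                            # Eq. (4)
        "Direct_post": alpha * cos(vmwe_vec, w1)
                       + (1 - alpha) * cos(vmwe_vec, w2),                  # Eq. (5)
        "DFsum":       cos(vmwe_vec, norm_sum),                            # Eq. (7)
        "DFcomp":      cos(w1, w2),                                        # Eq. (8)
    }

# toy vectors standing in for model.wv lookups
rng = np.random.default_rng(0)
vmwe, w1, w2 = rng.normal(size=(3, 100))
print(compositionality_scores(vmwe, [w1, w2]))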
Dataset for compositionality prediction: For our experiments, we use four current Persian corpora, namely Bijankhan, HmBlogs, PARSEME, and PerDT, to statistically study the occurrences of VMWEs in Persian texts.

Bijankhan: The Bijankhan dataset is a tagged corpus gathered from daily news and common texts (Bijankhan, 2004). This corpus contains about 2.6 million tagged words with 550 Persian part-of-speech tags.

HmBlogs: A tokenized corpus of 500 million sentences and 6.5 billion tokens gathered by Khansari and Shamsfard (2021). We use its first 1 million sentences.

Compositional and non-compositional VMWE dataset: A self-gathered dataset of compositional and non-compositional verbs, identified by linguists and annotated for compositionality on a binary scale. Following Karimi (1997) and Sharif (2017), 33 compositional and 22 non-compositional verbs were extracted in infinitive form.

Results and Discussion

This section showcases the evaluation outcomes achieved during the testing phase for identifying VMWEs and predicting their compositionality. The evaluation of all identification techniques was performed on the Parseme corpus test set.

VMWE Identification Evaluation: We trained our identification networks using the Parseme and PerDT corpora, identifying 2451 VMWEs (1669 unique ones) in Parseme and using the IOB format for tagging. We also tagged VMWEs from PerDT and added them to the training set. Tables 1 and 2 report the results. The first row of Table 1 shows the results of the non-contextual method. For the other methods, the first row of each was trained on the Parseme corpus only, while the remaining rows used both corpora to train the models; the second and third rows use the rule-based and the manually tagged PerDT, respectively. It is not surprising that the contextual methods utilizing neural networks exhibit a substantial improvement over the non-contextual method. The LSTM model performs relatively better as the training set grows, achieving about a 73% F1-score. The BERT model has the highest F1-score, 89.07%, on the PARSEME training set. The BERT model performs better on PARSEME due to inaccuracies in the manual and rule-based tagging of PerDT, caused by the absence of expert annotators and limited expert evaluation. Additionally, BERT's sensitivity to incorrect data is higher than the LSTM model's, as it is pre-trained on Persian, resulting in lower performance for the second and third rows.

We also analyzed the results for seen and unseen verbs. Table 2 shows the evaluation results of the best model (BERT fine-tuned on Parseme) on seen and unseen verbs under two definitions:
1. Seen verbs are verbs whose exact forms (person, tense, etc.) exist in the training set.
2. The core (main verb) of every verbal expression in the test and training sets is turned into its infinitive form, and we then check whether the expression exists in the training set.

Table 2: Proportion of seen VMWEs in Parseme and the percentage of correct detection of seen (CDSV) and unseen (CDUV) verbs.
    Seen proportion | CDSV   | CDUV
1   33.33%          | 89.00% | 62.56%
2   73.12%          | 80.42% | 46.75%
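The second definition can be sketched as follows; to_infinitive stands in for a Persian verb normalizer, which we assume is available, and the assumption that the verbal core is the last component is ours:

# Split test VMWEs into seen and unseen after reducing the verbal core
# (assumed to be the last component) to its infinitive form.

def normalize(vmwe, to_infinitive):
    return tuple(vmwe[:-1]) + (to_infinitive(vmwe[-1]),)

def split_seen_unseen(train_vmwes, test_vmwes, to_infinitive):
    train_norm = {normalize(v, to_infinitive) for v in train_vmwes}
    seen = [v for v in test_vmwes if normalize(v, to_infinitive) in train_norm]
    unseen = [v for v in test_vmwes if normalize(v, to_infinitive) not in train_norm]
    return seen, unseen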
Compositionality Prediction of VMWEs: The experiments began with analyzing the top most similar words or expressions to some of the frequent VMWEs, in order to find the embedding model best able to capture VMWE semantics. By increasing the corpus size, we observe that the top most similar expressions of a VMWE come closer to the meaning of that VMWE; for example, the top similar expressions obtained with word embedding models trained on relatively small corpora such as Parseme and PerDT are far from the semantic meaning of the verb. Besides, most VMWEs in Persian are light verb constructions (LVCs), which consist of a semantically reduced verb and an NVE, and a limited set of light verbs, around 20 Persian full verbs (Family, 2006), can be combined with an NVE to form a VMWE. Most of the top similar expressions obtained using fasttext-generated embedding vectors have a similar verbal element with a different NVE, due to the character-level nature of fasttext embedding models; the semantics of the VMWE is therefore not well captured by fasttext. This being the case, for analyzing the compositionality of VMWEs, only the word2vec model trained on HmBlogs, which is the largest corpus, is considered.

To assess the compositional nature of a verb in the dataset, the median value of each proposed criterion is calculated over the five most frequently occurring inflections of the verb. This median value is then used to determine the degree of compositionality of the infinitive verb, as measured by the given metric. Table 3 presents our experimental results for Direct_pre, Direct_post, DFsum, and DFcomp using the optimal threshold; the most accurate threshold was determined for each criterion within the calculated range of values. Direct_pre and DFsum achieved the highest accuracy of 70.9% among the proposed metrics in distinguishing between compositional and non-compositional verbs: a Direct_pre or DFsum value above 0.23 indicates a compositional verb, while a value below it indicates a non-compositional verb. Although Direct_post is also accurate, DFcomp had the lowest accuracy and did not effectively separate the two categories.

Table 3: Evaluation results of the criteria.
Criterion     | threshold | accuracy
Direct_pre    | 0.23      | 0.709
Direct_post   | 0.27      | 0.655
DFcomp        | 0.23      | 0.618
DFsum         | 0.23      | 0.709

Table 5 shows Direct_pre results for various VMWEs, where the values are highly similar to those of the DFsum metric. Non-compositional verbs typically have a lower calculated criterion value than compositional verbs. However, some non-compositional verbs such as چشم_زدن (eye hitting => jinxing) have unexpectedly high calculated values due to their low occurrence frequency. This shows that a higher occurrence frequency is likely to result in a more accurate calculated value, which should be taken into consideration when predicting compositionality. Moreover, DFcomp overestimates non-compositional verbs compared to compositional ones, and DFsing is unsuitable because the most similar expressions are often themselves compound verbs.

Table 5: Samples of Direct_pre and DFcomp results.
Compositional verb (gloss) | freq | Direct_pre | DFcomp
نگاه_کنید (look do => look) | 296 | 0.37 | 0.30
تغییر_کند (change do => change) | 130 | 0.43 | 0.33
خاک_کرد (soil did => buried) | 3 | 0.23 | 0.16
فکر_کنید (think do => think) | 258 | 0.40 | 0.24
قرار_دادن (put have => putting up) | 1806 | 0.38 | 0.32
به_دنیا_آمده (to world came => born) | 105 | 0.51 | 0.25
Non-compositional verb (gloss) | freq | Direct_pre | DFcomp
چشم_زدن (eye hitting => jinxing) | 7 | 0.23 | 0.22
فریب_خورده (deception ate => deceived) | 28 | 0.25 | 0.40
دوست_دارم (friend have => to like) | 1032 | 0.10 | 0.56
شکست_خورده (failure ate => failed) | 132 | 0.17 | 0.51
زمین_خوردن (land eating => falling down) | 50 | 0.13 | 0.29
چانه_زدن (chin hitting => to bargain) | 62 | 0.14 | 0.40
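The decision rule described above (median over the five most frequent inflections, compared against the tuned threshold) can be sketched as follows; the inflection data in the example are hypothetical:

# Classify an infinitive verb as compositional from per-inflection scores.
from statistics import median

def is_compositional(inflection_scores, threshold=0.23, top_k=5):
    """inflection_scores maps an inflected form to (frequency, criterion value)."""
    top = sorted(inflection_scores.items(),
                 key=lambda item: item[1][0], reverse=True)[:top_k]
    med = median(score for _, (_, score) in top)
    return med > threshold  # above the threshold -> compositional

scores = {"نگاه_کنید": (296, 0.37), "نگاه_کرد": (120, 0.35), "نگاه_کردم": (40, 0.33)}
print(is_compositional(scores))  # True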
Analysis of Proposed Criteria: Further analysis of Syn_sim reveals that, out of 75152 non-repetitive VMWEs in the corpus, synonymous simple verbs were extracted for 4384 VMWEs; among them, for 3558 VMWEs the similarity of the synonymous simple verb to the VMWE is greater than the similarity of the synonymous simple verb to the combined vector (Table 4). Therefore, for 81% of these VMWEs, the VMWE embedding vector constructed by the proposed method provides a better representation than the combined vector.

Table 4: The degree of similarity with a synonymous simple verb.

Conclusion

To conclude, this paper presented an approach to predicting the compositional nature of VMWEs in Persian. The proposed method utilized automatic identification of VMWEs, followed by the creation of word embeddings that better capture the semantic properties of these expressions, and multiple criteria to determine their degree of compositionality. The study compared two neural architectures, BiLSTM and ParsBERT, and found that a fine-tuned BERT model outperformed the BiLSTM model with an F1 score of 89%. Moreover, the paper demonstrated the effectiveness of a word2vec embedding model in capturing the semantics of identified VMWEs, using the proposed criteria to estimate compositionality with an accuracy of 70.9% on a collected dataset of expert-annotated compositional and non-compositional VMWEs. These findings have important implications for further research on predicting the compositional nature of multiword expressions.

Limitations

The limitations of our approach are mainly attributed to the limited annotated dataset of compositional and non-compositional VMWEs used in our experiments, which may not be representative of the full population of VMWEs in the Persian language. Moreover, the high prevalence of VMWEs in Persian and the varying perspectives among linguists on their compositional status add to the limitations of our results. Furthermore, the reliance on word embeddings may lead to potential inaccuracies in capturing the semantic information of words, especially for Persian, which is a low-resource language: the limited data available for training word embeddings may not accurately reflect language usage, resulting in a higher risk of inaccuracies for common words that do not appear frequently in the training corpus. In addition, future research should conduct a comprehensive evaluation of the rule-based approach against the neural network-based models, which would necessitate a more substantial dataset annotated by domain experts. Given these limitations, the results should be interpreted with caution, and further research is needed to fully understand the complexities of VMWEs in the Persian language.

Figure 1: The architecture of the ConvNet + LSTM model.

References

Mahmood Bijankhan. 2004. The role of the corpus in writing a grammar: An introduction to a software. Iranian Journal of Linguistics, 19(2):48-67.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.

A. Chaghari and Mehrnoush Shamsfard. 2013. Identification of verbs in Persian language sentences. Journal of Computer Science and Engineering.

Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. arXiv preprint arXiv:1705.02364.

Mathieu Constant, Gülşen Eryiğit, Johanna Monti, Lonneke van der Plas, Carlos Ramisch, Michael Rosner, and Amalia Todirascu. 2017. Multiword expression processing: A survey. Computational Linguistics, 43(4):837-892.

Silvio Cordeiro, Aline Villavicencio, Marco Idiart, and Carlos Ramisch. 2019. Unsupervised compositionality prediction of nominal compounds. Computational Linguistics, 45(1):1-57.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Neiloufar Family. 2006. Explorations of semantic space: The case of light verb constructions in Persian. PhD thesis, EHESS, Paris.

Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, and Mohammad Manthouri. 2021. ParsBERT: Transformer-based model for Persian language understanding. Neural Processing Letters, 53:3831-3847.

Waseem Gharbieh, Virendrakumar Bhavsar, and Paul Cook. 2017. Deep learning models for multiword expression identification. In Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017), pages 54-64.

Simin Karimi. 1997. Persian complex verbs: Idiomatic or compositional. Lexicology, 3:273-318.
Hamzeh Motahari Khansari and Mehrnoush Shamsfard. 2021. HmBlogs: A big general Persian corpus. arXiv preprint arXiv:2111.02362.

Natalia Loukachevitch and Ekaterina Parkhomenko. 2018. Recognition of multiword expressions using word embeddings. In Russian Conference on Artificial Intelligence, pages 112-124. Springer.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems, 26.

Navnita Nandakumar, Bahar Salehi, and Timothy Baldwin. 2018. A comparative study of embedding models in predicting the compositionality of multiword expressions. In Proceedings of the Australasian Language Technology Association Workshop 2018, pages 71-76.

Nikita Nangia, Adina Williams, Angeliki Lazaridou, and Samuel R. Bowman. 2017. The RepEval 2017 shared task: Multi-genre natural language inference with sentence representations. arXiv preprint arXiv:1707.08172.

Carlos Ramisch, Silvio Cordeiro, Agata Savary, Veronika Vincze, Verginica Barbu Mititelu, Archna Bhatia, Maja Buljan, Marie Candito, Polona Gantar, and Voula Giouli. 2018. Edition 1.1 of the PARSEME shared task on automatic identification of verbal multiword expressions. In Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018).

Mohammad Sadegh Rasooli, Manouchehr Kouhestani, and Amirsaeid Moloodi. 2013. Development of a Persian syntactic dependency treebank. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 306-314.
Omid Rohanian, Marek Rei, Shiva Taslimipoor, and Le Ha. 2020. Verbal multiword expressions for identification of metaphor. In ACL.

P. O. Rossyaykin and N. V. Loukachevitch. 2019. Measure clustering approach to MWE extraction. In Komp'juternaja Lingvistika i Intellektual'nye Tehnologii, pages 562-575.

Ivan A. Sag, Timothy Baldwin, Francis Bond, Ann Copestake, and Dan Flickinger. 2002. Multiword expressions: A pain in the neck for NLP. In Computational Linguistics and Intelligent Text Processing: Third International Conference, CICLing 2002, Mexico City, Mexico, February 17-23, 2002, Proceedings, pages 1-15. Springer.

Bahar Salehi, Narjes Askarian, and Afsaneh Fazly. 2012. Automatic identification of Persian light verb constructions. In International Conference on Intelligent Text Processing and Computational Linguistics, pages 201-210. Springer.

Bahar Salehi and Paul Cook. 2013. Predicting the compositionality of multiword expressions using translations in multiple languages. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, pages 266-275.

Bahar Salehi, Paul Cook, and Timothy Baldwin. 2015. A word embedding approach to predicting the compositionality of multiword expressions. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 977-983.

Pourya Saljoughi Badlou. 2016. Recognizing multiword expressions in Persian. Ph.D. thesis, Shahid Beheshti University.
Agata Savary, Carlos Ramisch, Silvio Ricardo Cordeiro, Federico Sangati, Veronika Vincze, Behrang Qasemi Zadeh, Marie Candito, Fabienne Cap, Voula Giouli, and Ivelina Stoyanova. 2017. The PARSEME shared task on automatic identification of verbal multiword expressions. In The 13th Workshop on Multiword Expressions at EACL, pages 31-47.

Nathan Schneider, Dirk Hovy, Anders Johannsen, and Marine Carpuat. 2016. SemEval-2016 task 10: Detecting minimal semantic units and their meanings (DiMSUM). In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 546-559.

Mehrnoush Shamsfard. 2007. Developing FarsNet: A lexical ontology for Persian. GWC 2008:413.

Babak Sharif. 2017. Persian compound verb formation from a cognitive grammar viewpoint. Language Related Research, 8(2):149-170.

Shiva Taslimipoor and Omid Rohanian. 2018. SHOMA at PARSEME shared task on automatic identification of VMWEs: Neural multiword expression tagging with high generalisation. arXiv preprint arXiv:1809.03056.
6,530,117
Ukwabelana - An open-source morphological Zulu corpus
[ 13149107 ]
Ukwabelana - An open-source morphological Zulu corpus. Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), Beijing, August 2010. Sebastian Spiegler spiegler@cs.bris.ac.uk (Intelligent Systems Group, University of Bristol), Andrew van der Spuy andrew.vanderspuy@wits.ac.za (Linguistics Department, University of the Witwatersrand), Peter A. Flach peter.flach@bris.ac.uk (Intelligent Systems Group, University of Bristol).

Introduction

Zulu (also known as isiZulu) is a Bantu language of South Africa, classified as S.30 in Guthrie's classification scheme (Guthrie, 1971). Since 1994, it has been recognized as one of the eleven official languages of South Africa. It has a written history of about 150 years: the first grammar was published by Grout (1859), and the first dictionary by Colenso (1905). There are about 11 million mother-tongue speakers, who constitute approximately 23% of South Africa's population, making Zulu the country's largest language. Zulu is highly mutually intelligible with the Xhosa, Swati and Southern Ndebele languages, and with Ndebele of Zimbabwe (Lanham, 1960), to the extent that all of these can be considered dialects or varieties of a single language, Nguni. Despite its size, Zulu is considerably under-resourced compared to Western languages with similar numbers of speakers, e.g. Swedish. There are only about four regular publications in Zulu, there are few published books, and the language is not used as a medium of instruction. This is partly due to the short timespan of its written history, but the main reason, of course, is the apartheid history of South Africa: for most of the twentieth century, resources were allocated to Afrikaans and English, the two former official languages, and relatively few resources to the indigenous Bantu languages. Since 1994, Zulu has had a much larger presence in the media, with several television programs being broadcast in Zulu every day. Yet much needs to be done in order to improve the resources available to Zulu speakers and students of Zulu.

The aim of the project reported in this paper was to establish a Zulu corpus, named the Ukwabelana corpus, consisting of morphologically labeled words (that is, word types) and part-of-speech (POS) tagged sentences. Along with the labeled corpus, unlabeled words and sentences, a morphological grammar, a semi-automatic morphological analyzer and a POS tagger for morphologically analyzed words will be provided. The sources used for the corpus were limited to fictional works and the Zulu Bible. This means that there is not a wide variety of registers, and perhaps even of vocabulary items; this defect will have to be corrected in future work.

The Ukwabelana corpus can be used to develop and train automatic morphological analyzers, which can in turn tag a large corpus of written Zulu, similar to the Brown corpus or the British National Corpus. Moreover, the list of POS-tagged sentences is an essential step towards building an automatic syntactic tagger, which still does not exist for Zulu, and a tagged corpus of Zulu.
Such a corpus would be beneficial to language researchers as it provides them with examples of actual usage, as opposed to elicited or invented examples, which may be artificial or unlikely to occur in real discourse. This would greatly improve the quality of Zulu dictionaries and grammars, most of which rely heavily on the work of Doke (1927) and Doke, Malcolm and Sikakana (1958), with little in the way of innovation. Morphological tagging is also useful for practical computational applications like predictive text, spell-checking, grammar checking and machine translation; in the case of Zulu, where a large percentage of grammatical information is conveyed by prefixes and suffixes rather than by separate words, it is essential. For example, in English, the negative is expressed by means of a separate word 'not', but in Zulu the negative is constructed using a prefix-and-suffix combination on the verb, and this combination differs according to the mood of the verb (indicative, participial or subjunctive). The practical computational applications mentioned could have a very great impact on the use of Zulu as a written language, as spell-checking and grammar checking would benefit proofreaders, editors and writers. Machine translation could aid in increasing the number of texts available in Zulu, thus making it more of a literary language, and allowing it to become established as a language of education. The use of Zulu in public life could also increase: currently, the tendency is to use English, as this is the language that reaches the widest audience; if high-quality automatic translation becomes available, this would no longer be necessary. As it is hoped that the Ukwabelana corpus will be of benefit to any person doing research on Zulu or on computer-aided analysis of languages, it will be made available as the first morphologically analysed corpus of Zulu in the public domain.

Related work

In this section, we give an overview of linguistic research on Nguni languages, following the discussions in van der Spuy (2001), and thereafter a summary of computational approaches to the analysis of Zulu.

Linguistic research on Nguni languages

The five Nguni languages Zulu, Xhosa, South African Ndebele, Swati, and Zimbabwean Ndebele are highly mutually intelligible, and for this reason, works on any of the other Nguni languages are directly relevant to an analysis of Zulu. There have been numerous studies of Nguni grammar, especially its morphology; in fact, the Nguni languages probably rival Swahili and Chewa for the title of most-studied Bantu language. The generative approach to morphological description (as developed by Aronoff (1976), Selkirk (1982), Lieber (1980) and Lieber (1992)) has had very little influence on most of the work that has been done on Nguni morphology. Usually, the descriptions have been atheoretical or structuralist. Doke's paradigmatic description of the morphology (Doke, 1927; Doke, 1935) has remained the basis for linguistic work in the Southern Bantu languages. Doke (1935) criticized previous writers on Bantu grammars for basing their classification, treatment and terminology on their own mother tongue or Latin. His intention was to create a grammatical structure for Bantu which did not conform to European or classical standards.
Nevertheless, Doke himself could not shake off the European mindset: he treated the languages as if they had inflectional paradigms, with characteristics like subjunctive or indicative belonging to the whole word rather than to identifiable affixes; in fact, he claimed (1950) that Bantu languages are "inflectional with [just] a tendency to agglutination", and assumed that the morphology was linear, not hierarchical. Most subsequent linguistic studies and reference grammars of the Southern Bantu languages have been directed at refining or redefining Doke's categories from a paradigmatic perspective. Important Nguni examples are Van Eeden (1956), Van Wyk (1958), Beuchat (1966), Wilkes (1971), Nkabinde (1975), Cope (1984), Davey (1984), Louw (1984), Ziervogel et al. (1985), Gauton (1990), Gauton (1994), Khumalo (1992), Poulos and Msimang (1998), Posthumus (1987), Posthumus (1988) and Posthumus (2000). Among the very few generative morphological descriptions of Nguni are Lanham (1971), Mbadi (1988) and Du Plessis (1993). Lanham (1971) gives a transformational analysis of Zulu adjectival and relative forms; this analysis can be viewed as diachronic rather than synchronic. Mbadi (1988) applies Lieber (1980) and Selkirk's percolation theory (Selkirk, 1982) to a few Xhosa morphological forms. Du Plessis (1993) gives a hierarchical description of the morphology of the verb, but he assumes that derivation is syntactical rather than lexical. In short, there has been no thorough-going generative analysis of the morphology which has treated the Nguni languages as agglutinative rather than inflectional.

Computational approaches to analyzing Zulu

In the last decade, various computational approaches for Zulu have been reported. Based on the Xerox finite-state toolbox by Beesley and Karttunen (2003), Pretorius and Bosch (2003) developed a prototype of a computational morphological analyzer for Zulu. Using a semi-automated process, a morphological lexicon and a rule-base were built incrementally. Later work (Pretorius and Bosch, 2007) dealt with overgeneration of the Zulu finite-state tool concerning locative formation from nouns and verbal extensions to verb roots. Pretorius and Bosch (2009) also used cross-linguistic similarities and dissimilarities of Zulu to bootstrap a morphological analyser for Xhosa. Joubert et al. (2004) followed a bootstrapping approach to morphological analysis. A simple framework uses morpheme lists, morphophonological and morphosyntactic rules which are learnt by consulting an oracle, in their case a linguistic expert who corrects analyses. The framework then revises its grammar so that the updated morpheme lists and rules do not contradict previously found analyses. Botha and Barnard (2005) compared two approaches for gathering Zulu text corpora from the World Wide Web. They drew the conclusion that using commercial search engines for finding Zulu websites outperforms webcrawlers even with a carefully selected starting point. They saw the reason for that in the fact that most documents on the internet are in one of the world's dominant languages. Bosch and Eiselen (2005) presented a spell checker for Zulu based on morphological analysis and regular expressions. It was shown that after a certain threshold for the lexicon size, performance could only be improved by incrementally extending morphological rules. Experiments were performed for basic and complex Zulu verbs and nouns, and large numbers of words still were not recognized.
Spiegler et al. (2008) performed experiments in which they tested four machine learning algorithms for morphological analysis with different degrees of supervision. An unsupervised algorithm analyzed a raw word list, two semi-supervised algorithms were provided with word stems and subsequently segmented prefix and suffix sequences, and the supervised algorithm used a language model of analysed words which was applied to new words. They experimentally showed that there is a certain trade-off between the usage of labeled data and performance. They also reckoned that computational analysis improves if words of different grammatical categories are analysed separately, since homographic morphemes exist across different word categories.

Zulu morphology

Zulu is an agglutinative language with a complex morphology. It presents an especial problem for computational analysis, because words usually incorporate both prefixes and suffixes, and there can be several of each. This makes it hard to identify the root by mechanical means, as the root could be the first, second, third, or even a later morpheme in a word. The complexities involved are exacerbated by the fact that a considerable number of affixes, especially prefixes, have allomorphic forms. This is largely brought about by the fact that Zulu has a prohibition against sequences of vowels, so that a prefix whose canonical form is nga- will have an allomorph ng- before roots that begin with vowels. Given a sequence nga-, then, it is possible that it constitutes an entire morpheme, or the beginning of a morpheme like the verb root ngabaz- 'to be uncertain', or a morpheme ng- followed by a vowel-commencing root like and- 'to increase'. Furthermore, many morphemes are homographs, so that the prefix nga- could represent either the potential mood morpheme or a form of the negative that occurs in subordinate clauses; and the sequence ng- could be the allomorph of either of these, or of a number of homographic morphemes ngi-, which represent the first person singular in various moods. Besides these phonologically conditioned allomorphs, there are also morphologically conditioned ones, for example the locative prefix e- has an allomorph o- that occurs in certain morphological circumstances. Certain morpheme sequences also exhibit syncretism, so that while most nouns take a sequence of prefixes known as the initial vowel and the noun prefix, as in i-mi-zi 'villages', nouns of certain classes, like class 5, syncretise these two prefixes, as in i-gama 'name', where the prefix i- represents both the initial vowel and the noun prefix.

Like all other Bantu languages, Zulu divides its nouns into a number of classes. The class is often identifiable from the noun prefix that is attached to the noun, and it governs the agreement of all words that modify the noun, as well as of predicates of which the noun is a subject. Object agreement may also be marked on the predicate. Two examples of this agreement are given below.

Example 1. Leso si-tshudeni e-si-hle e-ngi-si-fundis-ile si-phas-e kahle. (that student who-AGR-good who-I-him-teach-PAST AGR-pass-PAST well) 'That good student whom I taught passed well.'

Example 2. Lowo m-fundi o-mu-hle e-ngi-m-fundis-ile u-phas-e kahle. (that learner who-AGR-good who-I-him-teach-PAST AGR-pass-PAST well) 'That good learner whom I taught passed well.'

The differences in agreement morphology in the two sentences are brought about because the nouns sitshudeni and mfundi belong to different classes.
Canonici (1996) argues that a noun should be assigned to a class by virtue of the agreement that it takes. In terms of this criterion, there are twelve noun classes in Zulu, numbered 1-7, 9, 10, 11, 14 and 15. The numbering system was devised by Meinhof (1906) and reflects the historical affinities between Zulu and other Bantu languages: Zulu lacks classes 8, 12 and 13, which are found in other Bantu languages. In the labels used in the database, morphemes that command or show agreement have been labeled as <xn>, where x is a letter or sequence of letters and n is a number: thus the morpheme m- in mfundi is labeled <n1>, as it marks the noun as belonging to noun class 1, and the morpheme si- in engisifundisile is marked <o7>, as it shows object agreement with a noun of class 7.

Zulu predicatives may be either verbal or non-verbal - the latter are referred to in the literature as copulatives. Copulatives usually consist of a predicative prefix and a base, which may be a noun, an adjective, or a prepositional, locative or adverbial form. There may also be various tense, aspect and polarity markers. They translate the English verb 'be' plus its complement - Zulu has no direct equivalent of 'be'; the verb -ba, which has the closest meaning, is probably better translated as 'become'. Examples of copulative forms are ubenguthisha 'he was a teacher', zimandla 'they are strong', basekhaya 'they are at home'. Predicatives may occur in a variety of moods, tenses, aspects and polarities; these are usually distinguished by the affixes attached to the base form. Thus in engasesendlini '(s)he no longer being in the house', the initial prefix e- indicates third person singular, class 1, participial mood; the prefix nga- denotes negative; the first prefix se- denotes continuative aspect; the second prefix se- is the locative prefix; n- shows that the noun belongs to class 9; dl- is the noun root meaning 'house', an allomorph of the canonical form -dlu; and -ini is the locative suffix. Thus, in typical agglutinative manner, each affix contributes a distinctive part of the meaning of the word as a whole. This characteristic of the language was exploited in the labeling system used for the morphological corpus: labels were designed so as to indicate the grammatical function of the morpheme. A person searching for past tense negative verbs, for example, could simply search for the combination of <past>, <neg> and <vr>. A complete list of morphemes, allomorphs and their labels is provided along with the corpus and other resources.

According to the Dokean grammatical tradition (Doke, 1927), Zulu has a large number of parts of speech. This is because what would be separate words in other languages are often prefixes in Zulu, and also because various subtypes of determiner are given individual names. The parts of speech recognised in the corpus are: noun, verb, adjective, pronoun, adverb, conjunction, prepositional, possessive, locative, demonstrative, presentative, quantitative, copulative and relative. Adjective includes the traditional Dokean adjective (a closed class of roots which take noun prefixes as their agreement prefixes) and the predicative form of the Dokean relative, which is seen as an open class of adjectives (cf. van der Spuy (2006)). Pronouns are the personal pronouns, which may also (sometimes in allomorphic form) be used as agreement morphemes in quantifiers.
Adverbs may be forms derived from adjectives by prefixing ka- to the root, or morphologically unanalysable forms like phansi 'in front, forward'. Ideophones have been included as adverbs. Prepositionals are words that incorporate the Dokean "adverbials" na- 'with', nga- 'by means of', njenga- 'like', kuna- 'more than', etc., which are better analysed as prepositions. The presentative is Doke's "locative demonstrative copulative" - the briefer name was suggested by van der Spuy (2001). Copulatives are all Doke's copulatives, excluding the adjectives mentioned above. Relatives are all predicative forms incorporating a relative prefix.

The labeling scheme

The labeling scheme has been based on the idea that each morpheme in a word should be labeled, even when words belong to a very restricted class. For example, the demonstratives could have been labeled as composite forms, but instead it is assumed that demonstratives contain between one and three morphemes, e.g. le<d>si<d7>ya<po3> 'a demonstrative of the third position referring to class 7', i.e. 'that one yonder, class 7'. It should be possible from this detailed labeling to build up an amalgam of the morphological structure of the word. The labels have been chosen to be both as brief as possible and as transparent as possible, though transparency was often sacrificed for brevity. Thus indicative subject prefixes are labeled <i1-15>, relative prefixes are labeled <r>, and noun prefixes are labeled <n1-15>; but negative subject prefixes are labeled <g1-15> and possessive agreement prefixes are labeled <z1-15>. Sometimes a single label was used for several different forms, when these are orthographically distinct; so, for example, <asp> (aspect) is used as a label for the following, among others: the continuative prefix sa- and its allomorph se-, the exclusive prefix se-, and the potential prefix nga- and its allomorph ng-. A person searching for forms containing the potential aspect would have to search for 'nga<asp> + ng<asp>'; however, there should be no ambiguity, as the orthographic form would eliminate it. The detailed description of the scheme is provided by Spiegler et al. (2010).

Annotation process

The goal of this project was to build a reasonably sized corpus of morphologically annotated words of high quality which could later be used for developing and training automatic morphological analyzers. For this reason, we gathered a list of the commonest Zulu word types, defined a partial grammar and parsed Zulu words with a logic algorithm which proposes possible parses based on the partial grammar. Compared to a completely manual approach, this framework provided possible annotations to choose from, or the option to type in an annotation if none of the suggestions was correct. This semi-automatic process sped up the labeling by an estimated factor of 3-4, compared to a purely manual approach. In Figure 1 we illustrate the annotation process, and in the following subsections each step is detailed.

Unannotated word list

A list of unannotated Zulu words has been compiled from fictional works and the Zulu Bible. The original list comprises around 100,000 of the commonest Zulu word types. No information, morphological or syntactic, was given along with the words. We selected an initial subset of 10,000 words, although our long-term goal is the complete analysis of the entire word list.
Partial grammar

Our choice for representing the morphological Zulu grammar was the formalism of Definite Clause Grammars (DCGs) used in the logic programming language Prolog. Although we defined our grammar as a simple context-free grammar, DCGs can also express context-sensitive grammars by associating variables as arguments to non-terminal symbols (Gazdar and Mellish, 1989). When defining our morphological grammar, we assumed that a linguistic expert could enumerate all or at least the most important morphological rules and morphemes of 'closed' morpheme categories, e.g. prefixes and suffixes of nouns and verbs. Morphemes of 'open' categories like noun and verb roots, however, would need to be hypothesized during the semi-automatic analysis and confirmed by the linguistic expert. Our final grammar comprised around 240 morphological rules and almost 300 entries in the morpheme dictionary. Since we did not only want to recognize admissible Zulu words but also obtain their morphological structure, we needed to extend our DCG by adding parse construction arguments as shown in the example below.

Example 3.
    w((X))     --> n(X).
    n((X,Y,Z)) --> iv(X), n2(Y), nr(Z).
    iv(iv(a))  --> [a].
    n2(n2(ba)) --> [ba].

A possible parse for the word abantu 'people' could be iv(a), n2(ba), *nr(ntu), where '*' marks the hypothesized noun root. With our partial grammar we could not directly use the inbuilt Prolog parser since we had to account for missing dictionary entries: Zulu verb and noun roots. We therefore implemented an algorithm which would generate hypotheses for possible parses according to our grammar. The algorithm will be described in the next subsection.

Hypothesis generation

For the hypothesis generation we reverted to logic programming and abductive reasoning. Abduction is a method of reasoning which is used with incomplete information. It generates possible hypotheses (parses) for an observation (word) and a given theory (grammar). Depending on the implementation, abduction finds the best hypothesis by evaluating all possible explanations. Our abductive algorithm is an extension of the meta-interpreter designed by Flach (1994) which only enumerates possible parses based on the grammar. A linguistic expert would then choose the best hypothesis. The algorithm invokes rules top-down starting with the most general until it reaches the last level of syntactic variables. These variables are then matched against their dictionary entries from the left to the right of the word. A possible parse is found if either all syntactic variables can be matched to existing dictionary entries or if an unmatched variable is listed as abducible. Abducibles are predefined non-terminal symbols whose dictionary entry can be hypothesized. In our case, abducibles were noun and verb roots.

Evaluation and best hypothesis

Our annotation framework only enumerated allowable parses for a given word, therefore a linguistic expert needed to evaluate hypotheses. We provided a web-interface to the annotation framework, so that multiple users could participate in the annotation process. They would choose either a single or multiple correct parses. If none of the hypotheses were correct, the user would provide the correct analysis. Although our grammar was incomplete it still generated a substantial number of hypotheses per word. These were in no particular order and a result of the inherent ambiguity of Zulu morphology. We therefore experimented with different ways of improving the presentation of parses. The most promising approach was structural sorting: parses were alphabetically re-ordered according to their morphemes and labels such that similar results were presented next to each other.
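To make the abductive hypothesis generation described above concrete, here is a minimal Python sketch (our simplification for illustration, not the released Prolog meta-interpreter). It parses a word against a toy context-free morphological grammar, and when a category marked as abducible cannot be matched in the morpheme dictionary, it hypothesizes a new root instead of failing; the toy rules and morphemes are taken from Example 3.

```python
# Toy grammar: w -> iv n2 nr, with nr (noun root) declared abducible.
RULES = {"w": [["iv", "n2", "nr"]]}
DICT = {"iv": ["a"], "n2": ["ba"]}   # closed-class morphemes only
ABDUCIBLES = {"nr"}                  # open classes: roots may be hypothesized

def parse(cat, word):
    """Yield (parse, remainder) pairs; '*' marks hypothesized morphemes."""
    if cat in RULES:                             # non-terminal: expand each rule
        for body in RULES[cat]:
            yield from parse_seq(body, word)
    else:                                        # pre-terminal: match dictionary
        for m in DICT.get(cat, []):
            if word.startswith(m):
                yield [f"{cat}({m})"], word[len(m):]
        if cat in ABDUCIBLES:                    # abduce an unseen root
            for i in range(1, len(word) + 1):
                yield [f"*{cat}({word[:i]})"], word[i:]

def parse_seq(cats, word):
    if not cats:
        yield [], word
        return
    for head, rest in parse(cats[0], word):
        for tail, rest2 in parse_seq(cats[1:], rest):
            yield head + tail, rest2

# All complete hypotheses for 'abantu'; the expert then picks the best one.
print([p for p, rest in parse("w", "abantu") if rest == ""])
# -> [['iv(a)', 'n2(ba)', '*nr(ntu)']]
```

The actual system additionally enumerates alternative rule expansions and presents all competing hypotheses to the annotator, as described above.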
Grammar update

The grammar was defined in an iterative process and extended if the linguistic expert found morphemes of closed categories which had not been listed yet or certain patterns of incomplete or incorrect parses caused by either missing or inaccurate rules. The updated rules and dictionary were considered for newly parsed words.

Annotated word list and curation process

Although there had been great effort in improving the hypothesis generation of the parsing algorithm, a reasonable number of morphological analyses still had to be provided manually. During the curation process, we therefore had to deal with removing typos and standardizing morpheme labels provided by different experts. In order to guarantee a high quality of the morphological corpus, we also inspected single labels and analyses for their correctness. This was done by examining frequencies of labels and label combinations assuming that infrequent labels and combinations were likely to be incorrect and needed to be manually examined again. The finally curated corpus has an estimated error of 0.4 ± 0.5 incorrect single labels and 2.8 ± 2.1 incorrect complete analyses per 100 parses. Along with each word's analysis we wanted to provide part-of-speech (POS) tags. This was done by using a set of rules which determine the POS tag based on the morphological structure. We developed a prototype of a POS tagger which would assign the part-of-speech to a given morphological analysis based on a set of 34 rules. A summary of morphological analyses and words is given in Table 1. The rules are provided in Spiegler et al. (2010).

Category        # Analyses   # Word types
Verb            6965         4825
Noun            1437         1420
Relative        1042         988
Prepositional   969          951
Possessive      711          647
Copulative      558          545
Locative        380          379
Adverb          156          155
Modal           113          113
Demonstrative   63           61
Pronoun         38           31
Interjection    24           24
Presentative    15           15
Adjective       14           14
Conjunction     3            3
Total #         12488        10171

Table 1: Categories of labeled words.

POS tagging of sentences

In addition to the list of morphologically labeled words, we assigned parts-of-speech to a subset of 30,000 Zulu sentences. This task is straightforward if each word of a sentence only belongs to a single grammatical category. This was the case for 2595 sentences. For 431 sentences, however, we needed to disambiguate POS tags. We achieved this by analysing the left and right context of a word form and selecting the most probable part-of-speech from a given list of possible tags. The overall error is estimated at 3.1 ± 0.3 incorrect POS tags per 100 words for the 3,000 sentences we tagged. The summary statistics for raw and tagged sentences are shown in Table 2.

Dataset   # Sentences   # Word tokens   # Word types   # Words per sentence   Word length
Raw       29,424        288,106         87,154         9.79±6.74              7.49±2.91
Tagged    3,026         21,416          7,858          7.08±3.75              6.81±2.68

Table 2: Statistics of raw and POS-tagged sentences.

The Ukwabelana corpus - a resource description

The Ukwabelana corpus is three-fold:
1. It contains 10,000 morphologically labeled words and 3,000 POS-tagged sentences.
2. The corpus also comprises around 100,000 common Zulu word types and 30,000 Zulu sentences compiled from fictional works and the Zulu Bible, from which the labeled words and sentences have been sampled.
3. Furthermore, all software and additional data used during the annotation process is provided: the partial grammar in DCG format, the abductive algorithm for parsing with incomplete information and a prototype for a POS tagger which assigns word categories to morphologically analyzed words (see the sketch below).
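The following minimal Python sketch illustrates the kind of rule-based POS assignment the prototype performs; the two rules shown are hypothetical stand-ins for the actual 34-rule set, and only the labels <vr>, <n1>-<n15>, <nr>, <i1> and <past> from the labeling scheme are assumed.

```python
# Hypothetical rules mapping the label sequence of an analysis to a POS tag;
# the real prototype uses 34 such rules.
def pos_tag(analysis):
    """analysis: list of (morpheme, label) pairs, e.g. [("m", "n1"), ("fundi", "nr")]."""
    labels = [label for _, label in analysis]
    if "vr" in labels:                   # contains a verb root <vr>
        return "verb"
    if any(l.startswith("n") and l[1:].isdigit() for l in labels):
        return "noun"                    # noun prefix <n1>..<n15>
    return "unknown"

print(pos_tag([("m", "n1"), ("fundi", "nr")]))                     # -> noun
print(pos_tag([("ngi", "i1"), ("fund", "vr"), ("ile", "past")]))   # -> verb
```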
We are making these resources publicly available from http://www.cs.bris.ac.uk/Research/MachineLearning/Morphology/Resources/ so that they will be of benefit to any person doing research on Zulu or on computer-aided analysis of languages.

Conclusions and future work

In this paper, we have given an overview of the morphology of the language Zulu, which is spoken by 23% and understood by more than half of the South African population. As an indigenous language with a written history of 150 years which was only recognised as an official language in 1994, it is considerably under-resourced. We have spent considerable effort to compile the first open-source corpus of labeled and unlabeled words as well as POS-tagged and untagged sentences to promote research on this Bantu language. We have described the annotation process and the tools for compiling this corpus. We see this work as a first step in an ongoing effort to ultimately label the entire word and sentence corpus. Our future work includes further automation of the annotation process by extending the described abductive algorithm with a more sophisticated hypothesis evaluation and by combining syntactical and morphological information during the decision process. Our research interest also lies in the field of automatic grammar induction which will help to refine our partial grammar. Another aspect is interactive labeling where a linguistic expert directs the search of an online parsing algorithm by providing additional information. Apart from the benefits to language researchers, we foresee an application of the corpus by machine learners which can develop and train their algorithms for morphological analysis.

Figure 1: Process view of the annotation.

Ukwabelana means 'to share' in Zulu where the 'k' is pronounced voiced like a [g].

Acknowledgements

We would like to thank Etienne Barnard and the Human Language Technologies Research Group from the Meraka Institute for their support during this project. Furthermore, we want to acknowledge Johannes Magwaza, Bruno Golénia, Ksenia Shalonova and Roger Tucker. The research work was sponsored by EPSRC grant EP/E010857/1 Learning the morphology of complex synthetic languages and a grant from the NRF (S. Africa).

References

Aronoff. 1976. Word Formation in Generative Grammar. The MIT Press.
Beesley and Karttunen. 2003. Finite State Morphology. University of Chicago Press.
Beuchat. 1966. The Verb in Zulu. African Studies, 22:137-169.
Bosch and Eiselen. 2005. The Effectiveness of Morphological Rules for an isiZulu Spelling Checker. S. African Journal of African Lang., 25:25-36.
Botha and Barnard. 2005. Two Approaches to Gathering Text Corpora from the World Wide Web. 16th Ann. Symp. of the Pattern Recog. Ass. of S. Africa.
Canonici. 1996. Zulu Grammatical Structure. Zulu Lang. and Literature, University of Natal, Durban.
Colenso. 1905. Zulu-English Dictionary. Natal, Vause, Slatter & Co.
Cope. 1984. An Outline of Zulu Grammars. African Studies, 43(2):83-102.
Davey. 1984. Adjectives and Relatives in Zulu. S. African Journal of African Lang., 4:125-138.
Doke. 1927. Text Book of Zulu Grammar. Witwatersrand University Press.
Doke. 1935. Bantu Linguistic Terminology. Longman, Green and Co, London.
Doke. 1954. Handbook of African Lang., chapter The S.ern Bantu Lang. Oxford University Press.
Doke, Malcom and Sikakana. 1958. Zulu-English vocabulary. Witwatersrand Uni. Press.
Du Plessis. 1993. Linguistica: Festschrift EB van Wyk, chapter Inflection in Syntax, pp. 61-66. Van Schaik, Pretoria.
Flach. 1994. Simply Logical. John Wiley.
Gauton. 1990. Adjektiewe en Relatiewe in Zulu. Master's thesis, University of Pretoria.
Gauton. 1994. Towards the Recognition of a Word Class 'adjective' for Zulu. S. African Journal of African Lang., 14:62-71.
Gazdar and Mellish. 1989. Natural Language Processing in Prolog. Addison-Wesley.
Grout. 1859. The Isizulu: A Grammar Of The Zulu Lang. Kessinger Publishing.
Guthrie. 1971. Comparative Bantu: An Introduction to the Comparative Linguistics and Prehistory of the Bantu Lang. Gregg International Publishers, Farnborough.
Joubert, Zimu, Davel, and Barnard. 2004. A Framework for Bootstrapping Morphological Decomposition. Tech. report, CSIR/University of Pretoria, S. Africa.
Khumalo. 1992. African Linguistic Contributions, chapter The morphology of the direct relative in Zulu. Via Afrika.
Lanham. 1960. The Comparative Phonology of Nguni. Ph.D. thesis, Witwatersrand Uni., Jo'burg, S. Africa.
Lanham. 1971. The Noun as Deep-Structure Source for Nguni Adjectives and Relatives. African Studies, 30:294-311.
Lieber. 1980. On the Organization of the Lexicon. Ph.D. thesis, Massachusetts Institute of Technology.
Lieber. 1992. Deconstructing Morphology. The University of Chicago Press.
Louw. 1984. Word Categories in Southern Bantu. African Studies, 43(2):231-239.
Mbadi. 1988. Anthology of Articles on African Linguistics and Literature, chapter The Percolation Theory in Xhosa Morphology. Lexicon, Jo'burg.
Meinhof. 1906. Grundzüge einer Vergleichenden Grammatik der Bantusprachen. Reimer, Berlin.
Nkabinde. 1975. A Revision of the Word Categories in Zulu. Ph.D. thesis, University of S. Africa.
Posthumus. 1987. Relevancy and Applicability of Terminology Concerning the Essential Verb Categories in African Lang. Logos, 7:185-212.
Posthumus. 1988. Identifying Copulatives in Zulu and S.ern Sotho. S. African Journal of African Lang., 8:61-64.
Posthumus. 2000. The So-Called Adjective in Zulu. S. African Journal of African Lang., 20:148-158.
Poulos and Msimang. 1998. A Linguistic Analysis of Zulu. Via Afrika.
Pretorius and Bosch. 2003. Finite-State Computational Morphology: An Analyzer Prototype For Zulu. Machine Translation, 18:195-216.
Pretorius and Bosch. 2007. Containing Overgeneration in Zulu Computational Morphology. Proceedings of 3rd Lang. and Technology Conference, pp. 54-58, Poznan.
Pretorius and Bosch. 2009. Exploiting Cross-Linguistic Similarities in Zulu and Xhosa Computational Morphology. Workshop on Lang. Technologies for African Lang. (AfLaT), pp. 96-103.
Selkirk. 1982. The Syntax of Words. MIT Press.
Spiegler, Golenia, Shalonova, Flach, and Tucker. 2008. Learning the Morphology of Zulu with Different Degrees of Supervision. IEEE Workshop on Spoken Lang. Tech.
Spiegler, van der Spuy, and Flach. 2010. Additional material for the Ukwabelana Zulu corpus. Tech. report, University of Bristol, U.K.
van der Spuy. 2001. Grammatical Structure and Zulu Morphology. Ph.D. thesis, University of the Witwatersrand, Jo'burg, S. Africa.
van der Spuy. 2006. Wordhood in Zulu. S.ern African Linguistics and Applied Lang. Studies, 24(3):311-329.
Van Eeden. 1956. Zoeloe-Grammatika. Pro Ecclesia, Stellenbosch.
Van Wyk. 1958. Woordverdeling in Noord-Sotho en Zulu: 'n bydrae tot die vraagstuk van word-identifikasie in die Bantoetale. Ph.D. thesis, University of Pretoria.
Wilkes. 1971. Agtervoegsels van die werkwoord in Zulu. Ph.D. thesis, Rand Afrikaans University.
Ziervogel, Louw, and Taljaard. 1985. A Handbook of the Zulu Lang. Van Schaik, Pretoria.
252,091,127
Semantic Role Labeling for Sentiment Inference: A Case Study
In this paper, we evaluate in a case study whether semantic role labelling (SRL) can be reliably used for verb-based sentiment inference (SI). SI strives to identify polar relations (against, in-favour-of) between discourse entities. We took 300 sentences with 10 different verbs that show verb alternations or are ambiguous in order to find out if current SRL systems actually can assign the correct semantic roles and find the correct underlying predicates. Since in SI each verb reading comes with a particular polar profile, SRL is useful only if its analyses are consistent and reliable. We found that this is not (yet) given for German.
[ 2486369, 244119782, 202777610, 235097227, 222132943, 201668305, 14254034, 250164438, 9210201, 10401980, 34416957, 2085726 ]
Semantic Role Labeling for Sentiment Inference: A Case Study

Manfred Klenner (klenner@cl.uzh.ch) and Anne Göhring (goehring@cl.uzh.ch), Department of Computational Linguistics, University of Zurich

In this paper, we evaluate in a case study whether semantic role labelling (SRL) can be reliably used for verb-based sentiment inference (SI). SI strives to identify polar relations (against, in-favour-of) between discourse entities. We took 300 sentences with 10 different verbs that show verb alternations or are ambiguous in order to find out if current SRL systems actually can assign the correct semantic roles and find the correct underlying predicates. Since in SI each verb reading comes with a particular polar profile, SRL is useful only if its analyses are consistent and reliable. We found that this is not (yet) given for German.

Introduction

Sentiment Inference (SI) is the task of predicting opponents and proponents given a text. SI reveals how the writer conceptualises the world and how she perceives the discourse entities she refers to. Take for instance the sentence This government cheats the world. The writer tries to convey that the government is against the world and that it is - in the perspective of the writer - a negative actor and the world is the victim, which means that there is a negative effect on the world. We, thus, can talk about positive and negative actors, positive and negative effects, about negative (opponents) and positive (proponents) relations. We call these specifications the polar profile of a verb.

In (Klenner et al., 2017), we introduced a verb-based SI system that uses dependency labels in order to express such polar profiles. For instance, the subject of the verb cheat - if used in a factual sentence - is identified as indicating a negative actor, the filler of the direct object receives a negative effect, and a negative relation (against) between the two is cast. Even after normalization of dependency trees, e.g. by resolving passive voice, some problems remain, namely verb alternations and verb ambiguity, which will certainly lead to false analyses.

Verb alternation, among others, is given if a semantic role changes its syntactic host. As an example of an instrument-subject verb alternation, compare The police man killed the aggressor with a knife versus The knife killed the aggressor. For a dependency-based approach the police man and the knife are both the subjects although the police man is the agent and the knife is the instrument. There should be a negative polar relation between police man and aggressor, but not between knife and aggressor (a knife cannot be against somebody). If SRL was used instead of dependency parsing, the agent role would indicate the against relation while the instrument role would block such an inference 1 and thus might be a means to provide a general solution to this problem.

SRL could also be useful for verb sense disambiguation. Part of SRL is a step called predicate identification (Conia et al., 2021b), where a verb is mapped to a predicate frame covering the semantic roles of the underlying verb reading. Take as an example German bedauern which has a subject and a direct object. It could mean either feel sorry for as in Ich bedauere diese Menschen (I feel sorry for these people), or regret as illustrated by Ich bedauere den Vorfall (I regret the incident).
In the first case, there is an in-favour-of relation while in the second one the relation is against. In this example, it is not the semantic role that makes the difference in the first place, but the predicate identification (feel sorry for versus regret). In this paper, we describe a case study applying SRL to cases of verb alternations and verb ambiguity. For SRL to be applicable, it must hold that the identification of semantic roles is consistent given some verb and that predicate identification is reliable. We found both requirements are currently not given for German.

Verb Alternations and Verb Ambiguity

As a first step, we identified 10 German verbs 2 from our verb lexicon (Klenner and Amsler, 2016) that have verb alternations or are ambiguous. We focused on challenging cases where a verb has at least two semantic frames given a single dependency frame. Take the transitive (i.e. subject, object) and ambiguous verb verbessern which might mean improve or correct. In a dependency setting we just have the subjects and objects of the particular verb verbessern. In our current system we cannot distinguish the readings and, thus, only have one polar profile. But in fact we'd need two, one for each reading. So either verb disambiguation (which is not available for German) or SRL might do the trick. As an example of verb alternation take drohen (threaten), which has an instrument alternation:

(1) He is threatened with retribution

Only in (1) is there a polar relation (against) between the agent (He) and the recipient (him). In our case study we looked at the transitive versions of such cases: Er droht ihm versus Vergeltung droht ihm (a bit unusual word order, but correct). Again, in the dependency setting we have a single transitive verb with two inaccessible readings (threaten versus face). We semi-automatically extracted 300 sentences from a newspaper corpus where for each verb at least two different semantic frames were given. For instance for the verb drohen, we found 5 sentences with an actor as subject (one reading) and 8 with a theme as subject (the second reading). We applied InVeRo in the PropBank and the VerbAtlas mode and manually analysed the results. We will now introduce these tools.

2 See the appendix for the full verb list.

Semantic Role Labeling for German

We have tried to find SRL systems for German, but only InVeRo (Conia et al., 2021b) using VerbAtlas (Di Fabio et al., 2019) was available. It was not possible to install SRL-S2S 3 (Daza and Frank, 2019), and the DameSRL 4 system described in (Do et al., 2018a,b) has no predicate identification model for German, which is needed for a proper SRL. Another option was to train our own model. However, after we analysed the available resources, the CoNLL shared task description and data (Hajič et al., 2009), and the Universal Proposition Bank (Akbik et al., 2015), we skipped this idea. The German data from CoNLL is derived from Salsa (Erk et al., 2003), the German version of FrameNet. It came into existence by mapping FrameNet roles, which are very fine-grained, to more coarse-grained PropBank semantic roles (Palmer et al., 2005). However, the mapping procedure is hardly described and no quality control is reported. We do not know how much noise was introduced by this mapping.
In a footnote, Daza and Frank (2020) reflect on the difficulty of using heterogeneous SRL styles, above all for a crosslingual comparison, and comment that "annotations for German use a role inventory with roles A0-A9, and a one-to-one mapping to all English labels is not available". Also, after we analysed a few entries in the German Universal Propositions Bank 5, we had to recognise that this semi-automatically generated resource is too noisy. Training our own SRL model no longer was an option.

We, thus, carried out our experiments with InVeRo (Conia et al., 2021a). InVeRo is a multi-lingual SRL model that was trained on various languages including German. Given a (German) sentence, predicate identification yields an English (predicate) frame and the corresponding semantic roles. The frames are from VerbAtlas, a hand-crafted lexical-semantic resource that uses the verb synsets of BabelNet (Navigli and Ponzetto, 2010), a multilingual encyclopedic dictionary that covers 500 languages (actually the synsets of WordNet are used via BabelNet, which integrates WordNet). VerbAtlas frames specify a prototypical argument structure including implicit and so-called shadowed arguments (Conia et al., 2021a). Such a frame clusters verb meanings having similar semantics. Also selectional preferences (not restrictions) are formulated on the basis of WordNet synsets. In Figure 1 predicate identification maps the verb verurteilen to accuse and criticize. As a consequence, two different roles for the direct object become available, namely recipient and patient. The selectional preferences for the patient role of criticize are individual and social group. Although situation is not subsumed under either restriction, we get a result. The system is thus robust. However, sometimes restrictions seem to be taken seriously and no result appears. The sentence Sie kämpft für mehr Geld (She fights for more money) is correctly analysed. If we substitute Gerechtigkeit (justice) for Geld (money), no result is given, presumably since Gerechtigkeit is not subsumed under the restriction, which is entity.

Empirical Evaluation

We manually analysed the output of InVeRo for the 300 sentences. Three types of errors or problems can be distinguished:

• predicate identification (disambiguation) fails
• assigning different semantic roles given a single predicate
• assigning a particular semantic role to syntactically different phrases for the same verb (under a particular reading)

Why are these three points problematic in SI? As we have discussed on various examples, each verb reading has its own polar profile, thus it is crucial to find the right reading (problem 1). A polar profile assigns a directed polar relation (against, in-favour-of) to a verb as well as a holder role (e.g. the agent) and a target role (e.g. theme). That is, in order to specify these relations, the semantic roles of the holder and target roles must be known and they must be stable (not assigned to different roles), otherwise no lexical entry is possible (problem 2). If SRL assigns for a verb reading different roles and role pairings, it is unclear how to anchor the relation correctly. Finally, SRL is syntax-agnostic (problem 3): the same semantic role of a verb might be assigned to different syntactic phrases, thereby possibly collapsing verb readings. In the examples (3) and (4) both sentences (according to VerbAtlas 6) have a theme role. In sentence (3) it is realized as a to-infinitive, in sentence (4) as a prepositional phrase (PP).
(3) Er [agent] droht [verb] zu scheitern [to-infinitive theme] 'He is in danger to fail'
(4) Er [agent] droht [verb] mit Konsequenzen [PP theme] 'He threatens consequences'

As a consequence, these two verb readings would have the same semantic role frame. However, their polar profiles differ. Sentence (3) casts a negative effect on the experiencer (He), while in (4) there is a negative actor, but no negative effect. SRL is not helpful in these cases; it also collapses readings (danger, threatens).

Predicate identification failure is most problematic. In the examples above, both (3) and (4) get the same predicate assigned: guarantee/ensure/promise 7. However, only sentence (4) is an instance of this predicate. This problem becomes clearer in our case study if we quantify the number of predicates and predicate frames 8 that were chosen by InVeRo per verb (see the last line of Table 2 in the appendix). For PropBank a verb is, on average, mapped to 1.55 predicates, and 3.7 different frames, i.e. pairings of semantic roles, per predicate are used. For VerbAtlas it is 2.75 and 4.5, respectively. Ideally, only one mapping would be given: a verb maps to one or more predicates, and each predicate has a stable subcategorization frame (expressed with semantic roles). If this was the case, we could assign a single polar profile to a particular verb reading.

Table 1 shows the mappings for bedauern. In the first column the feel-sorry-for reading is given. Here we have a single mapping, both with respect to PropBank (DE) and VerbAtlas style (VA). However, in the second column, the regret reading, PropBank mode shows a variation in the assignment of semantic roles (A0,A1 versus A0,A3). The VerbAtlas analysis is even more confusing. Here three predicates are identified and within the same predicate (e.g. REGRET_SORRY), different roles and role pairings are present. We carried out an error analysis in order to find out how many of the 38 sentences with bedauern are wrongly analysed either by choosing the wrong predicate or the wrong semantic role pairing (the subcategorization frame): 7 cases (18.5%) are clearly wrong, 8 cases are hard to decide. Not in every case does the usage of bedauern actually involve a (real) regret. Sometimes it is used in a more formal way in order to express dislike (as suggested by InVeRo): without context this cannot be resolved reliably (some of the 8 cases are of that type). But nevertheless, even if InVeRo is sometimes right to map a verb to more than one predicate, the diversity of suggested solutions makes it impossible to carry out SI in a lexicon-based way: the necessary mapping from a single polar profile of a verb to some VerbAtlas representation in a one-to-many fashion is bound to produce errors, as our little error analysis with bedauern reveals. Also, although in principle assigning semantic roles depending on the filler object is a desirable solution, if it comes in such an unpredictably diverse way, a lexicon-based approach cannot make use of it. The problem is not negligible, since the diversity of semantic role pairings for different VerbAtlas predicates is high. The numbers at the end of the role pairings (in square brackets) in Table 1 indicate the frequency of a pairing. For instance, DISLIKE (Agent,Theme) was assigned 2 times, DISLIKE (Experiencer,Stimulus) 4 times. The statistics we have gathered on the diversity of predicate and frame mappings coming with InVeRo make it superfluous to have a full-fledged error analysis for all 300 sentences (like we did for bedauern). The InVeRo results are just too diverse to be useful (see Table 2 in the appendix).

In the course of our case study, we have noticed that there is a correlation between the (non)animacy of role fillers and different verb readings. Actually, all examples in this paper could be analysed correctly by taking (non)animacy into account: compare e.g. er bedauert sie (he feels sorry for her) with er bedauert den Vorfall (he regrets the incident). We have trained an animacy classifier (Klenner and Göhring, 2022) and are about to apply it to the small data set of 300 sentences. To sketch the idea: depending on the animacy of the filler of a dependency label of a verb, different polar profiles become available.
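A minimal Python sketch of this idea follows (the lexicon entries and the animacy oracle below are our illustrative assumptions, not the actual resources): the polar profile of a transitive verb is selected by the animacy of the filler of its direct-object dependency label, and the chosen profile then yields the polar relation between subject and object.

```python
# Hypothetical polar-profile lexicon: (verb, object animacy) -> (reading, relation).
PROFILES = {
    ("bedauern", "animate"):   ("feel-sorry-for", "in-favour-of"),
    ("bedauern", "inanimate"): ("regret", "against"),
}

# Stand-in for the trained animacy classifier (Klenner and Göhring, 2022).
ANIMATE = {"sie", "menschen"}

def polar_relation(verb, subj, obj):
    animacy = "animate" if obj.lower() in ANIMATE else "inanimate"
    reading, relation = PROFILES[(verb, animacy)]
    return f"{subj} --{relation}--> {obj}   [{reading}]"

print(polar_relation("bedauern", "er", "sie"))          # in-favour-of
print(polar_relation("bedauern", "er", "den Vorfall"))  # against
```

Unlike the one-to-many verb-to-predicate mappings produced by the SRL system, such a lexicon keyed on dependency label plus animacy stays deterministic, which is what a lexicon-based SI system needs.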
Related Work

Sentiment inference is sometimes called sentiment propagation and opinion implicature. It also shares similarities with fine-grained opinion analysis (Marasović and Frank, 2018a). Our positive/negative effects are comparable to the GoodFor/BadFor distinction of (Choi and Wiebe, 2014). However, we also distinguish positive/negative actors. In (Deng and Wiebe, 2014), a sophisticated rule-based system was introduced that specifies general inference rules on the basis of GoodFor/BadFor effects. Approaches exist that claim that the combination of SRL and Opinion Role Labeling, i.e. the identification of opinion holder and target, is beneficial; e.g. in (Marasović and Frank, 2018b) a multi-task learning-based joint model is introduced.

Conclusion

German Semantic Role Labeling does not provide a suitable solution for our task: German sentiment inference based on polar profiles of verb readings. With InVeRo, lexicon design is difficult since (too) many verb-predicate mappings and role pairings occur. InVeRo is only partially able to deal with the - admittedly - difficult cases of verb alternations and verb ambiguity. Instead of SRL, a combination of dependency parsing and animacy detection might be useful for the task at hand. We are currently evaluating such a disambiguation strategy for sentiment inference.

Table 2: Number of predicates (pr), frames (fr) and frames per predicate (fr/pr) the SRL assigned to example sentences of the listed 10 pairs of verb profiles (each verb has 2 profiles). Average (avg) over all profiles (macro = micro). The German PropBank scheme (DE) seems to assign fewer different predicates per verb profile than the VerbAtlas (VA) scheme (1.55 compared to 2.75), though with proportionally more frames (fr/pr = 2.43).

Figure 1: InVeRo's predicate identification for two German sentences with the verb verurteilen, and their corresponding semantic role frames ('He accuses the man' versus 'He criticizes the situation'). Semantic roles are either in PropBank style or following VerbNet nomenclature (25 roles like agent, patient, etc.) (Kipper Schuler et al., 2009).

Table 1: Different predicates and roles for the verb 'bedauern' according to two readings: feel-sorry-for and regret. In square brackets are the numbers of sentences labeled with the given semantic roles.

3 https://github.com/Heidelberg-NLP/SRL-S2S
4 https://liir.cs.kuleuven.be/software_pages/damesrl.php
5 http://alanakbik.github.io/UniversalPropositions_German
6 https://verbatlas.org, accessed 2022-06-03.
7 Predicates in VerbAtlas are sometimes specified with reference to more than one label.
8 'Frame' here refers to role pairings.
References

Alan Akbik, Laura Chiticariu, Marina Danilevsky, Yunyao Li, Shivakumar Vaithyanathan, and Huaiyu Zhu. 2015. Generating high quality proposition Banks for multilingual semantic role labeling. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 397-407, Beijing, China. ACL.
Yoonjung Choi and Janyce Wiebe. 2014. +/-EffectWordNet: Sense-level lexicon acquisition for opinion inference. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1181-1191.
Simone Conia, Andrea Bacciu, and Roberto Navigli. 2021a. Unifying cross-lingual semantic role labeling with heterogeneous linguistic resources. In Proceedings of the 2021 Conference of the North American Chapter of the ACL: Human Language Technologies, pages 338-351.
Simone Conia, Riccardo Orlando, Fabrizio Brignone, Francesco Cecconi, and Roberto Navigli. 2021b. InVeRo-XL: Making cross-lingual Semantic Role Labeling accessible with intelligible verbs and roles. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 319-328, Online and Punta Cana, Dominican Republic. ACL.
Angel Daza and Anette Frank. 2019. Translate and label! An encoder-decoder approach for cross-lingual semantic role labeling. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 603-615, Hong Kong, China.
Angel Daza and Anette Frank. 2020. X-SRL: A parallel cross-lingual semantic role labeling dataset. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3904-3914, Online. ACL.
Lingjia Deng and Janyce Wiebe. 2014. Sentiment propagation via implicature constraints. In Meeting of the European Chapter of the Association for Computational Linguistics (EACL-2014).
Andrea Di Fabio, Simone Conia, and Roberto Navigli. 2019. VerbAtlas: a novel large-scale verbal semantic resource and its application to semantic role labeling. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 627-637, Hong Kong, China. ACL.
Quynh Ngoc Thi Do, Artuur Leeuwenberg, Geert Heyman, and Marie-Francine Moens. 2018a. A flexible and easy-to-use semantic role labeling framework for different languages. In Proceedings of the 27th International Conference on Computational Linguistics: System Demonstrations, pages 161-165, Santa Fe, New Mexico. ACL.
Quynh Ngoc Thi Do, Artuur Leeuwenberg, Geert Heyman, and Marie-Francine Moens. 2018b. How to use DameSRL: A framework for deep multilingual semantic role labeling. In Proceedings of the CLARIN Annual Conference, pages 159-162, Pisa, Italy.
Katrin Erk, Andrea Kowalski, Sebastian Padó, and Manfred Pinkal. 2003. Towards a resource for lexical semantics: A large German corpus with extensive semantic annotation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 537-544, Sapporo, Japan. ACL.
Jan Hajič, Massimiliano Ciaramita, Richard Johansson, Daisuke Kawahara, Maria Antònia Martí, Lluís Màrquez, Adam Meyers, Joakim Nivre, Sebastian Padó, Jan Štěpánek, Pavel Straňák, Mihai Surdeanu, Nianwen Xue, and Yi Zhang. 2009. The CoNLL-2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proceedings of the 13th Conference on Computational Natural Language Learning (CoNLL 2009), pages 1-18, Boulder, Colorado. ACL.
Karin Kipper Schuler, Anna Korhonen, and Susan Brown. 2009. VerbNet overview, extensions, mappings and applications. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Tutorial Abstracts, pages 13-14, Boulder, Colorado. ACL.
Manfred Klenner and Michael Amsler. 2016. Sentiframes: A resource for verb-centered German sentiment inference. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Paris, France. European Language Resources Association (ELRA).
Manfred Klenner, Simon Clematide, and Don Tuggener. 2017. Verb-mediated composition of attitude relations comprising reader and writer perspective. In 18th International Conference on Computational Linguistics and Intelligent Text Processing. ResearchBib.
Manfred Klenner and Anne Göhring. 2022. Animacy denoting German nouns: Annotation and classification. In Proceedings of the Language Resources and Evaluation Conference, pages 1360-1364, Marseille, France. European Language Resources Association (ELRA).
Ana Marasović and Anette Frank. 2018a. SRL4ORL: Improving opinion role labeling using multi-task learning with semantic role labeling. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 583-594, New Orleans, Louisiana. ACL.
Ana Marasović and Anette Frank. 2018b. SRL4ORL: Improving opinion role labeling using multi-task learning with semantic role labeling. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics, Volume 1, pages 583-594, New Orleans, Louisiana. ACL.
Roberto Navigli and Simone Paolo Ponzetto. 2010. BabelNet: Building a very large multilingual semantic network. In Proceedings of the 48th Annual Meeting of the ACL, pages 216-225, Uppsala, Sweden.
Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71-106.
Janyce Wiebe and Lingjia Deng. 2014. An account of opinion implicatures. arXiv:1404.6491.
8,888,540
Feature Noising for Log-linear Structured Prediction
NLP models have many and sparse features, and regularization is key for balancing model overfitting versus underfitting. A recently repopularized form of regularization is to generate fake training data by repeatedly adding noise to real data. We reinterpret this noising as an explicit regularizer, and approximate it with a second-order formula that can be used during training without actually generating fake data. We show how to apply this method to structured prediction using multinomial logistic regression and linear-chain CRFs. We tackle the key challenge of developing a dynamic program to compute the gradient of the regularizer efficiently. The regularizer is a sum over inputs, so we can estimate it more accurately via a semi-supervised or transductive extension. Applied to text classification and NER, our method provides a >1% absolute performance gain over use of standard L2 regularization.
[ 10977241, 2433417 ]
Feature Noising for Log-linear Structured Prediction

Sida I. Wang (sidaw@cs.stanford.edu), Mengqiu Wang (mengqiu@cs.stanford.edu), Stefan Wager (swager@stanford.edu), Percy Liang (pliang@cs.stanford.edu), and Christopher D. Manning (manning@cs.stanford.edu)
Department of Statistics and Department of Computer Science, Stanford University, Stanford, CA 94305, USA

In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Seattle, Washington, USA, 18-21 October 2013. Association for Computational Linguistics.

* Both authors contributed equally to the paper.

NLP models have many and sparse features, and regularization is key for balancing model overfitting versus underfitting. A recently repopularized form of regularization is to generate fake training data by repeatedly adding noise to real data. We reinterpret this noising as an explicit regularizer, and approximate it with a second-order formula that can be used during training without actually generating fake data. We show how to apply this method to structured prediction using multinomial logistic regression and linear-chain CRFs. We tackle the key challenge of developing a dynamic program to compute the gradient of the regularizer efficiently. The regularizer is a sum over inputs, so we can estimate it more accurately via a semi-supervised or transductive extension. Applied to text classification and NER, our method provides a >1% absolute performance gain over use of standard L2 regularization.

Introduction

NLP models often have millions of mainly sparsely attested features. As a result, balancing overfitting versus underfitting through good weight regularization remains a key issue for achieving optimal performance. Traditionally, L2 or L1 regularization is employed, but these simple types of regularization penalize all features in a uniform way without taking into account the properties of the actual model. An alternative approach to regularization is to generate fake training data by adding random noise to the input features of the original training data. Intuitively, this can be thought of as simulating missing features, whether due to typos or use of a previously unseen synonym. The effectiveness of this technique is well-known in machine learning (Abu-Mostafa, 1990; Burges and Schölkopf, 1997; Simard et al., 2000; Rifai et al., 2011a; van der Maaten et al., 2013), but working directly with many corrupted copies of a dataset can be computationally prohibitive. Fortunately, feature noising ideas often lead to tractable deterministic objectives that can be optimized directly. Sometimes, training with corrupted features reduces to a special form of regularization (Matsuoka, 1992; Bishop, 1995; Rifai et al., 2011b; Wager et al., 2013). For example, Bishop (1995) showed that training with features that have been corrupted with additive Gaussian noise is equivalent to a form of L2 regularization in the low noise limit. In other cases it is possible to develop a new objective function by marginalizing over the artificial noise (Wang and Manning, 2013; van der Maaten et al., 2013). The central contribution of this paper is to show how to efficiently simulate training with artificially noised features in the context of log-linear structured prediction, without actually having to generate noised data.
We focus on dropout noise (Hinton et al., 2012), a recently popularized form of artificial feature noise where a random subset of features is omitted independently for each training example. Dropout and its variants have been shown to outperform L2 regularization on various tasks (Hinton et al., 2012; Wang and Manning, 2013; Wan et al., 2013). Dropout is similar in spirit to feature bagging in the deliberate removal of features, but performs the removal in a preset way rather than randomly (Bryll et al., 2003; Sutton et al., 2005; Smith et al., 2005).

Our approach is based on a second-order approximation to feature noising developed among others by Bishop (1995) and Wager et al. (2013), which allows us to convert dropout noise into a form of adaptive regularization. This method is suitable for structured prediction in log-linear models where second derivatives are computable. In particular, it can be used for multiclass classification with maximum entropy models (a.k.a., softmax or multinomial logistic regression) and for the sequence models that are ubiquitous in NLP, via linear chain Conditional Random Fields (CRFs).

For linear chain CRFs, we additionally show how we can use a noising scheme that takes advantage of the clique structure so that the resulting noising regularizer can be computed in terms of the pairwise marginals. A simple forward-backward-type dynamic program can then be used to compute the gradient tractably. For ease of implementation and scalability to semi-supervised learning, we also outline an even faster approximation to the regularizer. The general approach also works in other clique structures in addition to the linear chain when the clique marginals can be computed efficiently.

Finally, we extend feature noising for structured prediction to a transductive or semi-supervised setting. The regularizer induced by feature noising is label-independent for log-linear models, and so we can use unlabeled data to learn a better regularizer. NLP sequence labeling tasks are especially well suited to a semi-supervised approach, as input features are numerous but sparse, and labeled data is expensive to obtain but unlabeled data is abundant (Li and McCallum, 2005; Jiao et al., 2006). Wager et al. (2013) showed that semi-supervised dropout training for logistic regression captures a similar intuition to techniques such as entropy regularization (Grandvalet and Bengio, 2005) and transductive SVMs (Joachims, 1999), which encourage confident predictions on the unlabeled data. Semi-supervised dropout has the advantage of only using the predicted label probabilities on the unlabeled data to modulate an L2 regularizer, rather than requiring more heavy-handed modeling of the unlabeled data as in entropy regularization or expectation regularization (Mann and McCallum, 2007).

In experimental results, we show that simulated feature noising gives more than a 1% absolute boost in performance over L2 regularization, on both text classification and an NER sequence labeling task.

Figure 1: An illustration of dropout feature noising in linear-chain CRFs with only transition features and node features. The green squares are node features f(y_t, x_t), and the orange squares are edge features f(y_{t-1}, y_t). Conceptually, given a training example, we sample some features to ignore (generate fake data) and make a parameter update. Our goal is to train with a roughly equivalent objective, without actually sampling.
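To make the sampling view in Figure 1 concrete, here is a minimal NumPy sketch (ours, not the authors' code) of the mean-preserving dropout corruption used throughout: each feature is zeroed with probability delta and the survivors are rescaled by 1/(1 - delta), so that the corrupted vector equals the original in expectation.

```python
import numpy as np

def dropout_corrupt(f, delta, rng):
    """Mean-preserving dropout: zero each feature with prob. delta,
    scale the survivors by 1/(1 - delta) so that E[f_tilde] = f."""
    keep = rng.random(f.shape) >= delta
    return f * keep / (1.0 - delta)

rng = np.random.default_rng(0)
f = np.array([1.0, 0.0, 2.0, 1.0])  # a (tiny) sparse feature vector
samples = np.stack([dropout_corrupt(f, 0.3, rng) for _ in range(10000)])
print(samples.mean(axis=0))          # close to f: the noise is mean-preserving
```

Training on such samples is what the paper avoids doing explicitly; the next section derives the deterministic regularizer that stands in for it.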
in performance over L2 regularization, on both text classification and an NER sequence labeling task.

Feature Noising in Log-linear Models
Consider the standard structured prediction problem of mapping some input $x \in \mathcal{X}$ (e.g., a sentence) to an output $y \in \mathcal{Y}$ (e.g., a tag sequence). Let $f(y, x) \in \mathbb{R}^d$ be the feature vector, $\theta \in \mathbb{R}^d$ be the weight vector, and $s = (s_1, \dots, s_{|\mathcal{Y}|})$ be a vector of scores for each output, with $s_y = f(y, x) \cdot \theta$. Now define a log-linear model:

$$p(y \mid x; \theta) = \exp\{s_y - A(s)\}, \qquad (1)$$

where $A(s) = \log \sum_y \exp\{s_y\}$ is the log-partition function. Given an example $(x, y)$, parameter estimation corresponds to choosing $\theta$ to maximize $p(y \mid x; \theta)$.

The key idea behind feature noising is to artificially corrupt the feature vector $f(y, x)$ randomly into some $\tilde f(y, x)$ and then maximize the average log-likelihood of $y$ given these corrupted features; the motivation is to choose predictors $\theta$ that are robust to noise (missing words, for example). Let $\tilde s$, $\tilde p(y \mid x; \theta)$ be the randomly perturbed versions corresponding to $\tilde f(y, x)$. We will also assume the feature noising preserves the mean: $\mathbb{E}[\tilde f(y, x)] = f(y, x)$, so that $\mathbb{E}[\tilde s] = s$. This can always be done by scaling the noised features as described in the list of noising schemes.

It is useful to view feature noising as a form of regularization. Since feature noising preserves the mean, the feature noising objective can be written as the sum of the original log-likelihood plus the difference in log-normalization constants:

$$\mathbb{E}[\log \tilde p(y \mid x; \theta)] = \mathbb{E}[\tilde s_y - A(\tilde s)] \qquad (2)$$
$$= \log p(y \mid x; \theta) - R(\theta, x), \qquad (3)$$
$$R(\theta, x) \overset{\text{def}}{=} \mathbb{E}[A(\tilde s)] - A(s). \qquad (4)$$

Since $A(\cdot)$ is convex, $R(\theta, x)$ is always positive by Jensen's inequality and can therefore be interpreted as a regularizer. Note that $R(\theta, x)$ is in general non-convex.

Computing the regularizer (4) requires summing over all possible noised feature vectors, which can imply exponential effort in the number of features. This is intractable even for flat classification. Following Bishop (1995) and Wager et al. (2013), we approximate $R(\theta, x)$ using a second-order Taylor expansion, which will allow us to work with only means and covariances of the noised features. We take a quadratic approximation of the log-partition function $A(\cdot)$ of the noised score vectors around the unnoised score vector $s$:

$$A(\tilde s) \simeq A(s) + \nabla A(s)^\top (\tilde s - s) + \tfrac{1}{2} (\tilde s - s)^\top \nabla^2 A(s) (\tilde s - s). \qquad (5)$$

Plugging (5) into (4), we obtain a new regularizer $R^q(\theta, x)$, which we will use as an approximation to $R(\theta, x)$:

$$R^q(\theta, x) = \tfrac{1}{2} \mathbb{E}\left[(\tilde s - s)^\top \nabla^2 A(s) (\tilde s - s)\right] \qquad (6)$$
$$= \tfrac{1}{2} \operatorname{tr}\left(\nabla^2 A(s) \operatorname{Cov}(\tilde s)\right). \qquad (7)$$

This expression still has two sources of potential intractability, a sum over an exponential number of noised score vectors $\tilde s$ and a sum over the $|\mathcal{Y}|$ components of $\tilde s$.

Multiclass classification
If we assume that the components of $\tilde s$ are independent, then $\operatorname{Cov}(\tilde s) \in \mathbb{R}^{|\mathcal{Y}| \times |\mathcal{Y}|}$ is diagonal, and we have

$$R^q(\theta, x) = \tfrac{1}{2} \sum_{y \in \mathcal{Y}} \mu_y (1 - \mu_y) \operatorname{Var}[\tilde s_y], \qquad (8)$$

where the mean $\mu_y \overset{\text{def}}{=} p_\theta(y \mid x)$ is the model probability, the variance $\mu_y(1 - \mu_y)$ measures model uncertainty, and

$$\operatorname{Var}[\tilde s_y] = \theta^\top \operatorname{Cov}[\tilde f(y, x)] \, \theta \qquad (9)$$

measures the uncertainty caused by feature noising. (Here, we are using the fact that the first and second derivatives of the log-partition function are the mean and variance.) The regularizer $R^q(\theta, x)$ involves the product of two variance terms; the first is non-convex in $\theta$ and the second is quadratic in $\theta$. Note that to reduce the regularization, we will favor models that (i) predict confidently and (ii) have stable scores in the presence of feature noise. For multiclass classification, we can explicitly sum over each $y \in \mathcal{Y}$ to compute the regularizer, but this will be intractable for structured prediction.
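To make (8) concrete, here is a minimal NumPy sketch (function and variable names are ours, not the paper's implementation) for the multiclass case with one weight vector per class, anticipating the dropout-noise variance derived in the next subsection:

```python
import numpy as np

def quadratic_noising_regularizer(theta, x_feats, score_var):
    """R^q(theta, x) = 1/2 * sum_y mu_y (1 - mu_y) Var[s~_y]   (eq. 8)
    theta:     (K, d) per-class weights
    x_feats:   (d,) shared feature vector g(x)
    score_var: (K,) Var[s~_y], supplied by the noising scheme."""
    s = theta @ x_feats
    mu = np.exp(s - s.max())
    mu /= mu.sum()                      # model probabilities mu_y
    return 0.5 * np.sum(mu * (1.0 - mu) * score_var)

def dropout_score_variance(theta, x_feats, delta=0.5):
    """Var[s~_y] under mean-preserving dropout noise:
    sum_j g_j(x)^2 * delta/(1-delta) * theta_{yj}^2."""
    return (delta / (1.0 - delta)) * ((x_feats ** 2) * (theta ** 2)).sum(axis=1)
```

In training, this penalty is subtracted from the log-likelihood of each example, optionally scaled by a strength hyperparameter, following (3).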
To specialize to multiclass classification for the moment, let us assume that we have a separate weight vector for each output $y$ applied to the same feature vector $g(x)$; that is, the score $s_y = \theta_y \cdot g(x)$. Further, assume that the components of the noised feature vector $\tilde g(x)$ are independent. Then we can simplify (9) to the following:

$$\operatorname{Var}[\tilde s_y] = \sum_j \operatorname{Var}[\tilde g_j(x)] \, \theta_{yj}^2. \qquad (10)$$

Noising schemes
We now give some examples of possible noise schemes for generating $\tilde f(y, x)$ given the original features $f(y, x)$. This distribution affects the regularization through the variance term $\operatorname{Var}[\tilde s_y]$.
• Additive Gaussian: $\tilde f(y, x) = f(y, x) + \varepsilon$, where $\varepsilon \sim \mathcal{N}(0, \sigma^2 I_{d \times d})$. In this case, the contribution to the regularizer from noising is $\operatorname{Var}[\tilde s_y] = \sum_j \sigma^2 \theta_{yj}^2$.
• Dropout: $\tilde f(y, x) = f(y, x) \odot z$, where $\odot$ takes the elementwise product of two vectors. Here, $z$ is a vector with independent components which has $z_i = 0$ with probability $\delta$ and $z_i = \frac{1}{1-\delta}$ with probability $1 - \delta$. In this case, $\operatorname{Var}[\tilde s_y] = \sum_j g_j(x)^2 \frac{\delta}{1-\delta} \theta_{yj}^2$.
• Multiplicative Gaussian: $\tilde f(y, x) = f(y, x) \odot (1 + \varepsilon)$, where $\varepsilon \sim \mathcal{N}(0, \sigma^2 I_{d \times d})$. Here, $\operatorname{Var}[\tilde s_y] = \sum_j g_j(x)^2 \sigma^2 \theta_{yj}^2$. Note that under our second-order approximation $R^q(\theta, x)$, the multiplicative Gaussian and dropout schemes are equivalent, but they differ under the original regularizer $R(\theta, x)$.

Semi-supervised learning
A key observation (Wager et al., 2013) is that the noising regularizer $R$ (8), while involving a sum over examples, is independent of the output $y$. This suggests estimating $R$ using unlabeled data. Specifically, if we have $n$ labeled examples $D = \{x_1, x_2, \dots, x_n\}$ and $m$ unlabeled examples $D_{\text{unlabeled}} = \{u_1, u_2, \dots, u_m\}$, then we can define a regularizer that is a linear combination of the regularizers estimated on both datasets, with $\alpha$ tuning the tradeoff between the two:

$$R^*(\theta, D, D_{\text{unlabeled}}) \overset{\text{def}}{=} \frac{n}{n + \alpha m} \left( \sum_{i=1}^{n} R(\theta, x_i) + \alpha \sum_{i=1}^{m} R(\theta, u_i) \right). \qquad (11)$$

Feature Noising in Linear-Chain CRFs
So far, we have developed a regularizer that works for all log-linear models, but, in its current form, is only practical for multiclass classification. We now exploit the decomposable structure in CRFs to define a new noising scheme which does not require us to explicitly sum over all possible outputs $y \in \mathcal{Y}$. The key idea will be to noise each local feature vector (which implicitly affects many $y$) rather than noise each $y$ independently.

Assume that the output $y = (y_1, \dots, y_T)$ is a sequence of $T$ tags. In linear-chain CRFs, the feature vector $f$ decomposes into a sum of local feature vectors $g_t$:

$$f(y, x) = \sum_{t=1}^{T} g_t(y_{t-1}, y_t, x), \qquad (12)$$

where $g_t(a, b, x)$ is defined on a pair of consecutive tags $a, b$ for positions $t-1$ and $t$.

Rather than working with a score $s_y$ for each $y \in \mathcal{Y}$, we define a collection of local scores $s = \{s_{a,b,t}\}$, for each tag pair $(a, b)$ and position $t = 1, \dots, T$. We consider noising schemes which independently set $\tilde g_t(a, b, x)$ for each $a, b, t$. Let $\tilde s = \{\tilde s_{a,b,t}\}$ be the corresponding collection of noised scores. We can write the log-partition function of these local scores as follows:

$$A(s) = \log \sum_{y \in \mathcal{Y}} \exp\left\{ \sum_{t=1}^{T} s_{y_{t-1}, y_t, t} \right\}. \qquad (13)$$

The first derivative yields the edge marginals under the model, $\mu_{a,b,t} = p_\theta(y_{t-1} = a, y_t = b \mid x)$, and the diagonal elements of the Hessian $\nabla^2 A(s)$ yield the marginal variances.
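Since the regularizer below and its gradient are built from these edge marginals, a compact sketch of how they are obtained may help. This is standard forward-backward inference in log space, with our own naming conventions and the simplifying assumption that any unary scores have been folded into the pairwise ones:

```python
import numpy as np
from scipy.special import logsumexp

def edge_marginals(pair_scores):
    """mu[t, a, b] = p(y_t = a, y_{t+1} = b | x) for a linear chain with
    pairwise log-potentials pair_scores[t, a, b], t = 0..T-2."""
    Tm1, K, _ = pair_scores.shape       # T-1 transitions
    alpha = np.zeros((Tm1 + 1, K))      # forward log-messages
    beta = np.zeros((Tm1 + 1, K))       # backward log-messages
    for t in range(Tm1):
        alpha[t + 1] = logsumexp(alpha[t][:, None] + pair_scores[t], axis=0)
    for t in range(Tm1 - 1, -1, -1):
        beta[t] = logsumexp(pair_scores[t] + beta[t + 1][None, :], axis=1)
    log_z = logsumexp(alpha[-1])        # log-partition function A(s)
    # log mu[t,a,b] = alpha[t,a] + score[t,a,b] + beta[t+1,b] - log Z
    mu = np.exp(alpha[:-1, :, None] + pair_scores
                + beta[1:, None, :] - log_z)
    return mu                           # (T-1, K, K); each slice sums to 1
```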
Now, following (7) and (8), we obtain the following regularizer:

$$R^q(\theta, x) = \tfrac{1}{2} \sum_{a,b,t} \mu_{a,b,t} (1 - \mu_{a,b,t}) \operatorname{Var}[\tilde s_{a,b,t}], \qquad (14)$$

where $\mu_{a,b,t}(1 - \mu_{a,b,t})$ measures model uncertainty about edge marginals, and $\operatorname{Var}[\tilde s_{a,b,t}]$ is simply the uncertainty due to noising. Again, minimizing the regularizer means making confident predictions and having stable scores under feature noise.

Computing partial derivatives
So far, we have defined the regularizer $R^q(\theta, x)$ based on feature noising. In order to minimize $R^q(\theta, x)$, we need to take its derivative. First, note that $\log \mu_{a,b,t}$ is the difference of a restricted log-partition function and the log-partition function. So again by properties of its first derivative, we have:

$$\nabla \log \mu_{a,b,t} = \mathbb{E}_{p_\theta(y \mid x, y_{t-1}=a, y_t=b)}[f(y, x)] - \mathbb{E}_{p_\theta(y \mid x)}[f(y, x)]. \qquad (15)$$

Using the fact that $\nabla \mu_{a,b,t} = \mu_{a,b,t} \nabla \log \mu_{a,b,t}$ and the fact that $\operatorname{Var}[\tilde s_{a,b,t}]$ is a quadratic function in $\theta$, we can simply apply the product rule to derive the final gradient $\nabla R^q(\theta, x)$.

A Dynamic Program for the Conditional Expectation
A naive computation of the gradient $\nabla R^q(\theta, x)$ requires a full forward-backward pass to compute $\mathbb{E}_{p_\theta(y \mid y_{t-1}=a, y_t=b, x)}[f(y, x)]$ for each tag pair $(a, b)$ and position $t$, resulting in an $O(K^4 T^2)$ time algorithm. In this section, we reduce the running time to $O(K^2 T)$ using a more intricate dynamic program. By the Markov property of the CRF, $y_{1:t-2}$ only depends on $(y_{t-1}, y_t)$ through $y_{t-1}$, and $y_{t+1:T}$ only depends on $(y_{t-1}, y_t)$ through $y_t$.

First, it will be convenient to define the partial sum of the local feature vector from positions $i$ to $j$ as follows:

$$G_{i:j} = \sum_{t=i}^{j} g_t(y_{t-1}, y_t, x). \qquad (16)$$

Consider the task of computing the feature expectation $\mathbb{E}_{p_\theta(y \mid y_{t-1}=a, y_t=b)}[f(y, x)]$ for a fixed $(a, b, t)$. We can expand this quantity into $\sum_{y: y_{t-1}=a, y_t=b} p_\theta(y_{-(t-1:t)} \mid y_{t-1}=a, y_t=b) \, G_{1:T}$. Conditioning on $y_{t-1}, y_t$ decomposes the quantity into three pieces, $g_t(y_{t-1}=a, y_t=b, x) + F_t^a + B_t^b$, where

$$F_t^a = \sum_{y_{1:t-2}} p_\theta(y_{1:t-2} \mid y_{t-1}=a) \, G_{1:t-1}, \qquad (17)$$
$$B_t^b = \sum_{y_{t+1:T}} p_\theta(y_{t+1:T} \mid y_t=b) \, G_{t+1:T}, \qquad (18)$$

are the expected feature vectors summed over the prefix and suffix of the tag sequence, respectively. Note that $F_t^a$ and $B_t^b$ are analogous to the forward and backward messages of standard CRF inference, with the exception that they are vectors rather than scalars. We can compute these messages recursively in the standard way. The forward recurrence is

$$F_t^a = \sum_b p_\theta(y_{t-2}=b \mid y_{t-1}=a) \left[ g_{t-1}(y_{t-2}=b, y_{t-1}=a, x) + F_{t-1}^b \right],$$

and a similar recurrence holds for the backward messages $B_t^b$. Running the resulting dynamic program takes $O(K^2 T q)$ time and requires $O(K T q)$ storage, where $K$ is the number of tags, $T$ is the sequence length and $q$ is the number of active features. Note that this is the same order of dependence as normal CRF training, but there is an additional dependence on the number of active features $q$, which makes training slower.
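To illustrate, here is a sketch of the forward half of this dynamic program in NumPy; it is our 0-based reformulation with hypothetical array conventions, reusing the edge marginals from the earlier sketch. The backward messages are symmetric, and the conditional feature expectation for a given $(a, b, t)$ is then assembled as $g + F + B$:

```python
import numpy as np

def forward_feature_messages(mu, local_feats):
    """Vector-valued forward messages: F[t, a, :] is the expected sum of
    local feature vectors over y_0..y_{t-1} given y_t = a.
    mu:          (T-1, K, K) edge marginals mu[t, a, b]
    local_feats: (T-1, K, K, d) local feature vectors g_{t+1}(a, b, x)."""
    Tm1, K, _, d = local_feats.shape
    F = np.zeros((Tm1 + 1, K, d))
    for t in range(Tm1):
        node = mu[t].sum(axis=0)                     # p(y_{t+1} = a)
        # cond[b, a] = p(y_t = b | y_{t+1} = a), from the edge marginals
        cond = mu[t] / np.maximum(node[None, :], 1e-12)
        contrib = local_feats[t] + F[t][:, None, :]  # g + F, shape (K, K, d)
        F[t + 1] = np.einsum('ba,bad->ad', cond, contrib)
    return F
```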
Fast Gradient Computations
In this section, we provide two ways to further improve the efficiency of the gradient calculation, based on ignoring long-range interactions and on exploiting feature sparsity.

Exploiting Feature Sparsity and Co-occurrence
In each forward-backward pass over a training example, we need to compute the conditional expectations for all features active in that example. Naively applying the dynamic program in Section 3 is $O(K^2 T)$ for each active feature, and the total complexity has to factor in the number of active features, $q$. Although $q$ only scales linearly with sentence length, in practice this number can get large pretty quickly. For example, in the NER tagging experiments (cf. Section 5), the average number of active features per token is about 20, which means $q \approx 20T$; this term quickly dominates the computational costs. Fortunately, in sequence tagging and other NLP tasks, the majority of features are sparse and they often co-occur. That is, some of the active features fire, and only fire, at the same locations in a given sequence. This happens when a particular token triggers multiple rare features. We observe that all indicator features that only fired once at position $t$ have the same conditional expectations (and model expectations). As a result, we can collapse such a group of features into a single feature as a preprocessing step, to avoid computing identical expectations for each of the features. Doing so on the same NER tagging experiments cuts down $q/T$ from 20 to less than 5, and gives us a 4 times speedup at no loss of accuracy. The exact same trick is applicable to the general CRF gradient computation as well and gives a similar speedup.

Short-range interactions
It is also possible to speed up the method by resorting to approximate gradients. In our case, the dynamic program from Section 3, together with the trick described above, ran in a manageable amount of time; the techniques developed here, however, could prove to be useful on larger tasks. Let us rewrite the quantity we want to compute slightly differently (again, for all $a, b, t$):

$$\sum_{i=1}^{T} \mathbb{E}_{p_\theta(y \mid x, y_{t-1}=a, y_t=b)}[g_i(y_{i-1}, y_i, x)]. \qquad (19)$$

The intuition is that conditioned on $y_{t-1}, y_t$, the terms $g_i(y_{i-1}, y_i, x)$ where $i$ is far from $t$ will be close to $\mathbb{E}_{p_\theta(y \mid x)}[g_i(y_{i-1}, y_i, x)]$. This motivates replacing the former with the latter whenever $|i - t| \ge r$, where $r$ is some window size. This approximation results in an expression which only has to consider the sum of the local feature vectors from $t-r$ to $t+r$, which is captured by $G_{t-r:t+r}$:

$$\mathbb{E}_{p_\theta(y \mid y_{t-1}=a, y_t=b, x)}[f(y, x)] - \mathbb{E}_{p_\theta(y \mid x)}[f(y, x)] \approx \mathbb{E}_{p_\theta(y \mid y_{t-1}=a, y_t=b, x)}[G_{t-r:t+r}] - \mathbb{E}_{p_\theta(y \mid x)}[G_{t-r:t+r}]. \qquad (20)$$

We can further approximate this last expression by letting $r = 0$, obtaining:

$$g_t(a, b, x) - \mathbb{E}_{p_\theta(y \mid x)}[g_t(y_{t-1}, y_t, x)]. \qquad (21)$$

The second expectation can be computed from the edge marginals. The accuracy of this approximation hinges on the lack of long-range dependencies. Equation (21) shows the case of $r = 0$; this takes almost no additional effort to compute. However, for some of our experiments, we observed a 20% difference with the real derivative. For $r > 0$, the computational savings are more limited, but the bounded-window method is easier to implement.
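For the $r = 0$ case, the whole correction term can be read off the edge marginals; a short sketch with the same array conventions as the earlier snippets:

```python
import numpy as np

def r0_gradient_term(mu, local_feats):
    """Short-range (r = 0) approximation (21): for every (a, b, t),
    g_t(a, b, x) - E_{p(y|x)}[g_t(y_{t-1}, y_t, x)], computed from the
    edge marginals alone.
    mu:          (T-1, K, K) edge marginals
    local_feats: (T-1, K, K, d) local feature vectors."""
    # expected local feature vector at each transition, shape (T-1, d)
    expected_g = np.einsum('tab,tabd->td', mu, local_feats)
    return local_feats - expected_g[:, None, None, :]
```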
Experiments
We show experimental results on the CoNLL-2003 Named Entity Recognition (NER) task, the SANCL Part-of-speech (POS) tagging task, and several document classification tasks. (The document classification data are available at http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets and http://www.cad.zju.edu.cn/home/dengcai/Data/TextData.html.) The datasets used are described in Table 1. We used standard splits whenever available; otherwise we split the data at random into a test set and a train set of equal sizes (RCV14, TDT2). CoNLL has a development set of size 51578, which we used to tune regularization parameters. The SANCL test set is divided into 3 genres, namely answers, newsgroups, and reviews, each of which has a corresponding development set. (The SANCL dataset has two additional genres, emails and weblogs, that we did not use, as we did not have access to development sets for these genres.)

Multiclass Classification
We begin by testing our regularizer in the simple case of classification where $\mathcal{Y} = \{1, 2, \dots, K\}$ for $K$ classes. We examine the performance of the noising regularizer in both the fully supervised setting and the transductive learning setting. In the transductive learning setting, the learner is allowed to inspect the test features at train time (without the labels). We used the method described in Section 2.1 for transductive dropout.

[Table 2: Classification performance and transductive learning results on some standard datasets. None: use no regularization, Drop: quadratic approximation to the dropout noise (8), +Test: also use the test set to estimate the noising regularizer (11).]

Semi-supervised Learning with Feature Noising
In the transductive setting, we used test data (without labels) to learn a better regularizer. As an alternative, we could also use unlabeled data in place of the test data to accomplish a similar goal; this leads to a semi-supervised setting. To test the semi-supervised idea, we use the same datasets as above. We split each dataset evenly into 3 thirds that we use as a training set, a test set and an unlabeled dataset. Results are given in Table 3. In most cases, our semi-supervised accuracies are lower than the transductive accuracies given in Table 2; this is normal in our setup, because we used less labeled data to train the semi-supervised classifier than the transductive one. (The CoNLL results look somewhat surprising, as the semi-supervised results are better than the transductive ones. The reason for this is that the original CoNLL test set came from a different distribution than the training set, and this made the task more difficult. Meanwhile, in our semi-supervised experiment, the test and train sets are drawn from the same distribution, and so our semi-supervised task is actually easier than the original one.)

The Second-Order Approximation
The results reported above all rely on the approximate dropout regularizer (8) that is based on a second-order Taylor expansion. To test the validity of this approximation we compare it to the Gaussian method developed by Wang and Manning (2013) on a two-class classification task. We use the 20-newsgroups alt.atheism vs soc.religion.christian classification task; results are shown in Figure 2. There are 1427 examples with 22178 features, split evenly and randomly into a training set and a test set.

[Figure 2: Effect of $\lambda$ in $\lambda \|\theta\|_2^2$ on the test-set performance. Plotted is the test set accuracy with logistic regression as a function of $\lambda$ for the L2 regularizer, Gaussian dropout (Wang and Manning, 2013) + additional L2, and quadratic dropout (8) + L2 described in this paper. The default noising regularizer is quite good, and additional L2 does not help. Notice that no choice of $\lambda$ in L2 can help us combat overfitting as effectively as (8) without underfitting.]

Over a broad range of $\lambda$ values, we find that dropout plus L2 regularization performs far better than using just L2 regularization for any value of $\lambda$. We see that Gaussian dropout appears to perform slightly better than the quadratic approximation discussed in this paper. However, our quadratic approximation extends easily to the multiclass case and to structured prediction in general, while Gaussian dropout does not. Thus, it appears that our approximation presents a reasonable trade-off between computational efficiency and prediction accuracy.

CRF Experiments
We evaluate the quadratic dropout regularizer in linear-chain CRFs on two sequence tagging tasks: the CoNLL 2003 NER shared task (Tjong Kim Sang and De Meulder, 2003) and the SANCL 2012 POS tagging task (Petrov and McDonald, 2012). The standard CoNLL-2003 English shared task benchmark dataset (Tjong Kim Sang and De Meulder, 2003) is a collection of documents from Reuters newswire articles, annotated with four entity types: Person, Location, Organization, and Miscellaneous. We predicted the label sequence $\mathcal{Y} = \{\text{LOC, MISC, ORG, PER, O}\}^T$ without considering the BIO tags.
For training the CRF model, we used a comprehensive set of features from Finkel et al. (2005) that gives state-of-the-art results on this task. A total number of 437906 features were generated on the CoNLL-2003 training dataset. The most important features are:
• The word, word shape, and letter n-grams (up to 6-gram) at the current position
• The prediction, word, and word shape of the previous and next position
• Previous word shape in conjunction with current word shape
• Disjunctive word set of the previous and next 4 positions
• Capitalization pattern in a 3 word window
• Previous two words in conjunction with the word shape of the previous word
• The current word matched against a list of name titles (e.g., Mr., Mrs.)

The F β=1 results are summarized in Table 4. We obtain a 1.6% and 1.1% absolute gain on the test and dev set, respectively. Detailed results are broken down by precision and recall for each tag and are shown in Table 6. These improvements are significant at the 0.1% level according to the paired bootstrap resampling method of 2000 iterations (Efron and Tibshirani, 1993).

For the SANCL (Petrov and McDonald, 2012) POS tagging task, we used the same CRF framework with a much simpler set of features:
• word unigrams: $w_{-1}$, $w_0$, $w_1$
• word bigrams: $(w_{-1}, w_0)$ and $(w_0, w_1)$

The F β=1 scores for the three official evaluation sets are given in Table 5. We obtained a small but consistent improvement using the quadratic dropout regularizer in (14) over the L2-regularized CRF baseline. Although the difference on SANCL is small, the performance differences on the test sets of reviews and newsgroups are statistically significant at the 0.1% level. This is also interesting because here is a situation where the features are extremely sparse, L2 regularization gave no improvement, and where regularization overall matters less.

Conclusion
We have presented a new regularizer for learning log-linear models such as multiclass logistic regression and conditional random fields. This regularizer is based on a second-order approximation of feature noising schemes, and attempts to favor models that predict confidently and are robust to noise in the data. In order to apply our method to CRFs, we tackle the key challenge of dealing with feature correlations that arise in the structured prediction setting in several ways. In addition, we show that the regularizer can be applied naturally in the semi-supervised setting. Finally, we applied our method to a range of different datasets and demonstrate consistent gains over standard L2 regularization. Investigating how to better optimize this non-convex regularizer online and convincingly scale it to the semi-supervised setting seem to be promising future directions.

[Table 3: Semi-supervised learning results on some standard datasets. A third (33%) of the full dataset was used for training, a third for testing, and the rest as unlabeled.]

Table 4: CoNLL summary of F β=1 results. None: no regularization, Drop: quadratic dropout regularization (14) described in this paper.
        None    L2      Drop
Dev     89.40   90.73   91.86
Test    84.67   85.82   87.42
Table 5: SANCL POS tagging F β=1 scores for the 3 official evaluation sets.
                   None    L2      Drop
newsgroups  Dev    91.34   91.34   91.47
            Test   91.44   91.44   91.81
reviews     Dev    91.97   91.95   92.10
            Test   90.70   90.67   91.07
answers     Dev    90.78   90.79   90.70
            Test   91.00   90.99   91.09

[Table 6: CoNLL NER results broken down by tags and by precision, recall, and F β=1. Top: development set, bottom: test set performance.]
(f) CoNLL test set with dropout regularization:
Precision   Recall    F β=1
86.26%      87.74%    86.99
81.52%      77.34%    79.37
88.29%      81.89%    84.97
92.15%      92.68%    92.41
88.40%      86.45%    87.42

Acknowledgements
The authors would like to thank the anonymous reviewers for their comments. We gratefully acknowledge the support of the Defense Advanced Research Projects Agency (DARPA) Broad Operational Language Translation (BOLT) program through IBM. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the view of DARPA or the US government. S. Wager is supported by a BC and EJ Eaves SGF Fellowship.

References
Yaser S. Abu-Mostafa. 1990. Learning from hints in neural networks. Journal of Complexity, 6(2):192-198.
Chris M. Bishop. 1995. Training with noise is equivalent to Tikhonov regularization. Neural Computation, 7(1):108-116.
Robert Bryll, Ricardo Gutierrez-Osuna, and Francis Quek. 2003. Attribute bagging: improving accuracy of classifier ensembles by using random feature subsets. Pattern Recognition, 36(6):1291-1302.
Chris J.C. Burges and Bernhard Schölkopf. 1997. Improving the accuracy and speed of support vector machines. In Advances in Neural Information Processing Systems, pages 375-381.
Brad Efron and Robert Tibshirani. 1993. An Introduction to the Bootstrap. Chapman & Hall, New York.
Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 363-370.
Yves Grandvalet and Yoshua Bengio. 2005. Entropy regularization. In Semi-Supervised Learning. Springer, United Kingdom.
Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R. Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580.
Feng Jiao, Shaojun Wang, Chi-Hoon Lee, Russell Greiner, and Dale Schuurmans. 2006. Semi-supervised conditional random fields for improved sequence segmentation and labeling. In Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics, ACL-44, pages 209-216.
Thorsten Joachims. 1999. Transductive inference for text classification using support vector machines. In Proceedings of the International Conference on Machine Learning, pages 200-209.
Wei Li and Andrew McCallum. 2005. Semi-supervised sequence modeling with syntactic topic models. In Proceedings of the 20th National Conference on Artificial Intelligence - Volume 2, AAAI'05, pages 813-818.
Gideon S. Mann and Andrew McCallum. 2007. Simple, robust, scalable semi-supervised learning via expectation regularization. In Proceedings of the International Conference on Machine Learning.
Kiyotoshi Matsuoka. 1992. Noise injection into inputs in back-propagation learning. IEEE Transactions on Systems, Man and Cybernetics, 22(3):436-440.
Slav Petrov and Ryan McDonald. 2012. Overview of the 2012 shared task on parsing the web. Notes of the First Workshop on Syntactic Analysis of Non-Canonical Language (SANCL).
Salah Rifai, Yann Dauphin, Pascal Vincent, Yoshua Bengio, and Xavier Muller. 2011a. The manifold tangent classifier. Advances in Neural Information Processing Systems, 24:2294-2302.
Salah Rifai, Xavier Glorot, Yoshua Bengio, and Pascal Vincent. 2011b. Adding noise to the input of a model trained with a regularized objective. arXiv preprint arXiv:1104.3250.
Patrice Y. Simard, Yann A. Le Cun, John S. Denker, and Bernard Victorri. 2000. Transformation invariance in pattern recognition: Tangent distance and propagation. International Journal of Imaging Systems and Technology, 11(3):181-197.
Andrew Smith, Trevor Cohn, and Miles Osborne. 2005. Logarithmic opinion pools for conditional random fields. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 18-25.
Charles Sutton, Michael Sindelar, and Andrew McCallum. 2005. Feature bagging: Preventing weight undertraining in structured discriminative learning. Center for Intelligent Information Retrieval, University of Massachusetts.
Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003 - Volume 4, CONLL '03, pages 142-147.
Laurens van der Maaten, Minmin Chen, Stephen Tyree, and Kilian Q. Weinberger. 2013. Learning with marginalized corrupted features. In Proceedings of the International Conference on Machine Learning.
Stefan Wager, Sida Wang, and Percy Liang. 2013. Dropout training as adaptive regularization. arXiv preprint arXiv:1307.1493.
Li Wan, Matthew Zeiler, Sixin Zhang, Yann LeCun, and Rob Fergus. 2013. Regularization of neural networks using dropconnect. In Proceedings of the International Conference on Machine Learning.
Sida Wang and Christopher D. Manning. 2013. Fast dropout training. In Proceedings of the International Conference on Machine Learning.
169,525,462
[]
Towards a rule-guided derivation of aspectual readings in Russian
Barbara Sonnenhauser (basonne@rz.uni-leipzig.de), University of Leipzig, Beethovenstr. 15, 04107 Leipzig, Germany. Fès, 2004.
Mots-clés: indétermination, aspect, interprétation, sémantique, pragmatique
Keywords: underspecification, aspect, interpretation, semantics, pragmatics

Résumé: Les significations des expressions dans les langues naturelles sont souvent indéterminées (sous-spécifiées) et nécessitent d'être enrichies avant de devenir des propositions complètes. La sémantique générale des expressions linguistiques doit être complétée par les inférences pragmatiques, identifiées et captées d'une manière régulière et permettant ainsi un traitement opérationnel et même informatique. Cet article étudie l'indétermination de l'aspect imperfectif en russe et propose un cadre sémantique et pragmatique pour l'identification de ses différentes valeurs sémantiques à la base de règles.

Abstract: Natural language expressions are underspecified and require enrichment to develop into full-fledged propositions. Their sense-general semantics must be complemented with pragmatic inferences that have to be systematically figured out and pinned down in a principled way, so as to make them suitable inputs for NLP algorithms. This paper deals with the underspecified ipf aspect in Russian and introduces a semantic and pragmatic framework that might serve as the basis for a rule-guided derivation of its different readings.

... readings indicates their at least partial pragmatic character. A further difficulty for NLP applications is that, presuming cooperativity, any utterance can receive an interpretation by appropriately accommodating the context.

(1) a. actual-processual reading: Kogda ...

This reading poses difficulties for accounts of the ipf in terms of 'incompletedness', as the event in question is completed. Here, English does not allow the progressive aspect, which is marked for ϕdyn-selection (section 2) and therefore is incompatible with completedness. This reading arises with any aspectual form in the presence of adverbials of habituality.

Basic semantics
Semantically, a 'selectional theory' of aspect is assumed (Bickel 1996), where aspect selects phases (ϕ) or boundaries (τ). Presuming a tripartite event structure (Moens/Steedman 1988) consisting of preparation phase (dynamic phase ϕdyn), culmination point (boundary τ) and consequent state (static phase ϕstat), there are three possibilities for that selection, i.e., for making the selected part of the event visible and accessible for truth-conditional evaluation at a validation interval VI. The non-selected parts of the event are presupposed or left to implicatures. Note that aspect requires a certain input, and if this input is not given by the verbal basis, it has to be adjusted accordingly. (Contrary to what an anonymous reviewer pointed out, this analysis does make the correct predictions about 'He is being silly' meaning 'He is acting silly': the progressive requires a dynamic phase to be present, and this phase is pragmatically induced, resulting in the respective interpretation; M-inference, cf. section 4.) The marked members of the respective aspectual oppositions explicitly select a certain part; the unmarked forms are sense-general, and their meaning has to be specified semantically or pragmatically. The readings of ipf can be grouped according to the character of their VI, which may be retrospective or synchronous (bounded or unbounded, cf. Padučeva 1996) with respect to the selected part. The relation characterizes the values ipf may acquire in interpretation. (That we have indeed to distinguish between those three possibilities is indicated by a look at Turkish, which has morphological means to express the respective relation; cf. Sonnenhauser 2003.) In most cases, VI is lexically specified and serves as a hint as to which group of readings (I-III) applies. The respective reading then is derived by means of context and world-knowledge, cf. (7) and (8) below.
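As an illustration only, the tripartite event structure and the selection operation can be encoded as follows; this is a toy Python sketch with our own names, not part of the paper's formalism:

```python
from dataclasses import dataclass

@dataclass
class Event:
    """Toy encoding of the tripartite event structure: preparation
    phase (phi_dyn), culmination (tau), consequent state (phi_stat)."""
    phi_dyn: bool   # has a dynamic (preparation) phase
    tau: bool       # has a culmination boundary
    phi_stat: bool  # has a consequent state

def select(event, selector):
    """Return the part made visible at VI, or flag a mismatch that
    would have to be resolved by semantic/pragmatic adjustment."""
    wanted = {"phase-selector": "phi_dyn", "boundary-selector": "tau"}[selector]
    return wanted if getattr(event, wanted) else f"mismatch: induce {wanted}"

win = Event(phi_dyn=False, tau=True, phi_stat=True)  # e.g. 'vyigrat´' (win)
print(select(win, "boundary-selector"))   # tau
print(select(win, "phase-selector"))      # mismatch: induce phi_dyn
```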
Basic pragmatics
The pragmatic principles are Levinson's (2000) default heuristics for interpretation based on the Gricean Maxims of Conversation (Grice 1989): Q-inferences are based on the first quantity maxim ('make your statement as informative as possible') and license inference to the negation or invalidity of a stronger expression; M-inferences stem from violations of the manner maxim ('avoid prolixity') and license the inference from marked expressions to marked interpretations; I-inferences are based on the second quantity maxim ('do not say more than necessary') and allow for inference to a stereotype. Contrary to the Gricean view, however, these are assumed to work also on the subpropositional level, giving rise to 'explicatures' (Carston 2002), which enrich underspecified lexical representations. Q-inferences derive the meaning of unmarked forms by giving rise to scalar implicatures (scale <pf, ipf>), meaning that the use of the weaker element (ipf) entitles the hearer to infer the non-validity of the stronger expression (pf), thereby giving rise to the three possible values stated above (figure 1). M-inferences occur here with mismatches between aspectual selector and verbal basis, i.e. with the application of a ϕ- or τ-selector on a basis that does not provide the respective feature, which has to be induced semantically or pragmatically, thereby enriching the logical structure. This can be systematically captured and formalized by 'coercion operators' (Thomas/Pulman 1999; Pulman 1997). I-inferences refer back to world-knowledge, thereby enriching the lexical meaning of the aspecto-temporal forms. As frequently encountered concepts are more likely to get activated, they constitute the stereotypes to which the I-inferences are drawn.

Towards a rule-guided derivation
A list of readings has to be established (see figure 1), the factors involved in their derivation have to be fixed, and rules of interaction have to be stated that can be expressed in the propositional logic form A → B (cf. Vazov/Lapalme 2000). Interpretation of aspectual forms proceeds incrementally, i.e. information once provided and processed can't be undone. Input factors for algorithms are the following: verbs indexed for the ϕ and τ they contain; lexical items indexed for whether they add ϕ or τ; and aspectual selectors indexed for what they select and for their status within the language-specific markedness relation. That is how Q-inferences are drawn. VI constrains the interpretations of the unmarked aspectual partner. The default combinations of base and selector have to be stated, as well as rules for resolving the mismatches. M-inferences then can be pinned down by coercion operators (Pulman 1997; Thomas/Pulman 1999). More difficult is the problem of how to specify verbs for the commonsense knowledge they provide access to, which is indispensable for I-inferences to be drawn. One means would be corpus analysis in order to detect regularities and co-occurrences of lexical items that might hint at a conceptual connection. As the factor 'probability' can't be eliminated, a condition preferring the shortest line of reasoning has to be implemented (Thomas/Pulman 1999). The default case is a fit of basis and marker, where the verbal basis provides the necessary input for the marker to apply. For ipf, the conditions have to be stated under which the three possibilities (figure 1) get activated; a toy sketch of this rule format follows below.
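The sketch below illustrates the IF-THEN rule format in Python; the lexicon entries, the VI values, and the mapping from conditions to reading groups are our simplified stand-ins for the indexing described above, not the paper's actual rule base:

```python
VERBS = {            # hypothetical lexicon: features provided by the base
    "guljat'": {"phase"},        # 'go for a walk' provides a phase
    "vyigrat'": {"boundary"},    # 'win' provides only a boundary
}

ADVERBIALS = {       # hypothetical: how an adverbial fixes VI
    "ot...do": "retrospective",               # 'from...to' (duration)
    "obyčno": "synchronous-unbounded",        # 'usually'
    "v vosem' časov": "synchronous-bounded",  # 'at eight o'clock'
}

RULES = [            # IF conditions THEN reading group (cf. figure 1)
    (lambda asp, feats, vi: asp == "ipf" and "phase" in feats
         and vi == "retrospective", "group III (e.g. durative)"),
    (lambda asp, feats, vi: asp == "ipf" and "phase" in feats
         and vi == "synchronous-bounded", "group I (e.g. actual-processual)"),
    (lambda asp, feats, vi: asp == "ipf" and "phase" in feats
         and vi == "synchronous-unbounded", "group II (e.g. habitual/inactual)"),
    (lambda asp, feats, vi: asp == "ipf" and "phase" not in feats,
         "coerce: induce a phase ('iterate'/'stretch'), then re-apply rules"),
]

def derive_reading(aspect, verb, adverbial=None):
    feats = VERBS.get(verb, set())
    vi = ADVERBIALS.get(adverbial)
    for condition, reading in RULES:
        if condition(aspect, feats, vi):
            return reading
    return "underspecified: disambiguate via context/world-knowledge"

print(derive_reading("ipf", "guljat'", "ot...do"))   # group III
print(derive_reading("ipf", "vyigrat'"))             # coercion needed
```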
Depending on the semantic representation of the verb, implicatures or presuppositions may arise. Ipf with the structure [ϕ τ] leaves the reaching of the boundary as an implicature; ipf with [τ ϕ] leaves the initial boundary as a presupposition. Whenever an aspectual marker is applied on a basis not providing the relevant feature (ϕ or τ) for it to apply, that feature is semantically or pragmatically induced in order to eliminate that mismatch. Coercion operators capture this recategorization process (Pulman 1997):

(9) a. Ivan vyigral gonku. (Ivan win:PAST:pf race:ACC) 'Ivan won the race.'
Here, pf is applied to a verb that provides a τ; no coercion is necessary.
b. Ivan vyigryval gonku (četyre raza). (Ivan win:PAST:ipf race:ACC (four times)) 'Ivan won the race four times / was winning the race.'
The application of ipf in (9b) requires a ϕ, which the verb vyigrat´ ('win') does not provide. So it has to be induced by iteration or by zooming in on ϕdyn. Two coercion operators may be applied: "iterate / stretch: point → process" (Pulman 1997). For the consequent-state reading in (9c), the prefix vy- induces the boundary required for pf to apply. The reading arises due to the particle uže ('already'); the coercion operator is "add-cstate: X → <X, state>, where X is point or process" (Pulman 1997). The rules for (9b) are:

(10) a. IF ipf is applied to a verb providing no phase AND a lexical item indicating iteration is present THEN induce the phase by application of 'iterate'
b. IF ipf is applied to a verb providing no phase AND an adverbial/clause indicating incidence is present THEN induce the phase by application of 'stretch'

The application of ipf onto a verbal basis providing merely a τ (prior to coercion) is both pragmatically and morphologically marked, but ipf does not lose its semantic unmarkedness. Though interpretation in terms of coercion is compositional, the specific reading this coercion gives rise to depends on linguistic context and world-knowledge (de Swart 1998); cf. (11). Whereas (c) can be disambiguated by fixing VI as retrospective, (a) and (b) cannot be distinguished by VI alone, as both require it to be synchronous. The distinction between the possible readings is left to contextual disambiguation and world-knowledge. Gaining probability values for interpretations by a statistical approach taking into account judgements of native speakers helps (Glovinskaja 1982), but the probability rankings can be overridden by the lexical content of verbal phrases.

Concluding remarks
The framework presented here allows for taking also pragmatic reasoning processes into account in computing interpretations. Without a principled account of inferential principles, NLP applications have to fail. The rather sketchy picture presented here is to serve as a starting point for identifying semantic and pragmatic factors in the aspecto-temporal system of Russian. A lot of problems remain to be solved. Corpus analyses and the appropriate annotation of verbs, aspect markers and adverbials are the prerequisite for formulating rules that enable the systematic derivation and computation of the readings. Furthermore, the interaction of the different factors has to be studied in a wider domain, i.e. on the paragraph level.

[Figure 1: Classification of the readings of Russian ipf aspect.]

This reading arises mainly with a specific group of verbs, combined with manner adverbials.
c. potential reading: On chorošo igral v šachmaty. (he well play:PAST:ipf chess) 'He could play chess very well.' = 'He was a good chess-player.'
d.
habitual reading: deduška obyčno guljal so vnukami, s nimi igral v futbol, kuril trubku, ... (grandpa usually take-a-walk:PAST:ipf with grandchildren, with them play:PAST:ipf football, smoke:PAST:ipf pipe, ...) 'Grandpa used to go for a walk with the grandchildren, he used to play football with them, he used to smoke a pipe, ...'
(The readings listed here involve different degrees of context-dependency.)

IF ipf is applied to a verb providing a phase AND there is an adverbial fixing VI as retrospective THEN the aspectual form gets a reading out of group III.
This shows the incremental way of interpretation, whereby the inner parts are left intact:
b. [syn.unbounded obyčno [retro ot...do [syn.bounded guljal]]]
A synchronous VI may be bounded or unbounded (group I and II, table 1), cf.:
IF ipf is applied to a verb providing a phase AND there is an adverbial fixing VI as synchronous bounded/unbounded THEN the aspectual form gets a reading out of group I/II.
Here, VI, fixed primarily by temporal or manner adverbials (e.g. vse bol´še 'more and more', chorošo 'well'), is decisive. Adverbials of cardinality and duration fix VI as retrospective and the reading as out of group III. The rule for this line of interpretation can be stated as follows (adopted from Vazov/Lapalme 2000): (4)

(5) durative reading: Ja guljala ot trech do pjati. (I go-for-a-walk:PAST:ipf from three:GEN to five:GEN) 'From three to five, I went for a walk.'
This interpretation can be overridden if VI is turned into a synchronous one by adverbials of the type vsegda ('always') or obyčno ('usually').
(6) habitual reading: Ja obyčno guljala ot trech do pjati. (I usually go-for-a-walk:PAST:ipf from three:GEN to five:GEN) 'I usually went for a walk from three to five.'
(7) actual-processual reading: V vosem´ časov, ja čitala knigu. (at eight o'clock, I read:PAST:ipf book:ACC) 'At eight o'clock, I was reading a book.'
(8) inactual reading: Ran´še, on rabotal v universitete. (before, he work:PAST:ipf at university) 'He used to work at university.' (= 'He was working as a teacher.')

References
Atlas, J. (1989), Philosophy without Ambiguity. Oxford, Clarendon Press.
Bickel, B. (1996), Aspect, Mood and Time in Belhare. Zürich, ASAS.
Carston, R. (2002), Thoughts and Utterances. Oxford, Blackwell.
Glovinskaja, M. J. (1982), Semantičeskije Tipy Vidovych Protivopostavlenija Russkogo Glagola. Moskva, Nauka.
Grice, P. (1989), Studies in the Way of Words. Cambridge, Harvard University Press.
Levinson, S. (2000), Presumptive Meanings. Cambridge, London, MIT Press.
Moens, M., Steedman, M. (1988), Temporal ontology and temporal reference. Computational Linguistics, Vol. 14/2, pp. 29-43.
Padučeva, E. V. (1996), Semantičeskije Issledovanija. Moskva, «Jazyki russkoj kul´tury».
Pulman, S. (1997), Aspectual shift as type coercion. Transactions of the Philological Society, Vol. 95/2, pp. 279-317.
Sonnenhauser, B. (2003), Aspect and the semantics-pragmatics interface. Proceedings of RANLP 03, Borovets.
de Swart, H. (1998), Aspect shift and type coercion. Natural Language and Linguistic Theory, Vol. 16/2, pp. 347-385.
Thomas, J., Pulman, S. (1999), Bidirectional interpretation of tense and aspect. Bunt, H. et al. (eds), Proceedings of the 3rd International Workshop on Computational Semantics. Tilburg, pp. 247-263.
Vazov, N., Lapalme, G. (2000), Are the temporal structures of texts algorithms?. Proceedings of the 7th International Conference on Principles of Knowledge Representation and Reasoning. Breckenridge, pp. 79-86.
6,225,233
Coping with Extragrammaticality
Practical natural language interfaces must exhibit robust behaviour in the presence of extragrammatical user input. This paper classifies different types of grammatical deviations and related phenomena at the lexical and sentential levels, discussing recovery strategies tailored to specific phenomena in the classification. Such strategies constitute a tool chest of computationally tractable methods for coping with extragrammaticality in restricted domain natural language.
[ 16742497, 8710339, 7681159 ]
Coping with Extragrammaticality
Jaime G. Carbonell and Philip J. Hayes, Computer Science Department, Carnegie-Mellon University, Pittsburgh, PA 15213, USA

Practical natural language interfaces must exhibit robust behaviour in the presence of extragrammatical user input. This paper classifies different types of grammatical deviations and related phenomena at the lexical and sentential levels, discussing recovery strategies tailored to specific phenomena in the classification. Such strategies constitute a tool chest of computationally tractable methods for coping with extragrammaticality in restricted domain natural language. Some of the strategies have been tested and proven viable in existing parsers.

Introduction
Any robust natural language interface must be capable of processing input utterances that deviate from its grammatical and semantic expectations. Many researchers have made this observation and have taken initial steps towards coverage of certain classes of extragrammatical constructions. Since robust parsers must deal primarily with input that does meet their expectations, the various efforts at coping with extragrammaticality have generally been structured as extensions to existing parsing methods. Probably the most popular approach has been to extend syntactically-oriented parsing techniques employing Augmented Transition Networks (ATNs) [21,24,25,29]. Other researchers have attempted to deal with ungrammatical input through network-based semantic grammar techniques [19,20], through extensions to pattern matching parsing in which partial pattern matching is allowed [16], through conceptual case frame instantiation [12,22], and through approaches involving multiple cooperating parsing strategies [7,9,18].

Given the background of existing work, this paper focuses on three major objectives:
1. to create a taxonomy of grammatical deviations covering a broad range of extragrammaticalities,
2. to outline strategies for processing many of these deviations,
3. to assess how easily these strategies can be employed in conjunction with existing parsing methods.

The overall result should be a synthesis of different parse-recovery strategies organized by the grammatical phenomena they address (or violate), an evaluation of how well the strategies integrate with existing approaches to parsing extragrammatical input, and a set of characteristics desirable in any parsing process dealing with extragrammatical input. (This research was sponsored in part by the Air Force Office of Scientific Research under Contract AFOSR-82-0219 and in part by Digital Equipment Corporation as part of the XCALIBUR project.) We hope this will aid researchers designing robust natural language interfaces in two ways:
1. by providing a tool chest of computationally effective approaches to cope with extragrammaticality;
2. by assisting in the selection of a basic parsing methodology in which to embed these recovery techniques.

In assessing the degree of compatibility between recovery techniques and various approaches to parsing, we will avoid the issue of whether a given recovery technique can be used with a specific approach to parsing. The answer to such a question is almost always affirmative. Instead, we will be concerned with how naturally the recovery strategies fit with the various parsing approaches.
In particular, we will consider the computational tractability of the recovery strategies and how easily they can obtain the information they need to operate in the context of different parsing approaches. Extragrammaticalities include patently ungrammatical constructions, which may nevertheless be semantically comprehensible, as well as lexical difficulties (e.g. misspellings), violations of semantic constraints, utterances that may be grammatically acceptable but are beyond the syntactic coverage of the system, ellipsed fragments and other dialogue phenomena, and any other difficulties that may arise in parsing individual utterances. An extragrammaticality is thus defined with respect to the capabilities of a particular system, rather than with respect to an absolute external competence model of the ideal speaker. Extragrammaticality may arise at various levels: lexical, sentential, and dialogue. This paper addresses the first two categories; the third is discussed in [8,11]. Our discussions are based on direct experience with various working parsers: FLEXP, CASPAR and DYPAR [7,8,16].

Lexical Level Extragrammaticalities
One of the most frequent parsing problems is finding an unrecognizable word in the input stream. The following sections discuss the underlying reasons for the presence of unrecognizable words and describe suitable recovery strategies.

The unknown word problem
The word is a legitimate lexeme but is not in the system's dictionary. There are three reasons for this:
• The word is outside the intended coverage of the interface (e.g. there is no reason why a natural language interface to an electronic mail system should know words like "chair" or "sky", which cannot be defined in terms of concepts in its semantic domain).
• The word refers to a legitimate domain concept or combination of domain concepts, but was not included in the dictionary (e.g. a word like "forward" [a message] can be defined as a command verb, its action can be clearly specified, and the objects upon which it operates, an old message and a new recipient, are already well-formed domain concepts).
• The word is a proper name or a unique identifier, such as a catalogue part name/number, not heretofore encountered by the system, but recognizable by a combination of contextual expectations and morphological or orthographic features (e.g., capitalization).

In the first situation, there is no meaningful recovery strategy other than focused interaction [15] to inform the user of the precise difficulty. In the third, little action is required beyond recognizing the proper name and recording it appropriately for future reference. The second situation is more complicated; three basic recovery strategies are possible:
1. Follow the KLAUS [14] approach, where the system temporarily wrests initiative from the user and plays a well designed "twenty questions" game, classifying the unknown term syntactically, and relating it semantically to existing concepts encoded in an inheritance hierarchy. This method has proven successful for verbs, nouns and adjectives, but only when they turn out to be instances of predefined general classes of objects and actions in the domain model.
2. Apply the project and integrate method [6] to infer the meaning and syntactic category of the word from context. This method has proven useful for nouns and adjectives whose meaning can be viewed as a recombination of features present elsewhere in the input.
Unlike the KLAUS method, it operates in the background, placing no major run-time burden on the user. However, it remains highly experimental and may not prove practical without user confirmation.
3. Interact with the user in a focused manner to provide a paraphrase of the segment of input containing the unknown word. If this paraphrase results in the desired action, it is stored and becomes the meaning of the new word in the immediate context in which it appeared. The LIFER system [20] had a rudimentary capacity for defining synonymous phrases. A more general method would distinguish between true synonymy and functional equivalence in order to classify the new word or phrase in different semantic contexts.

Misspellings
Misspellings arise when an otherwise recognizable lexeme has letters omitted, substituted, transposed, or spuriously inserted. Misspellings are the most common form of extragrammaticality encountered by natural language interfaces. Usually, a word is misspelt into an unrecognizable character string. But occasionally a word is misspelt into another word in the dictionary that violates semantic or syntactic expectations. For instance:

Copy the flies from the accounts directory to my directory

Although "flies" may be a legitimate word in the domain of a particular interface (e.g., the files could consist of statistics on med-fly infestation in California), it is obvious to the human reader that there is a misspelling in the sentence above.

There are well-known algorithms for matching a misspelt word against a set of possible corrections [13], and the simplest recovery strategy is to match unknown words against the set of all words in an interface's dictionary. However, this obviously produces incorrect results when a word is misspelt into a word already in the dictionary, and can produce unnecessary ambiguities in other cases. Superior results are obtained by making the spelling correction sensitive to the parser's syntactic and semantic expectations. In the following example:

Add two fixed haed dual prot disks to the order

"haed" can be corrected to: "had", "head", "hand", "heed", and "hated". Syntactic expectations rule two of these out, and domain semantics rule out two others, leaving "fixed head disk" as the appropriate correction.

Computationally, there are two ways to organize this. One can either match parser expectations against all possible corrections in the parser's current vocabulary, and rule out spurious corrections, or one can use the parser expectations to generate a set of possible words that can be recognized at the present point and use this as input to the spelling correction algorithm. The latter, when it can be done, is clearly the preferable choice on efficiency criteria. Generating all possible corrections with a 10,000 word dictionary, only to rule out all but one or two, is a computationally-intensive process, whereas exploiting fully-indexed parser expectations is far more constrained and less likely to generate ambiguity. For the example above, "prot" has 16 possible corrections in a small online dictionary. However, domain semantics allow only one word in the same position as "prot", so correction is most effective if the list of possible words is generated first.
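A minimal sketch of this expectation-driven strategy in Python follows; the function names and the use of difflib as the edit-distance matcher are our choices for illustration, not the paper's implementation:

```python
from difflib import get_close_matches

def expectation_based_correction(unknown, expected_words, dictionary):
    """Try to match the unknown token against only the words the parser
    expects at this position; fall back to the full dictionary."""
    hits = get_close_matches(unknown, expected_words, n=3, cutoff=0.7)
    if hits:
        return hits                     # few, expectation-filtered candidates
    return get_close_matches(unknown, dictionary, n=3, cutoff=0.7)

# e.g. with semantic expectations for "fixed ___ ... disks":
expected = ["head", "dual", "ported"]
vocabulary = ["had", "head", "hand", "heed", "hated", "dual", "ported"]
print(expectation_based_correction("haed", expected, vocabulary))  # ['head']
```

Because the candidate set is generated from the parser's expectations first, the ambiguity among "had", "hand", "heed" and "hated" never arises.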
Interaction of morphology and misspelling

Troublesome side-effects of spelling correction can arise with parsers that have an initial morphological analysis phase to reduce words to their root form. For instance, a parser might store only the root form 'directory' and reduce 'directories' to 'directory' plus a plural marker as part of its initial morphological phase. This process is triggered by failing to recognize the inflected form as a word that is present in the dictionary. It operates by applying standard morphological rules (e.g. -ies => +y) to derive a root from the inflected form. It is a simple matter to check first for inflected forms and then for misspellings. However, if a word is both inflected and misspelt, the expectation-based spelling corrector must be invoked from within the morphological decomposition routines on potentially misspelt roots or inflexions.

Incorrect segmentation

Input typed to a natural language interface is segmented into words by spaces and punctuation marks. Both kinds of segmenting markers, especially the second, can be omitted or inserted speciously. Incorrect segmentation at the lexical level results in two or more words being run together, as in "runtogether", or a single word being split up into two or more segments, as in "tog ether" or (inconveniently) "to get her", or combinations of these effects, as in "runto geth er". In all these cases, it is possible to deal with such errors by extending the spelling correction mechanism to recognize target words as initial segments of unknown words, and vice versa. Compound errors, however, present some difficulties. For instance, consider the following example, where we have both a missing and a spurious delimiter:

  Add two du alport disks to the order

After the standard recovery methods fail, one letter at a time would be stripped off the beginning of the second unrecognizable word ("alport") and added at the end of the first unrecognizable word ("du"). This process succeeds only if at some step both words are recognizable and enable the parse to continue. Migrating the delimiter (the space) backwards as well as forwards should also be attempted between a pair of unknown words, stopping if both words become recognizable. Of course, additional compounding of multiple lexical deviations (e.g., misspellings, run-on words and split words in the same segment) requires combinatorially inefficient recovery strategies. Strong parser expectations can reduce the impact of this problem, but at some point tradeoffs must be made between resilience and efficiency in compound error recovery.
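The delimiter-migration strategy might be organized as in the sketch below; the lexicon is a toy stand-in, and a real implementation would interleave this with the parser's expectations and the spelling corrector described earlier.

```python
# Sketch of delimiter migration between two unrecognizable tokens, as in
# "du alport": try every repositioning of the boundary (including its
# removal) until both halves, or the joined form, are in the lexicon.

def migrate_delimiter(w1: str, w2: str, lexicon: set):
    combined = w1 + w2
    for split in range(1, len(combined)):        # move the space both ways
        left, right = combined[:split], combined[split:]
        if left in lexicon and right in lexicon:
            return left, right
    if combined in lexicon:                      # the space was spurious
        return (combined,)
    return None                                  # fall back to other methods

lexicon = {"add", "two", "dual", "port", "ported", "disks"}
print(migrate_delimiter("du", "alport", lexicon))   # -> ('dual', 'port')
```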
Sentential Level Extragrammaticalities

We examine ungrammaticalities at the sentential level in five basic categories: missing words, spurious words or phrases, out of order constituents, agreement violations, and semantic constraint violations.

Missing constituents

It is not uncommon for the user of a natural language interface to omit words from his input. The degree of recovery possible from such ungrammaticalities is, of course, dependent on which words were left out. In practice, words whose contribution to the sentence is redundant are often omitted in an attempt to be cryptic or "computer-like" (as in "Copy new files my directory"). This suggests that techniques that fill in the structural gaps on semantic grounds are more likely to be successful than strategies which do not facilitate the application of domain semantics. A parsing process postulates a missing word error when its expectations (syntactic or semantic) of what should go at a certain place in the input utterance are violated. To discover that the problem is in fact a missing word, and to find the parse structure corresponding to the user's intention, the parsing process must "step back" and examine the context of the parse as a whole. It needs to ignore temporarily the unfulfilled expectations and their contribution to the overall structure while it tries to fulfil some of its other expectations through parsing other parts of the input and integrating them with already parsed constituents. More specifically, the parser needs to delimit the gap in the input utterance, correlate it with a gap in the parse structure (filling in that gap if it is uniquely determined), and realign the parsing mechanism as though the gap did not exist. Such a realignment can be done top-down by predicting the other constituents from the parse structure already obtained and attempting to find them in the input stream. Alternatively, realignment can be done bottom-up by recognizing as yet unparsed elements of the input, and either fitting them into an existing parse structure, or finding a larger structure to subsume both them and the existing structure. This latter approach is essential when the structuring words are missing or garbled.

Spurious and unrecognizable constituents

Words in an input utterance that are spurious to a parse can arise from a variety of sources:
• Legitimate phrases that the parser cannot deal with: it is not uncommon for the user of a restricted domain interface to say things that the interface cannot understand because of either conceptual or grammatical limitations. Sometimes, spurious verbosity or politeness is involved:

  Add if you would be so kind two fixed head and if possible dual ported disks to my order.

Or the user may offer irrelevant (to the system) explanations or justifications, as observed in preparatory experiments for the GUS system [4], e.g.

  I think I need more storage capacity, so add two fixed head dual ported disks to my order.

Some common phrases of politeness can be recognized explicitly, but in most cases, the only reasonable response is to ignore the unknown phrases, realign the parse on the recognizable input, and if a semantically and syntactically complete structure results, postulate that the ignored segment was indeed redundant. Isolating certifiable noise phrases in the same way as truly spurious input provides the advantage that they can then be recognized at any point in the input without having to clutter the parser's normal processing with expectations about where they might occur.
• Broken-off and restarted utterances: these occur when people start to say one thing, change their mind, and say another:

  Add I mean remove a disk from my order

Utterances in this form are more likely to occur in spoken input, but a similar effect can arise in typed input when a user forgets to hit the erase line or erase character key:

  Add remove a disk from my order
  Add a single ported dual ported disk from my order

Again the best tactic is to discard the broken-off fragment, but identifying and delineating the superseded fragment requires strategies such as the one discussed below.
• Unknown words filling a known grammatical role: sometimes the user will generate an incomprehensible phrase synonymous with a constituent the system is perfectly capable of understanding:

  Add a dual ported rotating mass storage device to my order

Here the system might not know that "rotating mass storage device" is synonymous with "disk". This phenomenon will result in missing words as well as spurious words.
If the system has a unique expectation for what should go in the gap, it should (with appropriate confirmation from the user) record the unknown words as synonymous with what it expected. If the system has a limited set of expectations for what might go in the gap, it could ask the user which one (if any) he meant and again record the synonym for future reference. In cases where there are no strong expectations, the system would ask for a paraphrase of the incomprehensible fragment. If this proved comprehensible, it would then postulate the synonymy relation, ask the user for confirmation, and again store the results for future reference. As for missing constituents, recovery from spurious interjections generally requires "stepping back" and examining the context of the parse as a whole. In this case, however, violations of the parser's expectations should result in skipping over the troublesome segments, and attempting to fulfill the expectations by parsing subsequent segments of the input. If this results in a complete parse, the skipped segment may well be spurious. On the other hand, if a gap in the parse structure remains, it can be correlated with the skipped segments to postulate possible constituents and synonymy relations as illustrated above. In the case of broken-off utterances, there are some more specific methods that allow the spurious part of the input to be detected; a sketch of the first two appears after this list:
• If a sequence of two constituents of identical syntactic and semantic type is found where only one is permissible, simply ignore the first constituent. Two main command verbs in sequence (e.g., in the "Add remove ..." example above) instantiate the identical sentential case header role in a case frame parser, enabling the former to be ignored. Similarly, two instantiations of the same prenominal case for the "disk" case frame would be recognized as mutually incompatible and the former again ignored. Other parsing strategies can be extended to recognize equivalent constituent repetition, but case frame instantiation seems uniquely well suited to it.
• Recognize explicit corrective phrases, and if the constituent to the right is of equivalent syntactic and semantic type to the constituent on the left, substitute the right constituent for the left constituent and continue the parse. This strategy recovers from utterances such as "Add I mean remove ...", if "I mean" is recognized as a corrective phrase.
• Select the minimal constituent for all substitutions. For instance, the most natural reading of:

  Add a high speed tape drive, that's disk drive, to the order

is to substitute "disk drive" for "tape drive", and not for the larger phrase "high speed tape drive", which also forms a legitimate constituent of like semantic and syntactic type.
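A minimal sketch of the first two strategies, with constituent typing stubbed out as a lookup table (a real case frame parser would supply the types); the corrective phrase list and examples are assumptions for illustration, and minimal-constituent selection is not modeled.

```python
# Toy recovery for broken-off utterances: drop the first of two same-type
# constituents, and substitute across an explicit corrective phrase.

CORRECTIVE = {"i mean", "that's", "rather"}          # assumed phrase list
TYPE = {"add": "command", "remove": "command",
        "tape drive": "device", "disk drive": "device"}

def repair(constituents: list) -> list:
    out = []
    i = 0
    while i < len(constituents):
        c = constituents[i]
        if c.lower() in CORRECTIVE and out and i + 1 < len(constituents):
            nxt = constituents[i + 1]
            # substitute right constituent for left if types match
            if TYPE.get(nxt) == TYPE.get(out[-1]):
                out[-1] = nxt
                i += 2
                continue
        # two same-type constituents in sequence: keep only the latter
        if out and TYPE.get(c) and TYPE.get(c) == TYPE.get(out[-1]):
            out[-1] = c
        else:
            out.append(c)
        i += 1
    return out

print(repair(["add", "remove", "a disk from my order"]))
# -> ['remove', 'a disk from my order']
print(repair(["add", "tape drive", "i mean", "disk drive"]))
# -> ['add', 'disk drive']
```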
Out of order constituents and fragmentary input

Sometimes, a user will employ non-standard word order. There are a variety of reasons why users violate expected constituent ordering relations, including unwillingness to change what has already been typed, especially when extensive retyping would be required:

  Two fixed head dual ported disk drives add to the order

or a belief that a computer will understand a clipped pseudo-military style more easily than standard usage:

  two disk drives fixed head dual ported to my order add

Similar myths about what computers understand best can lead to a very fragmented and cryptic style in which all function words are eliminated:

  Add disk drive order

instead of "add a disk drive to my order". These two phenomena, out of order constituents and fragmentary input, are grouped together because they are similar from the parsing point of view. The parser's problem in each case is to put together a group of recognizable sentence fragments without the normal syntactic glue of function words or position cues to indicate how the fragments should be combined. Since this syntactic information is not present, semantic considerations have to shoulder the burden alone. Hence, parsers which make it easy for semantic information to be brought to bear are at a considerable advantage. Both bottom-up and top-down recovery strategies are possible for detecting and recovering from missing and spurious constituents. In the bottom-up approach, all the fragments are recognized independently, and purely semantic constraints are used to assemble them into a single framework meaningful in terms of the domain of discourse. When the domain is restricted enough, the semantic constraints can be such that they always produce a unique result. This characteristic was exploited to good effect in the PLANES system [23], in which an input utterance was recognized as a sequence of fragments which were then assembled into a meaningful whole on the basis of semantic considerations alone (a sketch of this bottom-up assembly follows at the end of this subsection). A top-down approach to fragment recognition requires that the top-level or organizing concept in the utterance ("add" in the above examples) be located; if it can be, the predictions obtainable from it about what else might appear in the utterance can be used to guide and constrain the recognition of the other fragments. As a final point, note that in the case of out of order constituents, a parser relying on a strict left-to-right scan will have much greater difficulty than one with more directional freedom. In out of order input, there may be no meaningful set of left-to-right expectations, even allowing for gaps or extra constituents, that will fit the input. For instance, a case frame parser that scans for the head of a case frame, and subsequently attempts to instantiate the individual cases from surrounding input, is far more amenable to this type of recovery than one whose expectations are expressed as word order constraints.
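A toy rendering of the bottom-up assembly just described: fragments are recognized independently and placed into the case frame of "add" purely on semantic type, with no word-order expectations consulted. The frame and type tables are invented stand-ins for a domain model, not the PLANES implementation.

```python
# Bottom-up fragment assembly in the spirit described above: semantic
# slot constraints alone decide how recognized fragments combine.

ADD_FRAME = {                      # semantic type required by each case
    "object":      "device",
    "destination": "order",
}
SEM_TYPE = {"disk drive": "device", "my order": "order",
            "fixed head": "modifier", "dual ported": "modifier"}

def assemble(fragments: list) -> dict:
    """Fill the case frame from fragments using semantics alone."""
    frame = {"head": "add"}
    for frag in fragments:
        t = SEM_TYPE.get(frag)
        for case, wanted in ADD_FRAME.items():
            if t == wanted and case not in frame:
                frame[case] = frag
                break
        else:
            frame.setdefault("modifiers", []).append(frag)
    return frame

# Ordinary and scrambled input assemble identically, since no word-order
# expectations are consulted:
print(assemble(["disk drive", "my order"]))
print(assemble(["my order", "fixed head", "disk drive"]))
```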
Syntactic and semantic constraint violations

Input to a natural language system can violate both syntactic and semantic constraints. The most common form of syntactic constraint violation is agreement failure between subject and verb or determiner and head noun:

  Do the order include a disk drives?

Semantic constraint violations can occur because the user has conceptual problems:

  Add a floating head tape drive to the order

or because he is imprecise in his language, using a related object in place of the object he really means. For instance, if he is trying to decide on the amount of memory to include in an order, he might say:

  Can you connect a video disk drive to the two megabytes?

when what he really means is "... to the computer with two megabytes of memory?". These different kinds of constraint violation require quite different kinds of treatment. In general, the syntactic agreement violations can be ignored; cases in which agreement or lack of it distinguishes between two otherwise valid readings of an input are rare. However, one problem that sometimes arises is knowing whether a noun phrase is singular or plural when the determiner or quantifier disagrees with the head noun. Semantic constraint violations due to a user's conceptual problems are harder to deal with. Once detected, the only solution is to inform the user of his misconception and let him take it from there. The actual detection of the problem, however, can cause some difficulty for a parser relying heavily on semantic constraints to guide its parse. The constraint violation might cause it to assume there was some other problem, such as out of order or spurious constituents, and look for (and perhaps even find) some alternative and unintended way of putting all the pieces together. This is one case where syntactic considerations should come to the fore. Semantic constraint violations based on the mention of a related object instead of the entity actually intended by the user will manifest themselves in the same way as the semantic constraint violations based on misconceptions, but their processing needs to be quite different. The violation can be resolved if the system can look at objects related to the one the user mentioned and find one that satisfies the constraints. In the example above, this means going from the memory size to the machine that has that amount of memory. Clearly, the semantic distance and the type of relationship over which this kind of substitution is allowed need to be controlled fairly carefully; in a restricted domain everything is eventually related to everything else. Preference rules are needed to control the kind of substitutions that are allowed. In the above example, it might be that a part is allowed to substitute for a whole (metonymy), especially if, as we assumed, the part had been used earlier in the dialogue to distinguish between different instances of the whole.
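The related-object substitution might be organized as below; the domain model, the allowed relations, and the preference ordering are all assumptions for illustration.

```python
# Sketch of constraint-violation repair by substituting a related object:
# when a filler violates a case constraint, search objects related to it
# (under controlled preference rules such as part-for-whole) for one that
# satisfies the constraint. Toy domain model throughout.

RELATED = {  # (object, relation) pairs reachable from each object
    "two megabytes": [("computer-2mb", "part-of")],
}
ALLOWED_RELATIONS = ["part-of"]          # preference rules, in order
SEM_TYPE = {"computer-2mb": "machine", "two megabytes": "memory-size"}

def repair_filler(filler: str, required_type: str):
    if SEM_TYPE.get(filler) == required_type:
        return filler                    # no violation
    for rel in ALLOWED_RELATIONS:        # try preferred relations first
        for obj, r in RELATED.get(filler, []):
            if r == rel and SEM_TYPE.get(obj) == required_type:
                return obj               # metonymic substitution
    return None                          # genuine misconception: ask the user

# "connect a video disk drive to the two megabytes":
# the 'to' case of "connect" requires a machine, not a memory size.
print(repair_filler("two megabytes", "machine"))   # -> computer-2mb
```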
Support for recovery strategies by various parsing approaches

We now turn to the question of incorporating recovery strategies into some of the approaches to parsing found in the literature. We consider three basic classes: transition network approaches (including syntactic ATNs and network-based semantic grammars), pattern matching approaches, and approaches based on case frame instantiation. These classes cover the majority of current parsing systems for restricted domain languages. All three approaches are able to cope with lexical level problems satisfactorily. However, as we have seen, the application of semantic constraints often makes the correction of lexical problems more efficient and less prone to ambiguity. So parsers that employ semantic constraints (e.g. semantic grammars [20,5] or case frame instantiation [12,17]) are more effective in recovery at the lexical level than parsers whose only expectations are syntactic (e.g., purely syntactic ATNs [28]). At the sentential level, however, differences in the abilities of the three approaches to cope naturally with extragrammaticality are far more pronounced. We will examine each approach in turn from this point of view.

Recovery strategies and transition network parsers

Although attempts have been made to incorporate sentential level recovery strategies into network-based parsers, including both syntactically-based ATNs [21,24,25,29] and semantic grammar networks [20], the network paradigm itself is not well suited to the kinds of recovery strategies discussed in the preceding sections. These strategies generally require an interpretive ability to "step back" and take a broad view of the situation when a parser's expectations are violated, and this is very hard to do when using networks. The underlying problem is that a significant amount of state information during the parse is implicitly encoded by the position in the network; in the case of ATNs, other aspects of the state are contained in the settings of scattered registers. As demonstrated by the meta-rule approach to diagnosing parse failures described by Weischedel and Sondheimer [24], these and other difficulties elaborated below do not make recovery from extragrammaticality impossible. However, they do make it difficult and often impractical, since much of the implicitly encoded state must be made declarative and explicit to the recovery strategies. Often an ATN parse will continue beyond the point where the grammatical deviation, say an omitted word, occurred and reach a node in the network from which it can make no further progress (i.e., no arcs can be traversed). At this point, the parser cannot ascertain the source of the error by examining its internal state even if the state is accessible; the parser may have popped from embedded subnets, or followed a totally spurious sequence of arcs before blocking. If these problems can be overcome and the source of the error determined precisely, a major problem still remains: in order to recover, and parse input that does not accord with the grammar, while remaining true to the network formalism, the parser must modify the network dynamically and temporarily, and use the modified network to proceed through the present difficulties. Needless to say, this is at best a very complex process, one whose computational tractability is open to question in the most general case (though see [21]). It is perhaps not surprising that in one of the most effective recovery mechanisms developed for network-based parsing, the LIFER system's ellipsis handling routine [20], the key step operates completely outside the network formalism. As we have seen, semantic constraints are very important in recovering from many types of ungrammatical input, and these are by definition unavailable in a purely syntactic ATN parser. However, semantic information can be brought to bear on network-based parsing, either through the semantic grammar approach, in which joint semantic and syntactic categories are used directly in the ATN, or by allowing the tests on ATN arcs to depend on semantic criteria [2,3]. In the former technique, the appropriate semantic information for recovery can be applied only if the correct network node can be located, a sometimes difficult task as we have seen. In the latter technique, sometimes known as cascaded ATNs [27], the syntactic and semantic parts of the grammar are kept separate, thus giving the potential for a higher degree of interpretiveness in using the semantic information. However, semantic information represented in this fashion is generally only used to confirm or disconfirm parses arrived at on syntactic grounds and does not participate directly in the parsing process. A further disadvantage of the network approach for implementing flexible recovery strategies is that networks naturally operate in a top-down, left-to-right mode. As we have seen, a bottom-up capability is essential for many recovery strategies, and directional flexibility often enables easier and more efficient operation of the strategies. Of course, the top-down left-to-right mode of operation is a characteristic of the network interpreter, not of the network formalism itself, and an attempt [29] has been made to operate an ATN in an "island" mode, i.e. bottom-up, center-out.
This experiment was done in the context of a speech parser where the low-level recognition of many of the input words was uncertain, though the input as a whole was assumed to be grammatical. In that situation, there were clear advantages to starting with islands of relative lexical certainty, and working out from them. Problems, however, arise during leftward expansion from an island when it is necessary to run the network backwards. The admissibility of ATN transitions can depend on tests which access the values of registers which would have been set earlier when traversing the network forwards, but which cannot have been set when traversing backwards. This leads at best to an increase in non-determinism, and at worst to blocking the traversal completely.

Recovery strategies and pattern matching parsers

A pattern matching approach to parsing provides a better framework to recover from some sentential level deviations than a network-based approach. In particular, the definition of what constitutes a pattern match can be relaxed to allow for missing or spurious constituents. For missing constituents, patterns which match some, but not all, of their components can be counted temporarily as complete matches, and spurious constituents can be ignored so long as they are embedded in a pattern whose other components do match. In these cases, the patterns taken as a whole provide a basis on which to perform the kind of "stepping back" discussed above as being vital for flexible recovery. In addition, when pattern elements are defined semantically instead of lexically, as with Wilks' machine translation system [26], semantic constraints can easily be brought to bear on the recognition. However, dealing with out of order constituents is not so easy for a pattern-based approach, since constituent order is built into a pattern in a rigid way, similarly to a network. It is possible to accept any permutation of elements of a pattern as a match, but this provides so much flexibility that many spurious recognitions are likely to be obtained as well as the correct ones (see [16]). An underlying problem here is that there is no natural way to make distinctions about the relative importance or difference in role between one word and another. For instance, parsing many of our examples might have involved use of a pattern like:

  (<determiner> <disk-drive-attribute>* <disk-drive>)

which specifies a determiner, followed by zero or more attributes of a disk drive, followed by a phrase synonymous with "disk drive". So this pattern would recognize phrases like "a dual ported disk" or "the disk drive". Using the method of dealing with missing constituents mentioned above, "the" would constitute just as good a partial match for this pattern as "disk drive", a clearly undesirable result. The problem is that there is no way to tell the flexible matcher which components of the pattern are discriminating from the point of view of recognition and which are not. Another manifestation of the same problem is that different words and constituents may be easier or harder to recognize (e.g. prepositions are easier to recognize than the noun phrases they introduce), and thus may be more or less worthwhile to look for in an attempt to recover from a grammatical deviation. The underlying problem is the uniformity of the grammar representation and the method of applying it to the input.
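The following sketch makes the failure mode concrete for the pattern above: with uniform element weights, "the" scores as well as "disk drive", whereas per-element weights restore the missing notion of discriminating power. The weights are invented for illustration.

```python
# Relaxed matching of (<determiner> <disk-drive-attribute>* <disk-drive>):
# missing elements are simply skipped, and each matched element
# contributes its (optionally uniform) weight.

PATTERN = [("determiner", {"a", "the"}),
           ("attribute",  {"dual ported", "fixed head"}),
           ("head",       {"disk", "disk drive"})]

WEIGHT = {"determiner": 0.1, "attribute": 0.3, "head": 1.0}  # invented

def match_score(tokens: list, weighted: bool = True) -> float:
    """Fraction of total pattern weight covered by the tokens."""
    score, total = 0.0, 0.0
    for role, alternatives in PATTERN:
        w = WEIGHT[role] if weighted else 1.0
        total += w
        if any(t in alternatives for t in tokens):
            score += w
    return score / total

print(match_score(["the"], weighted=False))        # 0.33, as good as...
print(match_score(["disk drive"], weighted=False)) # 0.33, indistinguishable
print(match_score(["the"]))                        # 0.07
print(match_score(["disk drive"]))                 # 0.71
```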
Any uniformly represented grammar, whether based on patterns or networks, will have trouble representing and using the kinds of distinctions just outlined, and thus is poorly equipped to deal with many grammatical deviations in an efficient and discriminating manner. See [18] for a fuller discussion of this point.

Recovery strategies and case frame parsers

Recursive case frame instantiation appears to provide a better framework for recovery from missing words than approaches based on either network traversal or pattern matching. There are several reasons:
• Case frame instantiation is inherently a highly interpretive process. Case frames provide a high-level set of syntactic and semantic expectations that can be applied to the input in a variety of ways. They also provide an overall framework that can be used to realize the notion of "stepping back" to obtain a broad view of a parser's expectations.
• Case frame instantiation is a good vehicle for bringing semantic and pragmatic information to bear in order to help determine the appropriate parse in the absence of expected syntactic constituents (sketched after this list). If a preposition is omitted (as commonly happens when dealing with cryptic input from hunt-and-peck typists), the resulting sentence is syntactically anomalous. However, semantic case constraints can be sufficiently strong to attach each noun phrase to the correct structure. Suppose, for instance, the following sentence is typed to an electronic mail system interface:

  Send message John Smith

The missing determiner presents few problems, but the missing preposition can be more serious. Do we mean to send a message "to John Smith", "about John Smith", "with John Smith", "for John Smith", "from John Smith", "in John Smith", "of John Smith", etc.? The domain semantics of the case frame rule out the latter three possibilities and others like them as nonsensical. However, pragmatic knowledge is required to select "to John Smith" as the preferred reading (possibly subject to user confirmation); the destination case of the verb is required for the command to be effective, whereas the other cases, if present, are optional. This knowledge of the underlying action must be brought to bear at parse time to disambiguate the cryptic command. In the XCALIBUR system case frame encoding [10], pragmatic knowledge of this kind is represented as preference constraints (cf. [26]) on case fillers. This allows XCALIBUR to overcome problems created by the absence of expected case markers through the application of the appropriate domain knowledge.
• The propagation of semantic knowledge through a case frame (via attached procedures such as those of KRL [1] or SRL [30]) can fill in parser defaults and allow the internal completion of phrases such as "dual disks" to mean "dual ported disks". This process is also responsible for noticing when information is either missing or ambiguously determined, thereby initiating a focused clarificational dialogue [15].
• The representation of case frames is inherently non-uniform. Case fillers, case markers, and case headers are all represented separately, and this distinction can be used by the parser when interpretively instantiating the case frame. For instance, if a case frame accounts for the non-spurious part of an input containing spurious constituents, a recovery strategy can skip over the unrecognizable words by scanning for case markers as opposed to case fillers, which typically are much harder to find and parse.
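A toy sketch of marker-less case instantiation with preference constraints, in the spirit of the XCALIBUR example; the frame contents and the required/optional marking are our illustrative assumptions, not the XCALIBUR encoding.

```python
# Recover the missing case marker in "Send message John Smith" by
# preferring required cases when a filler arrives without a marker.

SEND_FRAME = {
    # case: (semantic type, markers, required?)
    "object":      ("message", {"", "a", "the"}, True),
    "destination": ("person",  {"to"},           True),
    "topic":       ("person",  {"about"},        False),
}
SEM_TYPE = {"message": "message", "john smith": "person"}

def instantiate(phrases: list) -> dict:
    """phrases: (marker, filler) pairs; marker may be '' if omitted."""
    frame = {}
    for marker, filler in phrases:
        t = SEM_TYPE.get(filler)
        best = None
        for case, (wanted, markers, required) in SEND_FRAME.items():
            if case in frame or t != wanted:
                continue
            if marker in markers:          # marker present and consistent
                best = case
                break
            if marker == "" and required:  # preference constraint: favor
                best = best or case        # cases required for the command
        if best:
            frame[best] = filler
    return frame

# "Send message John Smith", with both markers omitted:
print(instantiate([("", "message"), ("", "john smith")]))
# -> {'object': 'message', 'destination': 'john smith'}
```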
This ability to exploit non-uniformity goes a long way toward overcoming the problems with uniform parsing methods outlined in the previous section on pattern matching.

Dialogue Level Extragrammaticality

The underlying causes of many extragrammaticalities detected at the sentential level are rooted in dialogue phenomena. For instance, ellipses and other fragmentary inputs are patently ungrammatical at the sentential level, but can be understood in the context of a dialogue. Viewed at this more global level, ellipsis is not ungrammatical. Nevertheless, the same computational mechanisms required to recover from lexical and (especially) sentential problems are necessary to detect ellipsis and parse the fragments correctly for incorporation into a larger structure. In general, many dialogue phenomena can be classified pragmatically as extragrammaticalities. In addition to addressing dialogue level extragrammaticalities, any robust parsing system must engage the user in dialogue for cooperative resolution of parsing problems too difficult for automatic recovery. Interaction with the user is also necessary for a cooperative parser to confirm any assumptions it makes in interpreting extragrammatical input and to resolve any ambiguities it cannot overcome on its own. We have referred several times in our discussions to the principle of focused interaction, and stated that practical recovery dialogues should be focused as tightly as possible on the specific problem at hand. Because of space limitations, this paper does not discuss in detail the automated resolution of dialogue level extragrammaticalities or the use of dialogue to engage the user in cooperative resolution. The interested reader is referred to [8].

Concluding Remarks

Any practical natural language interface must be capable of dealing with a wide range of extragrammatical input. This paper has proposed a partial taxonomy of extragrammaticalities that arise in spontaneously generated input to a restricted-domain natural language interface and has presented recovery strategies for handling many of the categories. We also discussed how well three widely employed approaches to parsing (network-based parsing, pattern matching, and case frame instantiation) could support the recovery strategies, and concluded that case frame instantiation provided the best basis. The reader is referred to [8] for a more complete presentation, including a more complete taxonomy and additional recovery strategies, particularly at the dialogue level. Based on the set of recovery strategies we have examined and the problems that arise in trying to integrate them with techniques for parsing grammatical input, we offer the following set of desiderata for a parsing process that has to deal with extragrammatical input:
• The parsing process should be as interpretive as possible. We have seen several times the need for a parsing process to "stand back" and look at the broad picture of the set of expectations (or grammar) it is applying to the input when an ungrammaticality arises. The more interpretive a parser is, the better able it is to do this. A highly interpretive parser is also better able to apply its expectations to the input in more than one way, which may be crucial if the standard way does not work in the face of an ungrammaticality.
• The parsing process should make it easy to apply semantic information. As we have seen, semantic information is often very important in resolving ungrammaticalities.
• The parsing process should be able to take advantage of non-uniformity in language like that identified in Section 4.2. As we have seen, recovery can be much more efficient and reliable if a parser is able to make use of variations in ease of recognition or discriminating power between different constituents. This kind of "opportunism" can be built into recovery strategies.
• The parsing process should be capable of operating top-down as well as bottom-up. We have seen examples where both of these modes are essential.

We believe that case frame instantiation provides a better basis for parsing extragrammatical input than network-based parsing or pattern matching precisely because it satisfies these desiderata better than the other two approaches. We also believe that it is possible to do even better than case frame instantiation by using a multi-strategy approach in which case frame instantiation is just one member (albeit a very important one) of a whole array of parsing and recovery strategies. We argue this claim in detail in [8] and support it by discussion of three experimental parsers that in varying degrees adopt the multi-strategy approach.

References

1. Bobrow, D.G. and Winograd, T., "An Overview of KRL, a Knowledge Representation Language," Cognitive Science, Vol. 1, No. 1, 1977, pp. 3-46.
2. Bobrow, R.J., "The RUS System," BBN Report 3878, Bolt, Beranek, and Newman, 1978.
3. Bobrow, R.J. and Webber, B., "Knowledge Representation for Syntactic/Semantic Processing," Proc. National Conference of the American Association for Artificial Intelligence, Stanford University, August 1980.
4. Bobrow, D.G., Kaplan, R.M., Kay, M., Norman, D.A., Thompson, H., and Winograd, T., "GUS: a Frame-Driven Dialogue System," Artificial Intelligence, Vol. 8, 1977, pp. 155-173.
5. Brown, J.S. and Burton, R.R., "Multiple Representations of Knowledge for Tutorial Reasoning," in Representation and Understanding, Bobrow, D.G. and Collins, A., eds., Academic Press, New York, 1975, pp. 311-349.
6. Carbonell, J.G., "Towards a Self-Extending Parser," Proceedings of the 17th Meeting of the Association for Computational Linguistics, 1979, pp. 3-7.
7. Carbonell, J.G. and Hayes, P.J., "Robust Parsing Using Multiple Construction-Specific Strategies," in Natural Language Parsing Systems, L. Bolc, ed., Springer-Verlag, 1984.
8. Carbonell, J.G. and Hayes, P.J., "Recovery Strategies for Parsing Extragrammatical Language," Journal of Computational Linguistics, Vol. 10, 1984 (publication forthcoming).
9. Carbonell, J.G., Boggs, W.M., Mauldin, M.L., and Anick, P.G., "The XCALIBUR Project, A Natural Language Interface to Expert Systems," Proceedings of the Eighth International Joint Conference on Artificial Intelligence, 1983.
10. Carbonell, J.G., Boggs, W.M., Mauldin, M.L., and Anick, P.G., "XCALIBUR Progress Report #1: First Steps Towards an Integrated Natural Language Interface," Tech. report, Carnegie-Mellon University, Computer Science Department, 1983.
11. Carbonell, J.G., "Discourse Pragmatics in Task-Oriented Natural Language Interfaces," Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, 1983.
12. DeJong, G., Skimming Stories in Real-Time, PhD dissertation, Computer Science Dept., Yale University, 1979.
13. Durham, I., Lamb, D.D., and Saxe, J.B., "Spelling Correction in User Interfaces," Comm. ACM, Vol. 26, 1983.
14. Haas, N. and Hendrix, G.G., "Learning by Being Told: Acquiring Knowledge for Information Management," in Machine Learning, An Artificial Intelligence Approach, R.S. Michalski, J.G. Carbonell and T.M. Mitchell, eds., Tioga Press, Palo Alto, CA, 1983.
15. Hayes, P.J., "A Construction Specific Approach to Focused Interaction in Flexible Parsing," Proc. of 19th Annual Meeting of the Assoc. for Comput. Ling., June 1981, pp. 149-152.
16. Hayes, P.J. and Mouradian, G.V., "Flexible Parsing," American Journal of Computational Linguistics, Vol. 7, No. 4, 1981, pp. 232-241.
17. Hayes, P.J. and Carbonell, J.G., "Multi-Strategy Construction-Specific Parsing for Flexible Data Base Query and Update," Proc. Seventh Int. Jt. Conf. on Artificial Intelligence, Vancouver, August 1981, pp. 432-439.
18. Hayes, P.J. and Carbonell, J.G., "Multi-Strategy Parsing and its Role in Robust Man-Machine Communication," Tech. report CMU-CS-81-118, Carnegie-Mellon University, Computer Science Department, May 1981.
19. Hendrix, G.G., Sacerdoti, E.D., and Slocum, J., "Developing a Natural Language Interface to Complex Data," Tech. report, Artificial Intelligence Center, SRI International, 1976.
20. Hendrix, G.G., "Human Engineering for Applied Natural Language Processing," Proc. Fifth Int. Jt. Conf. on Artificial Intelligence, 1977, pp. 183-191.
21. Kwasny, S.C. and Sondheimer, N.K., "Relaxation Techniques for Parsing Grammatically Ill-Formed Input in Natural Language Understanding Systems," American Journal of Computational Linguistics, Vol. 7, No. 2, 1981, pp. 99-108.
22. Schank, R.C., Lebowitz, M., and Birnbaum, E., "An Integrated Understander," American Journal of Computational Linguistics, Vol. 6, No. 1, 1980, pp. 13-30.
23. Waltz, D.L., "An English Language Question Answering System for a Large Relational Data Base," Comm. ACM, Vol. 21, No. 7, 1978, pp. 526-539.
24. Weischedel, R.M. and Sondheimer, N.K., "Meta-Rules as a Basis for Processing Ill-formed Input," Computational Linguistics, Vol. 10, 1984.
25. Weischedel, R.M. and Black, J., "Responding to Potentially Unparseable Sentences," American Journal of Computational Linguistics, Vol. 6, 1980, pp. 97-109.
26. Wilks, Y.A., "Preference Semantics," in Formal Semantics of Natural Language, Keenan, ed., Cambridge University Press, 1975.
27. Woods, W.A., "Cascaded ATN Grammars," American Journal of Computational Linguistics, Vol. 6, No. 1, August 1980, pp. 1-12.
28. Woods, W.A., Kaplan, R.M., and Nash-Webber, B., "The Lunar Sciences Language System: Final Report," Tech. report 2378, Bolt, Beranek, and Newman, Inc., Cambridge, Mass., 1972.
29. Woods, W.A., Bates, M., Brown, G., Bruce, B., Cook, C., Klovstad, J., Makhoul, J., Nash-Webber, B., Schwartz, R., Wolf, J., and Zue, V., "Speech Understanding Systems: Final Technical Report," Tech. report 3438, Bolt, Beranek, and Newman, Inc., Cambridge, Mass., 1976.
30. Wright, K. and Fox, M., "The SRL Users Manual," Tech. report, Robotics Institute, Carnegie-Mellon University, 1983.
3,037,733
Active Sample Selection for Named Entity Transliteration
This paper introduces a new method for identifying named-entity (NE) transliterations within bilingual corpora. Current state-of-the-art approaches usually require annotated data and relevant linguistic knowledge which may not be available for all languages. We show how to effectively train an accurate transliteration classifier using very little data, obtained automatically. To perform this task, we introduce a new active sampling paradigm for guiding and adapting the sample selection process. We also investigate how to improve the classifier by identifying repeated patterns in the training data. We evaluated our approach using English, Russian and Hebrew corpora.
[ 669616, 8424232 ]
Active Sample Selection for Named Entity Transliteration June 2008 Dan Goldwasser goldwas1@uiuc.edu Department of Computer Science, University of Illinois, Urbana, IL 61801, USA Dan Roth danr@uiuc.edu Department of Computer Science, University of Illinois, Urbana, IL 61801, USA Active Sample Selection for Named Entity Transliteration. June 2008. Proceedings of ACL-08: HLT, Short Papers (Companion Volume), pages 53-56, Columbus, Ohio. This paper introduces a new method for identifying named-entity (NE) transliterations within bilingual corpora. Current state-of-the-art approaches usually require annotated data and relevant linguistic knowledge which may not be available for all languages. We show how to effectively train an accurate transliteration classifier using very little data, obtained automatically. To perform this task, we introduce a new active sampling paradigm for guiding and adapting the sample selection process. We also investigate how to improve the classifier by identifying repeated patterns in the training data. We evaluated our approach using English, Russian and Hebrew corpora.

Introduction

This paper presents a new approach for constructing a discriminative transliteration model. Our approach is fully automated and requires little knowledge of the source and target languages. Named entity (NE) transliteration is the process of transcribing a NE from a source language to a target language based on phonetic similarity between the entities. Figure 1 provides examples of NE transliterations in English, Russian and Hebrew. Identifying transliteration pairs is an important component in many linguistic applications such as machine translation and information retrieval, which require identifying out-of-vocabulary words. In our settings, we have access to source language NEs and the ability to label the data upon request. We introduce a new active sampling paradigm that aims to guide the learner toward informative samples, allowing learning from a small number of representative examples. After the data is obtained, it is analyzed to identify repeating patterns which can be used to focus the training process of the model. Previous works usually take a generative approach (Knight and Graehl, 1997). Other approaches exploit similarities in aligned bilingual corpora; for example, (Tao et al., 2006) combine two unsupervised methods, and (Klementiev and Roth, 2006) bootstrap with a classifier used interchangeably with an unsupervised temporal alignment method. Although these approaches alleviate the problem of obtaining annotated data, other resources are still required, such as a large aligned bilingual corpus. The idea of selectively sampling training samples has been widely discussed in machine learning theory (Seung et al., 1992) and has been applied successfully to several NLP applications (McCallum and Nigam, 1998). Unlike other approaches, our approach is based on minimizing the distance between the feature distribution of a comprehensive reference set and the sampled set.

Training a Transliteration Model

Our framework works in several stages, as summarized in Algorithm 1. First, a training set consisting of NE transliteration pairs (w_s, w_t) is automatically generated using an active sample selection scheme.
The sample selection process is guided by the Sufficient Spanning Features criterion (SSF), introduced in section 2.2, to identify informative samples in the source language. An oracle capable of pairing a NE in the source language with its counterpart in the target language is then used. Negative training samples are generated by reshuffling the terms in these pairs. Once the training data has been collected, the data is analyzed to identify repeating patterns, which are used to focus the training process by assigning weights to features corresponding to the observed patterns. Finally, a linear model is trained using a variation of the averaged perceptron (Freund and Schapire, 1998) algorithm. The remainder of this section provides details about these stages; the basic formulation of the transliteration model and the feature extraction scheme is described in section 2.1, in section 2.2 the selective sampling process is described, and finally section 2.3 explains how learning is focused by using feature weights.

Algorithm 1: Constructing a transliteration model.
  Input: bilingual comparable corpus (S, T), set of named entities NE_S from S,
         reference corpus R_S, transliteration oracle O, training corpora D = D_S, D_T
  Output: transliteration model M
  repeat
    select a set C ⊆ NE_S randomly
    w_s = argmin_{w ∈ C} distance(R, D_S ∪ {w})
    D = D ∪ {(w_s, O(w_s))}
  until distance(R, D_S ∪ {w_s}) ≥ distance(R, D_S)
  Determine feature activation strengths:
    define W: f → ℝ such that for each feature f = (f_s, f_t),
    W(f) = count(f_s, f_t)/count(f_s) × count(f_s, f_t)/count(f_t)
  Use D to train M

Transliteration Model

Our transliteration model takes a discriminative approach; the classifier is presented with a word pair (w_s, w_t), where w_s is a named entity, and it is asked to determine whether w_t is a transliteration of the NE in the target language. We use a linear classifier trained with a regularized perceptron update rule (Grove and Roth, 2001) as implemented in SNoW (Roth, 1998). The classifier's confidence score is used for ranking of positively tagged transliteration candidates. Our initial feature extraction scheme follows the one presented in (Klementiev and Roth, 2006), in which the feature space consists of n-gram pairs from the two languages. Given a sample, each word is decomposed into a set of substrings of up to a given length (including the empty string). Features are generated by pairing substrings from the two sets whose relative positions in the original words differ by one or less places: first each word is decomposed into a set of substrings, then substrings from the two sets are coupled to complete the pair representation. Figure 2 (feature extraction process) depicts this process.
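The following sketch is one possible reading of this feature extraction scheme; the exact handling of relative position (here, start indices differing by at most one) and of the empty string is our interpretation, not code from the paper.

```python
# Sketch of n-gram pair feature extraction: decompose each word into
# substrings of length 1..2 (plus the empty string), and pair substrings
# across the two words when their positions differ by one place or less.

def substrings(word, max_len=2):
    """All substrings of length 1..max_len, with their start index."""
    subs = []
    for i in range(len(word)):
        for n in range(1, max_len + 1):
            if i + n <= len(word):
                subs.append((word[i:i + n], i))
    return subs

def features(w_s, w_t):
    # the empty string pairs with every substring of the other word
    feats = {(s, "") for s, _ in substrings(w_s)}
    feats |= {("", t) for t, _ in substrings(w_t)}
    for s, i in substrings(w_s):
        for t, j in substrings(w_t):
            if abs(i - j) <= 1:   # relative positions differ by <= 1
                feats.add((s, t))
    return feats

# e.g. a (romanized) source/target pair:
print(sorted(features("il", "el")))
```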
Guiding the Sampling Process with SSF

The initial step in our framework is to generate a training set of transliteration pairs; this is done by pairing highly informative source language candidate NEs with target language counterparts. We developed a criterion for adding new samples, Sufficient Spanning Features (SSF), which quantifies the sampled set's ability to span the feature space. This is done by evaluating the L1 distance between the frequency distributions of source language word fragments in the current sampled set and in a comprehensive set of source language NEs, serving as reference. We argue that since the features used for learning are n-gram features, once these two distributions are close enough, our example space provides a good and concise characterization of all named entities we will ever need to consider. Special care should be given to choosing an appropriate reference; as a general guideline, the reference set should be representative of the testing data. We collected a set R, consisting of 50,000 NEs, by crawling through Wikipedia's articles and using an English NER system available at http://L2R.cs.uiuc.edu/~cogcomp. The frequency distribution was generated over all character-level bi-grams appearing in the text, as bi-grams best correlate with the way features are extracted. Given a reference text R, the n-gram distribution of R can be defined as D_R(ng_i) = count(ng_i) / Σ_j count(ng_j), where ng_i is an n-gram in R. Given a sample set S, we measure the L1 distance between the distributions:

  distance(R, S) = Σ_ng |D_R(ng) − D_S(ng)|

Samples decreasing the distance between the distributions were added to the training data. Given a set C of candidates for annotation, a sample w_s ∈ C was added to the training set if

  w_s = argmin_{w ∈ C} distance(R, D_S ∪ {w}).

A sample set is said to have SSF if the distance remains constant as more samples are added.
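A compact sketch of the SSF-guided selection loop, mirroring Algorithm 1 except that, for brevity, it scans all remaining candidates rather than a randomly drawn subset C; corpus collection and the oracle are omitted.

```python
# Greedy SSF selection: add the candidate that most reduces the L1
# distance between the reference and sample bigram distributions,
# stopping once the distance no longer decreases.

from collections import Counter

def bigram_dist(words):
    counts = Counter(b for w in words for b in zip(w, w[1:]))
    total = sum(counts.values())
    return {b: c / total for b, c in counts.items()} if total else {}

def l1_distance(ref, sample):
    keys = ref.keys() | sample.keys()
    return sum(abs(ref.get(k, 0.0) - sample.get(k, 0.0)) for k in keys)

def select_samples(reference, candidates):
    ref = bigram_dist(reference)
    chosen, dist = [], float("inf")
    candidates = list(candidates)
    while candidates:
        best = min(candidates,
                   key=lambda w: l1_distance(ref, bigram_dist(chosen + [w])))
        new_dist = l1_distance(ref, bigram_dist(chosen + [best]))
        if new_dist >= dist:      # SSF reached: distance stopped decreasing
            break
        chosen.append(best)
        candidates.remove(best)
        dist = new_dist
    return chosen
```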
Transliteration Oracle Implementation

The transliteration oracle is essentially a mapping between the named entities, i.e. given an NE in the source language it provides the matching NE in the target language. An automatic oracle was implemented by crawling through Wikipedia topic-aligned document pairs. Given a pair of topic-aligned documents in the two languages, the topic can be identified either by identifying the top ranking terms or by simply identifying the title of the documents. By choosing documents in Wikipedia's biography category we ensured that the topic of the documents is a person NE.

Training the transliteration model

The feature extraction scheme we use generates features by coupling substrings from the two terms. Ideally, given a positive sample, it is desirable that paired substrings encode phonetic similarity or a distinctive context in which the two scripts correlate. Given enough positive samples, such features will appear with distinctive frequency. Taking this idea further, these features were recognized by measuring the co-occurrence frequency of substrings of up to two characters in both languages. Each feature f = (f_s, f_t), composed of two substrings taken from English and Hebrew words, was associated with a weight

  W(f) = count(f_s, f_t)/count(f_s) × count(f_s, f_t)/count(f_t),

where count(f_s, f_t) is the number of occurrences of that feature in the positive sample set, and count(f_L) is the number of occurrences of an individual substring in any of the features extracted from positive samples in the training set. The result of this process is a weight table in which, as we empirically tested, the highest ranking weights were assigned to features that preserve the phonetic correlation between the two languages. To improve the classifier's learning rate, the learning process is focused around these features. Given a sample, the learner is presented with a real-valued feature vector instead of a binary vector, in which each value indicates both that the feature is active and its activation strength, i.e. the weight assigned to it.

Evaluation

We evaluated our approach in two settings. First, we compared our system to a baseline system described in (Klementiev and Roth, 2006). Given a bilingual corpus with the English NEs annotated, the system had to discover the NEs in the target language text. We used the English-Russian news corpus used in the baseline system. NEs were grouped into equivalence classes, each containing different variations of the same NE. We randomly sampled 500 documents from the corpus. Transliteration pairs were mapped into 97 equivalence classes, identified by an expert. In a second experiment, different learning parameters such as selective sampling efficiency and feature weights were checked. 300 English-Russian and English-Hebrew NE pairs were used; negative samples were generated by coupling every English NE with all other target language NEs. Table 1 presents the key results of these experiments, compared with the baseline system.

3.1 Using SSF directed sampling

Table 2 describes the effect of directed sampling in the English-Russian news corpus NE discovery task (Table 2: comparison of correctly identified English-Russian transliteration pairs in the news corpus; the top one and top two results columns give the proportion of correctly identified pairs ranked in the first and top two places, respectively). Results show that models trained using selective sampling can outperform models trained with more than twice the amount of randomly sampled data.

3.2 Training using feature weights

Table 3 describes the effect of training the model with weights (Table 3: the proportion of correctly identified transliteration pairs with and without using weights and training; the top one and top five results columns give the proportion of correctly identified pairs ranked in the first place and in any of the top five places, respectively). The training set consisted of 150 samples extracted using SSF directed sampling. Three variations were tested: training without feature weights, using the feature weights as the initial network weights without training, and training with weights. The results clearly show that using weights for training improves the classifier's performance for both Russian and Hebrew. It can also be observed that in many cases the correct pair was ranked in any of the top five places, demonstrating that using feature weights improves performance for both target languages.

Conclusions and future work

In this paper we presented a new approach for constructing a transliteration model automatically and efficiently, by selectively extracting transliteration samples covering relevant parts of the feature space and focusing the learning process on these features. We show that our approach can outperform systems requiring supervision, manual intervention and a considerable amount of data. We propose a new measure for selective sample selection which can be used independently. We currently investigate applying it in other domains with a potentially larger feature space than used in this work. Another aspect investigated is using our selective sampling to adapt the learning process to data originating from different sources; using a reference set representative of the testing data, training samples originating from a different source can be biased towards the testing data.

Acknowledgments

Partly supported by NSF grant ITR IIS-0428472 and DARPA funding under the Bootstrap Learning Program.

(Figure 1: NEs in English, Russian and Hebrew.)

References

Y. Freund and R. E. Schapire. 1998. Large margin classification using the perceptron algorithm. In COLT.
A. Grove and D. Roth. 2001. Linear concepts and hidden variables. ML, 42.
A. Klementiev and D. Roth. 2006. Weakly supervised named entity transliteration and discovery from multilingual comparable corpora. In ACL.
K. Knight and J. Graehl. 1997. Machine transliteration. In EACL.
A. McCallum and K. Nigam. 1998. Employing EM in pool-based active learning for text classification. In ICML.
D. Roth. 1998. Learning to resolve natural language ambiguities: A unified approach. In AAAI.
H. S. Seung, M. Opper, and H. Sompolinsky. 1992. Query by committee. In COLT.
T. Tao, S. Yoon, A. Fister, R. Sproat, and C. Zhai. 2006. Unsupervised named entity transliteration using temporal and phonetic correlation. In EMNLP.
6,156,475
Large scale testing of a descriptive phrase finder
This paper describes an evaluation of an existing technique that locates sentences containing descriptions of a query word or phrase. The experiments expand on previous tests by exploring the effectiveness of the system when searching a much larger document collection. The results showed the system working significantly better than when searching over smaller collections. The improvement was such that a more stringent definition of what constituted a correct description was devised to better measure effectiveness. The results also pointed to potentially new forms of evidence that might be used in improving the location process.
[ 8155707, 9371149 ]
Large scale testing of a descriptive phrase finder. Hideo Joho (h.joho@sheffield.ac.uk), Ying Ki Liu, and Mark Sanderson (m.sanderson@sheffield.ac.uk), Department of Information Studies, University of Sheffield, Western Bank, Sheffield S10 2TN, UK.

Keywords: information retrieval, descriptive phrases, WWW

This paper describes an evaluation of an existing technique that locates sentences containing descriptions of a query word or phrase. The experiments expand on previous tests by exploring the effectiveness of the system when searching a much larger document collection. The results showed the system working significantly better than when searching over smaller collections. The improvement was such that a more stringent definition of what constituted a correct description was devised to better measure effectiveness. The results also pointed to potentially new forms of evidence that might be used in improving the location process.

INTRODUCTION

Retrieving descriptions of words and phrases that are not often found in dictionaries has potential benefits for a number of fields. The Descriptive Phrase Finder (DPF) is a system that retrieves descriptions of a query term from free text. The system only uses simple pattern matching to detect a description, and ranks the sentences that hold the descriptive phrases based on within-document and cross-document term occurrence information. The system does not attempt to extract descriptions from text; it simply locates sentences that are hopefully relevant to a user. It is assumed that users are able to read a sentence and locate any description within it. The advantage of using such an approach is that the DPF is much simplified and does not require parsing to find the exact location of the phrase. Due to its simplicity, it achieves a level of domain independence. The DPF was implemented and succeeded in retrieving sentences holding descriptive phrases (DPs) of a wide range of proper nouns. Initial testing on a collection of LA Times articles from the TREC collection showed that 90% of the queries had at least one correct DP in the top 5 ranked sentences and 94% in the top 10 [3]. It was shown that the effectiveness of the system was in part due to the large amount of free text being searched. What was not shown by the experiment was whether performance could be further improved by searching an even larger text. Consequently, a larger scale experiment was conducted, searching for phrases on the World Wide Web (WWW) using the output of a commercial Web search engine to locate candidate documents that were then processed locally by the DPF. In addition to increasing the number of documents searched, more queries were tested and different definitions of relevance were tried. The rest of this short paper explains the system, detailed in the next section, and shows the results of the expanded experiment, followed by pointers to future work.

THE SYSTEM

The Web-based DPF was composed of two parts: a front end to an existing Web search engine, which fetched documents; and the system that located sentences holding descriptive phrases.
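To make the two-part pipeline concrete, the sketch below shows one way the front end's fetch-and-filter step could look. It is only a minimal illustration under our own assumptions: the regular-expression sentence splitter stands in for the locally developed splitter mentioned in the next paragraph, and document fetching itself (the search-engine call) is not reproduced.

```python
import re

def naive_sentence_split(text):
    # Crude stand-in for the locally developed splitter used by the authors:
    # break on sentence-final punctuation followed by whitespace.
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]

def candidate_sentences(documents, query):
    """Keep only sentences that mention the query term, remembering
    each sentence's position so later ranking can prefer early sentences."""
    query_re = re.compile(re.escape(query), re.IGNORECASE)
    candidates = []
    for doc_rank, doc_text in enumerate(documents):
        for position, sentence in enumerate(naive_sentence_split(doc_text)):
            if query_re.search(sentence):
                candidates.append((doc_rank, position, sentence))
    return candidates
```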
The Web front end simply routed queries to a Web search engine (Google), and the text of the top 600 documents returned by the engine was fetched, split into sentences (using a locally developed sentence splitter), and those sentences holding the query term were passed on to the DPF. It ranked sentences on a score calculated from multiple sources of evidence. A detailed description of the DPF is found in [3]. The primary clue to there being a descriptive phrase in a sentence was the presence of a key phrase within it. An example key phrase was "such as", which may be found in the sentence: "He used several search engines such as AltaVista, HotBot and WebTop to compare the performance". If such a sentence were returned to a user who entered the query "WebTop", they would determine it was a search engine. Specifically, the DPF searches for a key phrase in proximity to a query noun (qn) to locate a descriptive phrase (dp), e.g.

• ... dp such as qn ...

Other key phrases used, some suggested by [2], were:

• ... such dp as qn ...
• ... qn (and | or) other dp ...
• ... dp (especially | including) qn ...
• ... qn (dp) ...
• ... qn is a dp ...
• ... qn, (a | the) dp, ...

The phrases form the key part of the DPF as they identify well the sentences likely to contain descriptions of qn. While the number of times a particular qn appears in a sentence with a key phrase is small, by searching a large corpus like the Web, the chances of finding a few (accurately identified) descriptions of qn in the form required are high. Based on results from a testing phase, certain key phrases were found to be more accurate at locating a descriptive phrase than others. Consequently, when ranking matching sentences, different scores were assigned depending on the accuracy of the key phrase found within. Since unfamiliar words tend to be explained or rephrased in the early part of a document, sentence position was also a factor in the rank score, with earlier sentences given preference. Finally, cross-document information was taken into account. Across all the matching sentences for a particular query, the occurrence of all the terms within the sentences was noted. It was anticipated that terms occurring more frequently within the set of sentences were likely to belong to descriptions. Consequently, sentences holding a high number of commonly occurring words were given further preference in the ranking. The last two pieces of information not only improved the accuracy of ranking, but also enabled the system to produce reasonable results when no key phrases were matched. A training phase, in which the optimum balance between the sources of information was determined, was run on existing training data created from the LA Times corpus described in [3]; a sketch of the resulting scoring scheme is shown below. It may be reasonable to question why such a simple approach to extracting information from free-text sources is taken when more principled NLP-based techniques are well established (e.g. [4], [5]). There are a number of reasons:

• Any simple approach is likely to be much faster than one that requires operations such as parsing.
• We believe that the use of simple but accurate methods searching over very large corpora provides a new means of determining lexical relations from corpora that is worthy of further exploration.
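The scoring scheme just described combines key-phrase matches, sentence position, and cross-document term statistics. The sketch below is a hedged illustration of that combination: the patterns follow the list above, but the per-pattern weights and the balance between the evidence sources are placeholder values of ours, not the tuned values from the paper's training phase.

```python
import re
from collections import Counter

# Hypothetical per-pattern weights: the paper assigns higher scores to key
# phrases found to be more accurate during testing, but the actual values
# are not reported, so these numbers are placeholders.
KEY_PATTERNS = [
    (r'\bsuch as\b[^.]*\b{qn}\b', 3.0),           # ... dp such as qn ...
    (r'\bsuch\b[^.]*\bas\b[^.]*\b{qn}\b', 2.5),   # ... such dp as qn ...
    (r'\b{qn}\b\s+(and|or)\s+other\b', 2.5),      # ... qn (and|or) other dp ...
    (r'\b(especially|including)\s+{qn}\b', 2.0),  # ... dp (especially|including) qn
    (r'\b{qn}\b\s*\(', 1.5),                      # ... qn (dp) ...
    (r'\b{qn}\b\s+is\s+an?\b', 2.0),              # ... qn is a dp ...
    (r'\b{qn}\b,\s*(a|the)\b', 1.5),              # ... qn, (a|the) dp, ...
]

def score_sentences(candidates, query):
    # Cross-document evidence: terms frequent across all candidate
    # sentences are assumed to belong to descriptions.
    term_counts = Counter(w.lower() for _, _, s in candidates
                          for w in re.findall(r'[a-z]+', s, re.I))
    scored = []
    for doc_rank, position, sentence in candidates:
        score = 0.0
        for pattern, weight in KEY_PATTERNS:
            if re.search(pattern.format(qn=re.escape(query)), sentence, re.I):
                score += weight
        score += 1.0 / (1 + position)              # prefer early sentences
        words = re.findall(r'[a-z]+', sentence, re.I)
        if words:                                  # common-word evidence
            score += sum(term_counts[w.lower()] for w in words) / (100.0 * len(words))
        scored.append((score, sentence))
    return sorted(scored, reverse=True)
```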
INITIAL STUDY

A pilot study was conducted, searching ten queries using the top hundred documents returned by Google. Of the ten queries, six had the best description located in the top two ranked sentences, and two more had a good description in the top two. For all queries, a sentence holding a descriptive phrase was returned in the top five ranked sentences.

DEFINING RELEVANCE

In this and the previous evaluation described in [3], relevance was defined as a sentence that told the user anything about the query term: a liberal view of relevance (described here as binary relevance). The results from the pilot, under this interpretation, showed the system performed well. Consequently, a more stringent form of relevance was devised. A sample answer for each query was solicited from users: for example, "the Prime Minister of Great Britain" for Tony Blair. These key answers were taken as the criterion for highly relevant descriptive phrases. Sentences ranked by the system were then compared to the key answer. Mere correctness of a DP was not enough to meet this criterion: only a DP that described a query as well as the key answer was regarded as relevant. To illustrate, the sentence "Tony Blair is the current Prime Minister of the United Kingdom." was regarded as relevant, but "Tony Blair is a political leader" was not.

THE MAIN EXPERIMENT

A total of 146 queries were tested in the main experiment: 50 were evaluated against key answers and 96 using binary evaluation. In the binary test, the DPF returned a relevant (descriptive) sentence in the top twenty sentences for all 96 queries. On average, sixteen of the sentences returned were relevant to each query; the minimum number of relevant sentences was six and the maximum was twenty. Across the 96 queries, at least one relevant sentence was found in the top five for every tested query. This is a significant improvement over the previously reported experimental results, where 90% of queries were answered in the top five. Using the more stringent key-answer-based relevance, the system succeeded in retrieving at least one relevant sentence in the top five for 66% of the queries, at least one in the top ten for 82%, and one in the top twenty for 88%. These results show that the DPF searching the Web (1 billion documents) works dramatically better than in the previous experiment using the LA Times (100,000 documents). As was shown in previous work, the size of the collection impacts the effectiveness of the system: by searching a larger collection, there is a better chance of locating a relevant descriptive phrase in the format of one of the searched-for key phrases. However, in the previous work there appeared to be an upper bound on the accuracy of the descriptive phrases alone. When searching a much larger collection, it is speculated that the cross-document term occurrence statistics contributed significantly to improving the effectiveness of the system.

CONCLUSION

An existing descriptive phrase system was adapted to work with a Web search engine to locate phrases describing query words. The system was found to be highly effective at locating good descriptions: finding at least one high-quality descriptive phrase in the top 10 returned sentences for 82% of test queries.

FUTURE WORK

We plan to undertake a number of further experiments, examining through tests the ability of people to locate descriptions within the retrieved sentences. In addition, it was notable that the results of the full experiment were not as good as those from the pilot study. One difference between the two tests was the number of web documents examined: 100 top-ranked documents in the pilot; 600 for the expanded experiment.
Given that a search engine generally retrieves more relevant documents at higher ranks, there is likely to be more noise lower down. It is also significant that the search engine used was Google, which uses the PageRank authority measure [1] to enhance its ranking. Therefore, we speculate that use of an authority measure could further improve the quality of our DPF. This will be investigated in future work.

[1] S. Brin and L. Page. The Anatomy of a Large-Scale Hypertextual Web Search Engine. In Proceedings of the 7th International WWW Conference, April 1998, Brisbane, Australia.
[2] M. A. Hearst. Automated Discovery of WordNet Relations. In WordNet: An Electronic Lexical Database, C. Fellbaum (ed.), MIT Press, 131-151, 1998.
[3] H. Joho and M. Sanderson. Retrieving Descriptive Phrases from Large Amounts of Free Text. In Proceedings of the 9th ACM CIKM Conference, November 2000, McLean, VA, 180-186.
[4] D. R. Radev and K. R. McKeown. Building a Generation Knowledge Source using Internet-Accessible Newswire. In Proceedings of the 5th ANLP Conference, March 1997, Washington, D.C., 221-228.
[5] R. Srihari and W. Li. A Question Answering System Supported by Information Extraction. In Proceedings of the 8th ANLP Conference, April-May 2000, Seattle, Washington.
17,617,272
Hidden Softmax Sequence Model for Dialogue Structure Analysis
We propose a new unsupervised learning model, the hidden softmax sequence model (HSSM), based on the Boltzmann machine, for dialogue structure analysis. The model employs three types of units in the hidden layer to discover latent dialogue structures: softmax units, which represent the latent states of utterances; binary units, which represent latent topics specific to dialogues; and a binary unit that represents the global general topic shared across the whole dialogue corpus. In addition, the model contains extra connections between adjacent hidden softmax units to formulate the dependency between latent states. Two different kinds of real-world dialogue corpora, Twitter-Post and AirTicketBooking, are utilized for extensive comparative experiments, and the results illustrate that the proposed model outperforms state-of-the-art popular approaches.
[ 2717698, 13556518, 11818253, 507598, 5679344, 1941969, 2175582 ]
Hidden Softmax Sequence Model for Dialogue Structure Analysis. Zhiyang He (Department of Electronic Engineering, Tsinghua University, Beijing, China), Xien Liu (xeliu@mail.tsinghua.edu.cn, Tsinghua-iFlytek Joint Laboratory for Speech Technology, Beijing, China), Ping Lv (Tsinghua-iFlytek Joint Laboratory for Speech Technology, Beijing, China), and Ji Wu (Department of Electronic Engineering, Tsinghua University, Beijing, China). In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Berlin, Germany, August 7-12, 2016.

We propose a new unsupervised learning model, the hidden softmax sequence model (HSSM), based on the Boltzmann machine, for dialogue structure analysis. The model employs three types of units in the hidden layer to discover latent dialogue structures: softmax units, which represent the latent states of utterances; binary units, which represent latent topics specific to dialogues; and a binary unit that represents the global general topic shared across the whole dialogue corpus. In addition, the model contains extra connections between adjacent hidden softmax units to formulate the dependency between latent states. Two different kinds of real-world dialogue corpora, Twitter-Post and AirTicketBooking, are utilized for extensive comparative experiments, and the results illustrate that the proposed model outperforms state-of-the-art popular approaches.

Introduction

Dialogue structure analysis is an important and fundamental task in the natural language processing domain. The technology provides essential clues for solving real-world problems, such as producing dialogue summaries (Murray et al., 2006; Liu et al., 2010), controlling conversational agents (Wilks, 2006), and designing interactive dialogue systems (Young, 2006; Allen et al., 2007). The study of modeling dialogues always rests on an underlying assumption: for each dialogue there exists a unique latent structure (the dialogue structure), which consists of a series of latent states (also called dialogue acts or speech acts in some past work; in this paper, for simplicity, we only use the term "latent state" to describe the sequential dialogue structure). Some past work relies mainly on supervised or semi-supervised learning, which always involves extensive human effort to manually construct a latent state inventory and to label training samples. Cohen et al. (2004) developed an inventory of latent states specific to e-mail in an office domain by inspecting a large corpus of e-mail. Jeong et al. (2009) employed semi-supervised learning to transfer latent states from labeled speech corpora to Internet media and e-mail. Involving extensive human effort constrains scaling the training sample size (which is essential to supervised learning) and the application domains. In recent years, there has been some work on modeling dialogues with unsupervised learning methods which operate only on unlabeled observed data. Crook et al. (2009) employed Dirichlet process mixture clustering models to recognize latent states for each utterance in dialogues from a travel-planning domain, but they did not inspect dialogues' sequential structure. Chotimongkol (2008) proposed a hidden Markov model (HMM) based dialogue analysis model to study the structures of task-oriented conversations from in-domain dialogue corpora. More recently, Ritter et al. (2010) extended the HMM based conversation model by introducing additional word sources for the topic learning process.
Zhai et al. (2014) assumed that words in an utterance are emitted from topic models under an HMM framework, with topics shared across all latent states. All these dialogue structure analysis models are directed generative models, in which HMMs, language models and topic models are combined. In this study, we develop a Boltzmann machine based undirected generative model for dialogue structure analysis. For document modeling with undirected generative models, Hinton and Salakhutdinov (2009) proposed a general framework, the replicated softmax model (RSM), for topic modeling based on the restricted Boltzmann machine (RBM). That model focuses on document-level topic analysis and cannot be applied to structure analysis. We propose a hidden softmax sequence model (HSSM) for dialogue modeling and structure analysis. HSSM is a special two-layer Boltzmann machine. The visible layer contains softmax units used to model the words in a dialogue, the same as the visible layer in RSM (Hinton and Salakhutdinov, 2009). The hidden layer, however, has a completely different design. There are three kinds of hidden units: softmax hidden units, utilized for representing the latent states of dialogues; binary units used for representing dialogue specific topics; and a special binary unit used for representing the general topic of the dialogue corpus. Moreover, unlike RSM, whose hidden binary units are conditionally independent given the visible units, HSSM has extra connections utilized to formulate the dependency between adjacent softmax units in the hidden layer, i.e., between the latent states of two adjacent utterances. Therefore, HSSM can be considered a special Boltzmann machine. The remainder of this paper is organized as follows. Section 2 introduces the two real-world dialogue corpora utilized in our experiments. Section 3 describes the proposed hidden softmax sequence model. Experimental results and discussions are presented in Section 4. Finally, Section 5 presents our conclusions.

Data Set

Two different datasets are utilized to test the effectiveness of our proposed model: a corpus of post conversations drawn from Twitter (Twitter-Post), and a corpus of task-oriented human-human dialogues in the airline ticket booking domain (AirTicketBooking).

Twitter-Post

Conversations on Twitter are carried out by replying or responding to specific posts with short 140-character messages. The post length restriction makes Twitter keep more chat-like interactions than blog posts. The style of writing used on Twitter is widely varied, highly ungrammatical, and often contains spelling errors. For example, the terms "be4", "b4", and "bef4" often appear in Twitter posts to represent the word "before". We collected about 900,000 raw Twitter dialogue sessions in total. The majority of conversation sessions are very short, and the frequencies of conversation session lengths follow a power law relationship, as described in (Ritter et al., 2010). For simplicity, in the data preprocessing stage non-English sentences were dropped, and non-English characters, punctuation marks, and some non-meaning tokens (such as "&") were also filtered from the dialogues. We filtered out short Twitter dialogue sessions and randomly sampled 5,000 dialogues (with the number of utterances per dialogue ranging from 5 to 25) to build the Twitter-Post dataset.
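As a rough illustration of the preprocessing just described, the snippet below filters a raw utterance and a session the way the text outlines. The concrete regular expressions and thresholds are our own guesses, since the paper does not specify its exact filters.

```python
import re

def clean_twitter_utterance(text):
    """Approximate the described filtering: drop non-meaning tokens
    (such as '&'), non-English characters, and punctuation marks."""
    text = re.sub(r'&\w+;?|&', ' ', text)         # tokens like '&' and HTML entities
    text = re.sub(r'[^A-Za-z0-9\s]', ' ', text)   # non-English characters, punctuation
    return re.sub(r'\s+', ' ', text).strip()

def keep_dialogue(utterances, lo=5, hi=25):
    # Keep sessions whose length matches the sampled range (5-25 utterances)
    # and whose utterances survive cleaning; return None otherwise.
    cleaned = [clean_twitter_utterance(u) for u in utterances]
    cleaned = [u for u in cleaned if u]
    return cleaned if lo <= len(cleaned) <= hi else None
```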
AirTicketBooking

The AirTicketBooking corpus consists of a set of task-oriented human-human Mandarin dialogues from an airline ticket booking service center. The manual transcripts of the speech dialogues are utilized in our experiments. In this dataset, there is always a relatively clear structure underlying each dialogue. A dialogue often begins with a customer's request about airline ticket issues, and the service agent always first checks the client's personal information, such as name, phone number and credit card number. Then the agent starts to deal with the client's request. We collected 1,890 text-based dialogue sessions in total, comprising about 40,000 conversation utterances, with session lengths ranging from 15 to 100 utterances.

We design an undirected generative model based on the Boltzmann machine. As is well known, dialogue structure analysis models are always based on an underlying assumption: each utterance in a dialogue is generated from one latent state, which has a causal effect on the words. For instance, an utterance in the AirTicketBooking dataset, "Tomorrow afternoon, about 3 o'clock", corresponds to the latent state "Time Information". However, by carefully examining the words in dialogues we can observe that not all words are generated from the latent states (Ritter et al., 2010; Zhai and Williams, 2014). Some words are relevant to a global or background topic shared across dialogues; for example, "about" and "that" belong to a global (general English) topic. Other words in a dialogue may be strongly related to the dialogue specific topic; for example, "cake", "toast" and "pizza" may appear in a Twitter dialogue with respect to the specific topic "food". From the perspective of a generative model, we can therefore consider the words in a dialogue to be generated by a mixture model of latent states, a global/background topic, and a dialogue specific topic. Accordingly, there are three kinds of units in the hidden layer of our proposed model, displayed in Figure 1: h^φ is a softmax unit, which indicates the latent state of an utterance, while h^ψ and h^ξ represent the general topic and the dialogue specific topic, respectively. For the visible layer, we utilize softmax units to model the words in each utterance, as in RSM (Hinton and Salakhutdinov, 2009). In Section 3.2 we propose a basic model based on the Boltzmann machine to formulate each word in the utterances of a dialogue. A dialogue can be abstractly viewed as a sequence of latent states in a certain reasonable order, so formulating the dependency between latent states is another important issue for dialogue structure analysis. In our model, we assume that each utterance's latent state depends on its two neighbours, so there exist connections between each pair of adjacent hidden softmax units in the hidden layer. The details of the model are presented in Section 3.3.

Figure 2 (caption): Hidden Softmax Model. The bottom layer consists of softmax visible units and the top layer consists of three types of hidden units: softmax hidden units used for representing latent states, a binary stochastic hidden unit used for representing the dialogue specific topic, and a special binary stochastic hidden unit used for representing the corpus general topic. Upper: the model for a dialogue session containing three utterances; connection lines in the same color related to a latent state represent the same weight matrix. Lower: a different interpretation of the Hidden Softmax Model, in which the D_r visible softmax units in the r-th utterance are replaced by one single multinomial unit which is sampled D_r times.
Table 1 summarizes the important notations utilized in this paper. Before introducing the full learning model for dialogue structure analysis, we first discuss a simplified version, the Hidden Softmax Model (HSM), which is based on the Boltzmann machine and assumes that the latent variables are independent given the visible units. HSM has a two-layer architecture as shown in Figure 2.

HSM: Hidden Softmax Model

The energy of the state {V, h^φ, h^ψ, h^ξ} is defined as follows:

\[ E(\mathbf{V}, \mathbf{h}^{\phi}, h^{\psi}, h^{\xi}) = \bar{E}^{\phi}(\mathbf{V}, \mathbf{h}^{\phi}) + \bar{E}^{\psi}(\mathbf{V}, h^{\psi}) + \bar{E}^{\xi}(\mathbf{V}, h^{\xi}) + C(\mathbf{V}), \tag{1} \]

where \(\bar{E}^{\phi}\), \(\bar{E}^{\psi}\) and \(\bar{E}^{\xi}\) are sub-energy functions related to the hidden variables h^φ, h^ψ and h^ξ, respectively, and C(V) is the shared visible-unit bias term. Suppose K is the dictionary size, D_r is the size of the r-th utterance (i.e., the number of words in it), and R is the number of utterances in a dialogue. For each utterance v_r (r = 1, ..., R) in the dialogue session we have a hidden variable vector h^φ_r (of size J) as the latent state of the utterance; the sub-energy function \(\bar{E}^{\phi}\) is defined by

\[ \bar{E}^{\phi}(\mathbf{V}, \mathbf{h}^{\phi}) = -\sum_{r=1}^{R}\sum_{j=1}^{J}\sum_{i=1}^{D_r}\sum_{k=1}^{K} h^{\phi}_{rj} W^{\phi}_{rjik} v_{rik} - \sum_{r=1}^{R}\sum_{j=1}^{J} h^{\phi}_{rj} a^{\phi}_{rj}, \tag{2} \]

where v_{rik} = 1 means the i-th visible unit v_{ri} in the r-th utterance takes on the k-th value, h^φ_{rj} = 1 means the r-th softmax hidden unit takes on the j-th value, and a^φ_{rj} is the corresponding bias. W^φ_{rjik} is a symmetric interaction term between visible unit v_{ri} taking on the k-th value and hidden variable h^φ_r taking on the j-th value. The sub-energy function \(\bar{E}^{\psi}\), related to the global general topic of the corpus, is defined by

\[ \bar{E}^{\psi}(\mathbf{V}, h^{\psi}) = -\sum_{r=1}^{R}\sum_{i=1}^{D_r}\sum_{k=1}^{K} h^{\psi} W^{\psi}_{rik} v_{rik} - h^{\psi} a^{\psi}. \tag{3} \]

The sub-energy function \(\bar{E}^{\xi}\) corresponds to the dialogue specific topic and is defined by

\[ \bar{E}^{\xi}(\mathbf{V}, h^{\xi}) = -\sum_{r=1}^{R}\sum_{i=1}^{D_r}\sum_{k=1}^{K} h^{\xi} W^{\xi}_{rik} v_{rik} - h^{\xi} a^{\xi}. \tag{4} \]

W^ψ_{rik} in Eq. (3) and W^ξ_{rik} in Eq. (4) are symmetric interaction terms between the visible units and the corresponding hidden units, similar to W^φ_{rjik} in Eq. (2); a^ψ and a^ξ are the corresponding biases. C(V) is defined by

\[ C(\mathbf{V}) = -\sum_{r=1}^{R}\sum_{i=1}^{D_r}\sum_{k=1}^{K} v_{rik} b_{rik}, \tag{5} \]

where b_{rik} is the corresponding bias. The probability that the model assigns to a visible binary matrix V = {v_1, v_2, ..., v_D} (where D = Σ_r D_r is the dialogue session size) is

\[ P(\mathbf{V}) = \frac{1}{Z}\sum_{\mathbf{h}^{\phi}, h^{\psi}, h^{\xi}} \exp(-E(\mathbf{V}, \mathbf{h}^{\phi}, h^{\psi}, h^{\xi})), \qquad Z = \sum_{\mathbf{V}}\sum_{\mathbf{h}^{\phi}, h^{\psi}, h^{\xi}} \exp(-E(\mathbf{V}, \mathbf{h}^{\phi}, h^{\psi}, h^{\xi})), \tag{6} \]

where Z is known as the partition function or normalizing constant. In our proposed model, we use one softmax unit for each word in the document. For simplicity, assume that the order of words in an utterance is ignored. Then all of these softmax units can share the same set of weights connecting them to the hidden units, and the visible bias term C(V) and the sub-energy functions in Eq. (1) can be redefined as follows:

\[ \bar{E}^{\phi}(\mathbf{V}, \mathbf{h}^{\phi}) = -\sum_{r=1}^{R}\sum_{j=1}^{J}\sum_{k=1}^{K} h^{\phi}_{rj} W^{\phi}_{jk} \hat{v}_{rk} - \sum_{r=1}^{R} D_r \sum_{j=1}^{J} h^{\phi}_{rj} a^{\phi}_{j} \tag{7} \]

\[ \bar{E}^{\psi}(\mathbf{V}, h^{\psi}) = -\sum_{k=1}^{K} h^{\psi} W^{\psi}_{k} \hat{v}_{k} - D h^{\psi} a^{\psi} \tag{8} \]

\[ \bar{E}^{\xi}(\mathbf{V}, h^{\xi}) = -\sum_{k=1}^{K} h^{\xi} W^{\xi}_{k} \hat{v}_{k} - D h^{\xi} a^{\xi} \tag{9} \]

\[ C(\mathbf{V}) = -\sum_{k=1}^{K} \hat{v}_{k} b_{k}, \tag{10} \]

where \(\hat{v}_{rk} = \sum_{i=1}^{D_r} v_{rik}\) denotes the count of the k-th word in the r-th utterance of the dialogue, and \(\hat{v}_{k} = \sum_{r=1}^{R} \hat{v}_{rk}\) is the count of the k-th word in the whole dialogue session. D_r and D (D = Σ_r D_r) are employed as scaling parameters, which make the hidden units behave sensibly when dealing with dialogues of different lengths (Hinton and Salakhutdinov, 2009).
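Before moving to the conditional distributions, the following sketch shows how the count representation v̂_rk, v̂_k and the scaling constants D_r, D of Eqs. (7)-(10) can be computed from a tokenized dialogue. The vocabulary indexing is our own scaffolding, not part of the paper.

```python
import numpy as np

def dialogue_counts(dialogue, vocab):
    """Build the count representation used by the shared-weight energies:
    v_hat[r, k] = count of word k in utterance r; the session counts and
    the scaling constants D_r and D follow directly. Only in-vocabulary
    words contribute to the counts in this sketch."""
    R, K = len(dialogue), len(vocab)
    v_hat = np.zeros((R, K))
    for r, utterance in enumerate(dialogue):
        for word in utterance:
            if word in vocab:
                v_hat[r, vocab[word]] += 1
    D_r = v_hat.sum(axis=1)                    # per-utterance lengths
    return v_hat, v_hat.sum(axis=0), D_r, D_r.sum()

# Example: a toy two-utterance dialogue over a five-word vocabulary.
vocab = {w: i for i, w in enumerate(['hi', 'flight', 'time', 'price', 'ok'])}
v_hat, v_session, D_r, D = dialogue_counts([['hi', 'flight'], ['time', 'ok', 'ok']], vocab)
```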
The conditional distributions are given by softmax and logistic functions:

\[ P(h^{\phi}_{rj} = 1 \mid \mathbf{V}) = \frac{\exp\big(\sum_{k=1}^{K} W^{\phi}_{jk}\hat{v}_{rk} + D_r a^{\phi}_{j}\big)}{\sum_{j'=1}^{J}\exp\big(\sum_{k=1}^{K} W^{\phi}_{j'k}\hat{v}_{rk} + D_r a^{\phi}_{j'}\big)} \tag{11} \]

\[ P(h^{\psi} = 1 \mid \mathbf{V}) = \sigma\Big(\sum_{k=1}^{K} W^{\psi}_{k}\hat{v}_{k} + D a^{\psi}\Big) \tag{12} \]

\[ P(h^{\xi} = 1 \mid \mathbf{V}) = \sigma\Big(\sum_{k=1}^{K} W^{\xi}_{k}\hat{v}_{k} + D a^{\xi}\Big) \tag{13} \]

\[ P(v_{rik} = 1 \mid \mathbf{h}^{\phi}, h^{\psi}, h^{\xi}) = \frac{\exp\big(\sum_{j=1}^{J} h^{\phi}_{rj} W^{\phi}_{jk} + h^{\psi} W^{\psi}_{k} + h^{\xi} W^{\xi}_{k} + b_{k}\big)}{\sum_{k'=1}^{K}\exp\big(\sum_{j=1}^{J} h^{\phi}_{rj} W^{\phi}_{jk'} + h^{\psi} W^{\psi}_{k'} + h^{\xi} W^{\xi}_{k'} + b_{k'}\big)}, \tag{14} \]

where σ(x) = 1/(1 + exp(−x)) is the logistic function.
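A direct numerical transcription of Eqs. (11)-(13) is given below; the parameter array names (W_phi, a_phi, and so on) are our own, and Eq. (14) is omitted since it follows the same softmax pattern over words.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hsm_posteriors(v_hat, v_session, D_r, D, W_phi, a_phi, W_psi, a_psi, W_xi, a_xi):
    """Eqs. (11)-(13): posteriors over each utterance's latent state and
    over the two topic units, given the word counts. W_phi is J x K;
    W_psi and W_xi are length-K vectors; a_phi is length J."""
    p_state = np.array([softmax(W_phi @ v_hat[r] + D_r[r] * a_phi)
                        for r in range(v_hat.shape[0])])   # shape (R, J)
    p_psi = sigmoid(W_psi @ v_session + D * a_psi)         # scalar in (0, 1)
    p_xi = sigmoid(W_xi @ v_session + D * a_xi)
    return p_state, p_psi, p_xi
```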
HSSM: Hidden Softmax Sequence Model

In this section we consider the dependency between adjacent latent states of utterances and extend HSM to the hidden softmax sequence model (HSSM), displayed in Figure 3. We define the energy of the state {V, h^φ, h^ψ, h^ξ} in HSSM as follows:

\[ E(\mathbf{V}, \mathbf{h}^{\phi}, h^{\psi}, h^{\xi}) = \bar{E}^{\phi}(\mathbf{V}, \mathbf{h}^{\phi}) + \bar{E}^{\psi}(\mathbf{V}, h^{\psi}) + \bar{E}^{\xi}(\mathbf{V}, h^{\xi}) + C(\mathbf{V}) + \bar{E}^{\Phi}(\mathbf{h}^{\phi}, \mathbf{h}^{\phi}), \tag{15} \]

where C(V), \(\bar{E}^{\phi}\), \(\bar{E}^{\psi}\) and \(\bar{E}^{\xi}\) are the same as in HSM. The last term \(\bar{E}^{\Phi}\) is utilized to formulate the dependency between the latent variables h^φ and is defined as follows:

\[ \bar{E}^{\Phi}(\mathbf{h}^{\phi}, \mathbf{h}^{\phi}) = -\sum_{q=1}^{J} h^{\phi}_{s} F^{s}_{q} h^{\phi}_{1q} - \sum_{q=1}^{J} h^{\phi}_{Rq} F^{e}_{q} h^{\phi}_{e} - \sum_{r=1}^{R-1}\sum_{j=1}^{J}\sum_{q=1}^{J} h^{\phi}_{rj} F_{jq} h^{\phi}_{r+1,q}, \tag{16} \]

where h^φ_s and h^φ_e are two constant scalar variables (h^φ_s ≡ 1, h^φ_e ≡ 1) representing the virtual beginning and ending state units of a dialogue. F^s is a vector of size J whose elements measure the dependency between h^φ_s and the latent softmax units of the first utterance; F^e, also of size J, analogously represents the dependency between h^φ_e and the latent softmax units of the last utterance. F is a symmetric matrix formulating the dependency between each pair of adjacent hidden units (h^φ_r, h^φ_{r+1}), r = 1, ..., R−1.

Figure 3 (caption): Hidden softmax sequence model. A connection between each pair of adjacent hidden softmax units is added to formulate the dependency between the two corresponding latent states.

Parameter Learning

Exact maximum likelihood learning in the proposed model is intractable. Contrastive Divergence (Hinton, 2002) can be used for HSM's learning; however, it cannot be utilized for HSSM, because the hidden-to-hidden interaction terms {F, F^s, F^e} make it intractable to obtain exact samples from the conditional distribution P(h^φ_{rj} = 1 | V), r ∈ [1, R], j ∈ [1, J]. We use mean-field variational inference (Hinton and Zemel, 1994; Neal and Hinton, 1998; Jordan et al., 1999) and a stochastic approximation procedure (SAP) (Tieleman, 2008) to estimate HSSM's parameters. Variational learning is utilized to obtain the data-dependent expectations, and SAP is utilized to estimate the model's expectation. The log-likelihood of the HSSM has the following variational lower bound:

\[ \log P(\mathbf{V}; \theta) \ge \sum_{\mathbf{h}} Q(\mathbf{h}) \log P(\mathbf{V}, \mathbf{h}; \theta) + \mathcal{H}(Q). \tag{17} \]

In theory, Q(h) can be any distribution of h. Here θ = {W^φ, W^ψ, W^ξ, F, F^s, F^e} (bias terms omitted for clarity) are the model parameters, h = {h^φ, h^ψ, h^ξ} represents all the hidden variables, and H(·) is the entropy functional. In variational learning, we try to find parameters that minimize the Kullback-Leibler divergence between Q(h) and the true posterior P(h | V; θ). A naive mean-field approach can be chosen to obtain a fully factorized distribution for Q(h):

\[ Q(\mathbf{h}) = \Big[\prod_{r=1}^{R} q(h^{\phi}_{r})\Big]\, q(h^{\psi})\, q(h^{\xi}), \tag{18} \]

where q(h^φ_{rj} = 1) = μ^φ_{rj}, q(h^ψ = 1) = μ^ψ, q(h^ξ = 1) = μ^ξ, and μ = {μ^φ, μ^ψ, μ^ξ} are the parameters of Q(h). Then the lower bound on the log-probability log P(V; θ) has the form:

\[ \log P(\mathbf{V}; \theta) \ge -\bar{E}^{\phi}(\mathbf{V}, \mu^{\phi}) - \bar{E}^{\psi}(\mathbf{V}, \mu^{\psi}) - \bar{E}^{\xi}(\mathbf{V}, \mu^{\xi}) - C(\mathbf{V}) - \bar{E}^{\Phi}(\mu^{\phi}, \mu^{\phi}) - \log Z, \tag{19} \]

where \(\bar{E}^{\phi}(\mathbf{V}, \mu^{\phi})\), \(\bar{E}^{\psi}(\mathbf{V}, \mu^{\psi})\), \(\bar{E}^{\xi}(\mathbf{V}, \mu^{\xi})\) and \(\bar{E}^{\Phi}(\mu^{\phi}, \mu^{\phi})\) have the same forms as Eqs. (7), (8), (9) and (16), respectively, with h replaced by μ. We can maximize this lower bound with respect to the parameters μ for fixed θ and obtain the mean-field fixed-point equations:

\[ \mu^{\phi}_{rj} = \frac{\exp\big(\sum_{k=1}^{K} W^{\phi}_{jk}\hat{v}_{rk} + D_r a^{\phi}_{j} + D^{j}_{prev} + D^{j}_{next} - 1\big)}{\sum_{j'=1}^{J}\exp\big(\sum_{k=1}^{K} W^{\phi}_{j'k}\hat{v}_{rk} + D_r a^{\phi}_{j'} + D^{j'}_{prev} + D^{j'}_{next} - 1\big)} \tag{20} \]

\[ \mu^{\psi} = \sigma\Big(\sum_{k=1}^{K} W^{\psi}_{k}\hat{v}_{k} + D a^{\psi}\Big) \tag{21} \]

\[ \mu^{\xi} = \sigma\Big(\sum_{k=1}^{K} W^{\xi}_{k}\hat{v}_{k} + D a^{\xi}\Big), \tag{22} \]

where D^j_prev and D^j_next are two terms relevant to the derivative of the right-hand side of Eq. (19) with respect to μ^φ_{rj}, defined by

\[ D^{j}_{prev} = \begin{cases} F^{s}_{j}, & r = 1 \\ \sum_{q=1}^{J} \mu^{\phi}_{r-1,q} F_{qj}, & r > 1 \end{cases} \qquad D^{j}_{next} = \begin{cases} \sum_{q=1}^{J} F_{jq}\mu^{\phi}_{r+1,q}, & r < R \\ F^{e}_{j}, & r = R. \end{cases} \]

The updating of μ can be carried out iteratively until convergence. Then (V, μ) can be considered as a special "state" of HSSM, and SAP can be applied to update the model's parameters θ for fixed (V, μ).
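The fixed-point update of Eq. (20) can be iterated as in the sketch below, which updates only the state posteriors μ^φ (the scalar topic posteriors follow Eqs. (21)-(22) directly). The parameter names are again ours, and the SAP step is not shown.

```python
import numpy as np

def mean_field_states(v_hat, D_r, W_phi, a_phi, F, F_s, F_e, n_iters=50):
    """Iterate Eq. (20) for the state posteriors mu[r] of every utterance,
    holding the model parameters fixed. F is the J x J matrix coupling
    adjacent states; F_s and F_e couple the virtual start/end states."""
    R, J = v_hat.shape[0], W_phi.shape[0]
    mu = np.full((R, J), 1.0 / J)                       # uniform initialization
    base = np.array([W_phi @ v_hat[r] + D_r[r] * a_phi for r in range(R)])
    for _ in range(n_iters):
        for r in range(R):
            d_prev = F_s if r == 0 else mu[r - 1] @ F   # sum_q mu_{r-1,q} F_{qj}
            d_next = F_e if r == R - 1 else F @ mu[r + 1]
            logits = base[r] + d_prev + d_next - 1.0
            e = np.exp(logits - logits.max())           # stable softmax
            mu[r] = e / e.sum()
    return mu
```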
Experiments and Discussions

It is not easy to evaluate the performance of a dialogue structure analysis model. In this study, we examined our model via qualitative visualization and quantitative analysis, as done in (Ritter et al., 2010; Zhai and Williams, 2014). We implemented five conventional models to conduct an extensive comparative study on the two corpora, Twitter-Post and AirTicketBooking: LMHMM (Chotimongkol, 2008), LMHMMS (Ritter et al., 2010), TMHMM, TMHMMS, and TMHMMSS (Zhai and Williams, 2014). In our experiments, for each corpus we randomly select 80% of the dialogues for training and use the remaining 20% for testing. We select three different numbers of latent states (10, 20 and 30) to evaluate all the models. In TMHMM, TMHMMS and TMHMMSS, the number of "topics" in the latent states and a dialogue is a hyper-parameter. We conducted a series of experiments with varying numbers of topics, and the results illustrated that 20 is the best choice on the two corpora, so for all the following experimental results of TMHMM, TMHMMS and TMHMMSS the topic configuration is set to 20. The number of estimation iterations for all the models on the training sets is set to 10,000; on the held-out test sets, the number of iterations for inference is set to 1,000. In order to speed up the learning of HSSM, the datasets are divided into mini-batches of 15 dialogues each. In addition, the learning rate and momentum are set to 0.1 and 0.9, respectively.

Qualitative Evaluation

Dialogues in Twitter-Post always begin with one of three latent states: broadcasting what the Twitter user is doing now ("Status"), broadcasting an interesting link or quote to their followers ("Reference Broadcast"), or asking a question to their followers ("Question to Followers"); for simplicity and consistency of readability, we follow the latent state names used in (Ritter et al., 2010). We find that the structures discovered by HSSM and LMHMMS with 10 latent states are the most reasonable to interpret. For example, the initiating state ("Status", "Reference Broadcast", or "Question to Followers") is often followed by a "Reaction" to "Reference Broadcast" (or "Status"), a "Comment" on "Status", or a "Question" about "Status" ("Reference Broadcast", or "Question to Followers"), etc. Compared with LMHMMS, besides obtaining similar latent states, HSSM exhibits a powerful ability to learn sequential dependency relationships between latent states. Take the following simple Twitter dialogue session as an example:

: rt i like katy perry lt lt we see tht lol
: lol gd morning
: lol gd morning how u
: i'm gr8 n urself
: i'm good gettin ready to head out
: oh ok well ur day n up its cold out here
...

LMHMMS labelled the second utterance ("lol gd morning") and the third utterance ("lol gd morning how u") with the same latent state, while HSSM treats them as two different latent states (though they contain almost the same words). This result is reasonable: the first "gd morning" is a greeting, while the second "gd morning" is a response.

For the AirTicketBooking dataset, the state-transition diagram generated with our model under the setting of 10 latent states is presented in Figure 4, and several utterance examples corresponding to the latent states are shown in Table 2. In general, conversations begin with the service agent's short greeting, such as "Hi, very glad to be of service.", and then transition to checking the passenger's identity information or inquiring about the passenger's air ticket demand; alternatively, the conversation is directly opened by the passenger with a booking demand, which is always associated with place information. After that, conversations are carried out over other booking related issues, such as checking the ticket price or flight time. The flowchart produced by HSSM can be reasonably interpreted with knowledge of the air ticket booking domain, and compared with the other models it is the most consistent with the agent's real workflow at the Ticket Booking Corporation (we hide the corporation's real name for privacy reasons). We notice that the conventional models cannot clearly distinguish some related latent states from each other. For example, these baseline models always confound the latent state "Price Info" with the latent state "Reservation", because certain words, such as "打折 (discount)" and "信用卡 (credit card)", are assigned large weights in both states. Furthermore, only HSSM and LMHMMS have dialogue specific topics, and the experimental results illustrate that HSSM learns them much better than LMHMMS, which often mis-recognizes corpus general words as belonging to the dialogue specific topic (an example is presented in Table 3).

Quantitative Evaluation

For quantitative evaluation, we examine HSSM and the traditional models with log likelihood and an ordering task on the held-out test sets of Twitter-Post and AirTicketBooking.

Log Likelihood

The likelihood metric measures the probability of generating the test set using a specified model. The likelihood of LMHMM and TMHMM can be directly computed with the forward algorithm. However, since the likelihoods of LMHMMS, TMHMMS and TMHMMSS are intractable to compute due to local dependencies with respect to certain latent variables, Chib-style estimating algorithms (Wallach et al., 2009) are employed in our experiments. For HSSM, the partition function is the key quantity in calculating the likelihood, and it can be effectively estimated by Annealed Importance Sampling (AIS) (Neal, 2001; Salakhutdinov and Murray, 2008). Figure 5 presents the likelihood of the different models on the two held-out datasets. We observe that HSSM achieves better likelihood than all the other models under every number of latent states: on the Twitter-Post dataset our model slightly surpasses LMHMMS, and it performs much better than all the traditional models on the AirTicketBooking dataset.
Ordering Test

Following previous work (Barzilay and Lee, 2004; Ritter et al., 2010; Zhai and Williams, 2014), we utilize Kendall's τ (Kendall, 1938) as the evaluation metric; it measures the similarity between any two sequences and ranges from −1 (indicating a reverse ordering) to +1 (indicating an identical ordering). The basic idea is as follows: for each dialogue session with n utterances in the test set, we first generate all n! permutations of the utterances; we then evaluate the probability of each permutation and measure the similarity, i.e. Kendall's τ, between the maximum-probability permutation and the original order; finally, we average the τ values over all dialogue sessions as the model's ordering test score. As pointed out by Zhai et al. (2014), it is infeasible to enumerate all possible permutations of a dialogue session when the number of utterances is large. In our experiments, we employ the incrementally adding permutation strategy, as used by Zhai et al. (2014), to build up the permutation set. The results of the ordering test are presented in Figure 6. We can see that HSSM exhibits better performance than all the other models. For the conventional models, it is interesting that LMHMMS, TMHMMS and TMHMMSS achieve worse performance than LMHMM and TMHMM. This is likely because the latter two models allow words to be emitted only from latent states (Zhai and Williams, 2014), while the former three models allow words to be generated from additional sources. This also implies HSSM's effectiveness in modeling distinct information underlying dialogues.
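For concreteness, a small sketch of the exhaustive version of this ordering test is given below. The incremental permutation-building strategy used for long sessions is not reproduced, and `log_prob` stands for whatever model scoring function is being evaluated.

```python
from itertools import permutations

def kendall_tau(order):
    """Kendall's tau between a permutation of 0..n-1 and the identity
    ordering: -1 for a reversed ordering, +1 for an identical one."""
    n = len(order)
    if n < 2:
        return 1.0
    discordant = sum(1 for i in range(n) for j in range(i + 1, n)
                     if order[i] > order[j])
    return 1.0 - 4.0 * discordant / (n * (n - 1))

def ordering_test(utterances, log_prob, max_len=6):
    """Score every permutation of a short dialogue with the model's
    log_prob function and compare the best one to the original order."""
    n = len(utterances)
    if n > max_len:
        raise ValueError("use incremental permutation building for long dialogues")
    best = max(permutations(range(n)),
               key=lambda p: log_prob([utterances[i] for i in p]))
    return kendall_tau(list(best))
```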
Discussion

The experimental results illustrate the effectiveness of the proposed undirected dialogue structure analysis model based on the Boltzmann machine. The conducted experiments also demonstrate three main merits of undirected models for text modeling, which have likewise been demonstrated by Hinton and Salakhutdinov (2009) and Srivastava et al. (2013) on other tasks: Boltzmann machine based undirected models are able to generalize much better than traditional directed generative models; model learning is more stable; and an undirected model is more suitable for describing complex dependencies between different kinds of variables. We also notice that all the models can, to some degree, capture the sequential structure in the dialogues; however, each model has special characteristics which make it fit a certain kind of dataset better. HSSM and LMHMMS are more appropriate for modeling open-domain datasets, such as the Twitter-Post data used in this paper, and task-oriented datasets with one relatively concentrated topic in the corpus and special information in each dialogue, such as AirTicketBooking. Dialogue specific topics in HSSM or LMHMMS are used and trained only within the corresponding dialogues. They are crucial for absorbing words that carry important meaning but do not belong to the latent states. In addition, dialogue specific topics may affect the modeling differently on different datasets. In Twitter-Post, for example, dialogue specific topics capture the actual themes of dialogues, such as a pop song or a sports news item; in the AirTicketBooking dataset, dialogue specific topics always represent special information, such as personal information, including name, phone number, birthday, etc. In summary, each dialogue specific topic reflects special information that differs from other dialogues. The three models that do not include dialogue specific topics, TMHMM, TMHMMS and TMHMMSS, should be utilized on task-oriented datasets in which each dialogue has little special or personal information. For example, the three models perform well on the BusTime and TechSupport datasets (Zhai and Williams, 2014), in which named entities are all replaced by semantic types (e.g. phone numbers are replaced by "<phone>", e-mail addresses by "<email>", etc.).

Conclusions

We developed an undirected generative model, HSSM, for dialogue structure analysis and examined its effectiveness on two different datasets: Twitter posts from an open domain, and task-oriented dialogues from the airline ticket booking domain. Qualitative evaluations and quantitative experimental results demonstrate that the proposed model achieves better performance than state-of-the-art approaches. Compared with traditional models, the proposed HSSM has a more powerful ability to discover structures of latent states and to model different word sources, including latent states, dialogue specific topics and the global general topic. According to a recent study (Srivastava et al., 2013), a deep network model exhibits substantial benefits for latent variable learning. A dialogue may actually have a hierarchical structure of latent states, so the proposed model could be extended to a deep model to capture more complex structures. Another possible way to extend the model is to consider modeling long-distance dependency between latent states. This may further improve the model's performance.

Figure 1 (caption): The hidden layer, which consists of different types of latent variables.
Figure 4 (caption): Transitions between latent states on AirTicketBooking generated by our HSSM model under the setting of J = 10 latent states. The transition probability cut-off is 0.10.
Figure 5 (caption): Negative log likelihood (smaller is better) on the held-out datasets of Twitter-Post (upper) and AirTicketBooking (lower) under different numbers of latent states J.
Figure 6 (caption): Average Kendall's τ measure (larger is better) on the held-out datasets of Twitter-Post (upper) and AirTicketBooking (lower) under different numbers of latent states J.
Table 1 (caption): Definition of notations.
Table 2 (caption): Utterance examples of latent states discovered by our model.
Table 3 (caption): One example of a dialogue specific topic learned on the same dialogue session with HSSM and LMHMMS, respectively.
  HSSM top words: 十点 (ten o'clock), 李东 (Dong Li, a name), 福州 (Fuzhou, a city), 厦门 (Xiamen, a city), 上航 (Shanghai Airlines), ...
  LMHMMS top words: 有 (have), 十点 (ten o'clock), 额 (er), 李东 (Dong Li, a name), 预留 (reserve), ...

Acknowledgments

We are grateful to the anonymous reviewers for their helpful comments and suggestions.
We would like to thank Alan Ritter for kindly providing the raw Twitter dataset. This work is supported in part by the National Natural Science Funds of China under Grants 61170197 and 61571266, and in part by the Electronic Information Industry Development Fund under the project "The R&D and Industrialization on Information Retrieval System Based on Man-Machine Interaction with Natural Speech".

James Allen, Nathanael Chambers, George Ferguson, Lucian Galescu, Hyuckchul Jung, Mary Swift, and William Taysom. 2007. Plow: A collaborative task learning agent. In Proceedings of the National Conference on Artificial Intelligence, volume 22, page 1514.
Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models with applications to generation and summarization. In Proceedings of HLT-NAACL 2004, pages 113-120.
Ananlada Chotimongkol. 2008. Learning the structure of task-oriented conversations from the corpus of in-domain dialogs. Ph.D. thesis.
William W. Cohen, Vitor R. Carvalho, and Tom M. Mitchell. 2004. Learning to classify email into "speech acts". In EMNLP, pages 309-316.
Nigel Crook, Ramon Granell, and Stephen Pulman. 2009. Unsupervised classification of dialogue acts using a Dirichlet process mixture model. In Proceedings of the SIGDIAL 2009 Conference, pages 341-348. Association for Computational Linguistics.
Geoffrey E. Hinton and Ruslan R. Salakhutdinov. 2009. Replicated softmax: an undirected topic model. In Advances in Neural Information Processing Systems, pages 1607-1614.
Geoffrey E. Hinton and Richard S. Zemel. 1994. Autoencoders, minimum description length, and Helmholtz free energy. In Advances in Neural Information Processing Systems.
Geoffrey E. Hinton. 2002. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771-1800.
Minwoo Jeong, Chin-Yew Lin, and Gary Geunbae Lee. 2009. Semi-supervised speech act recognition in emails and forums. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1250-1259. Association for Computational Linguistics.
Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. 1999. An introduction to variational methods for graphical models. Machine Learning, 37(2):183-233.
Maurice G. Kendall. 1938. A new measure of rank correlation. Biometrika, 30(1/2):81-93.
Jingjing Liu, Stephanie Seneff, and Victor Zue. 2010. Dialogue-oriented review summary generation for spoken dialogue recommendation systems. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 64-72.
Gabriel Murray, Steve Renals, Jean Carletta, and Johanna Moore. 2006. Incorporating speaker and discourse features into speech summarization. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 367-374.
Radford M. Neal and Geoffrey E. Hinton. 1998. A view of the EM algorithm that justifies incremental, sparse, and other variants. In Learning in Graphical Models, pages 355-368. Springer.
Radford M. Neal. 2001. Annealed importance sampling. Statistics and Computing, 11(2):125-139.
Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsupervised modeling of Twitter conversations.
Ruslan Salakhutdinov and Iain Murray. 2008.
On the quantitative analysis of deep belief networks. In Proceedings of the 25th International Conference on Machine Learning, pages 872-879. ACM.
Nitish Srivastava, Ruslan R. Salakhutdinov, and Geoffrey E. Hinton. 2013. Modeling documents with deep Boltzmann machines. In UAI.
Tijmen Tieleman. 2008. Training restricted Boltzmann machines using approximations to the likelihood gradient. In Proceedings of the 25th International Conference on Machine Learning, pages 1064-1071. ACM.
Hanna M. Wallach, Iain Murray, Ruslan Salakhutdinov, and David Mimno. 2009. Evaluation methods for topic models. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1105-1112. ACM.
Yorick Wilks. 2006. Artificial companions as a new kind of interface to the future internet.
Steve J. Young. 2006. Using POMDPs for dialog management. In SLT, pages 8-13.
Ke Zhai and Jason D. Williams. 2014. Discovering latent structure in task-oriented dialogues. In ACL (1), pages 36-46.
14,089,470
Rule Based Urdu Stemmer
This paper presents a rule-based Urdu stemmer. In this technique, rules are applied to remove suffixes and prefixes from inflected words. Urdu is widely spoken all over the world, but little work has been done on Urdu stemming. A stemmer helps us find the root of an inflected word. Various possibilities of inflected words, such as those ending in وں (vao+noon-gunna), ے (badi-ye), and یاں (choti-ye+alif+noon-gunna), have been identified, and appropriate rules have been developed for them.
[ 62627145, 16238434 ]
Rule Based Urdu Stemmer. Rohit Kansal (rohitkansal87@yahoo.co.in), Vishal Goyal (vishal.pup@gmail.com), and G. S. Lehal (gslehal@yahoo.com), Department of Computer Science, Punjabi University, Patiala. In Proceedings of COLING 2012: Demonstration Papers, Mumbai, December 2012.

Keywords: Urdu stemmer, stemmer, Urdu, rules

This paper presents a rule-based Urdu stemmer. In this technique, rules are applied to remove suffixes and prefixes from inflected words. Urdu is widely spoken all over the world, but little work has been done on Urdu stemming. A stemmer helps us find the root of an inflected word. Various possibilities of inflected words, such as those ending in وں (vao+noon-gunna), ے (badi-ye), and یاں (choti-ye+alif+noon-gunna), have been identified, and appropriate rules have been developed for them.

Introduction

Stemming is the process in which inflected words are reduced to find the stem or root. There are various inflected forms that can be reduced to a stem. For example, in English: 1) "act" has inflected forms like "actor", "acted", and "acting"; 2) words like "fishing", "fished" and "fisher" can be reduced to the root word "fish". Similarly, various possibilities have been identified in Urdu and appropriate rules have been developed for them. Stemming algorithms are classified under three categories:

1) Rule based approach - This approach applies a set of transformation rules to inflected words in order to strip prefixes or suffixes, e.g. if the word ends in "ed", remove the "ed".
2) Statistical approach - The major drawback of the rule based approach is that it depends on a database. Statistical algorithms overcome this problem by finding the distributions of root elements in a corpus, so no database needs to be maintained.
3) Hybrid approach - This is a combination of the affix removal and statistical approaches.

Stemming is useful in natural language processing problems like search engines, word processing, and information retrieval. In this stemmer we have applied the rule based approach, in which rules covering various possibilities of inflected words are applied to remove suffixes or prefixes.

Background and Related Work

In Urdu, the only stemmer available to us is Assas-Band, developed by Qurat-ul-Ain Akram et al. at NUCES, Pakistan, which maintains an affix exception list and works according to its algorithm to remove inflections. In that system the optimal split position is obtained by taking all possible splits of the word and selecting the split position which occurs most often; it gives an accuracy of 67.8%. Dinesh Kumar et al. (2011) developed a stemmer for Punjabi using a brute force technique. It employs a lookup table containing the relation between root forms and inflected forms; to stem a word, the table is queried for a matching inflection, and if one is found the associated root word is returned. It achieves an accuracy of 81.27%. Sandeep Sarkar et al. (2008) developed a rule based stemmer for Bengali which achieves an accuracy of 89%. Ananthakrishnan Ramanathan et al. developed a lightweight stemmer for Hindi using a suffix removal method that does not require a lookup table; it achieves an accuracy of 88%. Vishal Gupta et al. (2011) developed a stemmer for nouns and proper names for Punjabi using the rule based approach, in which various possibilities of suffixes were identified and rules generated; the efficiency of that system is 87.37%.

The Urdu Stemmer

An attempt has been made to develop an Urdu stemmer using the rule based approach, in which we have developed rules to remove various prefixes and suffixes; the stemmer helps us find the stem of various inflected words. For this we have developed a graphical user interface in which the input can be entered directly or loaded from a file. A corpus of 11.56 million words is used, and 101,483 unique words are extracted from it. These words are stored in a database along with their frequencies; the frequency of a word means how many times it occurs in the corpus. The flowchart of the Urdu stemmer (Figure 1) explains the system step by step.

Figure 1: Flowchart of the Urdu stemmer.

Algorithm

The algorithm of the Urdu stemmer is explained in detail below:

i) Tokenization and Normalization - In the tokenization process the input text is tokenized word by word, using space as the delimiter. In normalization, special characters like ?, ', ", @ etc. are eliminated.

ii) Postfix/Prefix Rules - After the normalization process, postfix/prefix rules are applied to each word. If applicable rules are found, the word is broken up and a list of the various possible forms of the word is generated. If no applicable rule is found, the system returns the word itself as the root. The possibilities list is matched against the database to find frequencies.
Then the frequencies are compared and the word corresponding to the greatest frequency is returned as the root, because the word that occurs most frequently has the highest probability of being the root.

Some of the postfix rules applied are:
Rule 1: If the word ends with وں (vao+noon-gunna), remove وں (vao+noon-gunna) from the end. For example: رنگوں (raṅgōṃ) becomes رنگ (raṅg).
Rule 2: If the word ends with ے (badi-ye), remove ے (badi-ye) from the end and replace it with ا (alif).
Rule 3: If the word ends with یوں (choti-ye+vao+noon-gunna), remove یوں (choti-ye+vao+noon-gunna) from the end and replace it with ی (choti-ye).
Rule 4: If the word ends with ؤں (vao-hamza+noon-gunna), remove ؤں (vao-hamza+noon-gunna) from the end. For example: چاچاؤں (cācāōṃ) becomes چاچا (cācā).
Rule 5: If the word ends with یاں (choti-ye+alif+noon-gunna), remove یاں (choti-ye+alif+noon-gunna) from the end and replace it with ی (choti-ye). For example: کوٹیاں (kōṭīyāṃ) becomes کوٹی (kōṭī).
Rule 6: If the word ends with یں (choti-ye+noon-gunna), remove یں (choti-ye+noon-gunna) from the end.
Rule 7: If the word ends with ئیں (hamza+choti-ye+noon-gunna), remove ئیں (hamza+choti-ye+noon-gunna) from the end. For example: ماالئیں (mālāēṃ) becomes ماال (mālā).
These are some of the postfix rules used in the Urdu stemmer; the other postfix rules that help find the root are applied in the same way.
iii) Prefix rules: some of the prefix rules applied to find the root word are given below.
Rule 1: If the word starts with بد (bay+daal), remove بد (bay+daal) from the beginning. For example: بدصورت (badsūrat) becomes صورت (sūrat).
Rule 2: If the word starts with بے (bay+badi-ye), remove بے (bay+badi-ye) from the beginning. For example: بکدر (bēkdar) becomes کدر (kadar).
In total, 32 postfix and prefix rules are used in this system.
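As an illustration of this pipeline (not the authors' implementation), here is a minimal Python sketch; the rule table and the frequency lexicon below are tiny invented stand-ins for the system's 32 rules and its 101,483-word frequency database.

```python
# Minimal sketch of the rule-based stemming pipeline described above.
# The rule table and frequency lexicon are illustrative stand-ins only.

# (suffix, replacement) pairs, e.g. strip a final vao+noon-gunna,
# or replace a final badi-ye with alif.
SUFFIX_RULES = [("وں", ""), ("ے", "ا"), ("یاں", "ی")]
PREFIX_RULES = ["بد", "بے"]             # strip these prefixes

FREQ = {"رنگ": 120, "چاچا": 15}         # hypothetical root-word frequencies

def candidates(word):
    """Generate possible roots by applying every matching rule."""
    cands = [word]                       # the word itself may already be a root
    for suf, rep in SUFFIX_RULES:
        if word.endswith(suf) and len(word) > len(suf):
            cands.append(word[: -len(suf)] + rep)
    for pre in PREFIX_RULES:
        if word.startswith(pre) and len(word) > len(pre):
            cands.append(word[len(pre):])
    return cands

def stem(word):
    """Return the candidate with the highest corpus frequency."""
    best = max(candidates(word), key=lambda c: FREQ.get(c, 0))
    # If no candidate is in the database, fall back to the word itself.
    return best if FREQ.get(best, 0) > 0 else word

print(stem("رنگوں"))   # -> رنگ
```

The frequency tie-break mirrors the algorithm's final step: among all candidate splits, the form seen most often in the corpus is taken to be the root.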
Results and Discussion
We have tested this system on different Urdu news documents totalling 20,583 words. The documents consist of sports, national and international news; we tried to cover different domains in order to find different types of inflected words. Test set 1 covers sports and business news, test set 2 covers articles, short stories etc., and test set 3 covers news relating to health and science.

Table 2: Different test cases
Test Set no. | Area covered | No. of Words
Test Set 1 | Sports, Business news | 7261
Test Set 2 | Articles, Short stories | 6239
Test Set 3 | Health, Scientific news | 7083

The following evaluation metrics are used to calculate the accuracy:
Recall (R) = correct answers given by the system / total possible correct answers
Precision (P) = correct answers / answers produced
F-measure = $(\beta^2 + 1)PR / (\beta^2 R + P)$, where $\beta$ is the weighting between precision and recall. With the typical $\beta = 1$, the F-measure is called the F1-measure: $F1 = 2PR / (P + R)$.

[Table 3: Accuracy of different test cases]

The overall accuracy of the system is 85.14%, and its overall performance is good. In the test cases we observed that some rules are applied more often than others; Rule 1 and Rule 2 cover most of the inflected words, so these rules are applied the most. Errors are due to dictionary errors or syntax errors. A dictionary error means a word is not present in the database: when the rules are applied and the various possibilities are generated, those possibilities may not be present in the database. An inflected word for which no appropriate rule is found can also give rise to an error. The probability of a dictionary error is very low, because the unique words were extracted from a corpus of 11.53 million words and we assume such a large corpus covers most inflected words. The error is therefore mainly due to syntax errors: there is no standardization in Urdu, which means there is more than one way of writing a particular word, and although we have tried to cover all the ways of writing a word, errors may still occur. The absence of Airaabs (diacritics) in most Urdu text also increases the error rate, since without them it becomes difficult to identify the word. Different rules yield different accuracies because some rules are applied more frequently than others; the most frequently applied rules, listed with their accuracies, show which inflections occur most.

[Table 4: Accuracy of the most commonly applied rules]

Conclusion and Future Work
In this paper an Urdu stemmer using the rule-based approach has been discussed, which removes suffixes and prefixes from inflected words. Various possibilities like وں (vao+noon-gunna), ے (badi-ye), یاں (choti-ye+alif+noon-gunna) etc. have been identified and appropriate rules have been developed to remove the inflections and find the root. Data collection is the main problem, because Urdu text data available to us is rare; this limitation can be handled by increasing the database in the future to achieve more accurate results. Errors can also occur due to spelling variations, since there can be more than one way of writing a particular word in Urdu. A statistical approach can also be applied to the Urdu stemmer in the future.
References
Qurat-ul-Ain-Akram, Asma Naseer, Sarmad Hussain. (2009). Assas-Band, an Affix-Exception-List Based Urdu Stemmer. In Proceedings of the 7th Workshop on Asian Language Resources, ACL-IJCNLP, Suntec, Singapore, pp. 40-47.
Dinesh Kumar, Prince Rana. (2011). Stemming of Punjabi Words by using Brute Force Technique. International Journal of Engineering Science and Technology (IJEST), Vol. 3, No. 2, pp. 1351-1357.
Sandipan Sarkar, Sivaji Bandyopadhyay. (2008). Design of a Rule-based Stemmer for Natural Language Text in Bengali. In Proceedings of the IJCNLP-08 Workshop on NLP for Less Privileged Languages, Asian Federation of Natural Language Processing, Hyderabad, India, pp. 65-72.
Vishal Gupta, Gurpreet Singh Lehal. (2011). Punjabi Language Stemmer for Nouns and Proper Names. In Proceedings of the 2nd Workshop on South and Southeast Asian Natural Language Processing (WSSANLP), IJCNLP, Chiang Mai, Thailand, pp. 35-39.
Kartik Suba, Dipti Jiandani, Pushpak Bhattacharyya. (2011). Hybrid Inflectional Stemmer and Rule-based Derivational Stemmer for Gujarati. In Proceedings of the 2nd Workshop on South and Southeast Asian Natural Language Processing (WSSANLP), IJCNLP, Chiang Mai, Thailand, November 8, 2011, pp. 1-8.
Pratikkumar Patel, Kashyap Popat. (2010). Hybrid Stemmer for Gujarati. In Proceedings of the 1st Workshop on South and Southeast Asian Natural Language Processing (WSSANLP), the 23rd International Conference on Computational Linguistics (COLING), Beijing, pp. 51-55.
Ananthakrishnan Ramanathan, Durgesh D. Rao. A Lightweight Stemmer for Hindi. National Centre for Software Technology. In the Workshop on Computational Linguistics for South-Asian Languages, EACL, pp. 42-48.
M. F. Porter. (1980). An algorithm for suffix stripping. Program, 14(3), pp. 130-137.
Kashif Riaz. (2007). Challenges in Urdu Stemming (A Progressive Report). In BCS IRSG Symposium: Future Directions in Information Access (FDIA 2007).
37,608,217
Using 2D Formant Distribution to Build Speaker Models and Its Application in Speaker Verification
[]
Using 2D Formant Distribution to Build Speaker Models and Its Application in Speaker Verification
(以二維共振峰分布建立語者音色模型及其在語者驗證上之應用)
呂嘉穀 (1), 蕭志濱 (2), 李明慶 (2), 蒲長恩 (3), 吳家隆 (2,*)
(1) Department of Computer Science and Information Engineering, National Taipei University; (2) Forensic Science Division, Investigation Bureau, Ministry of Justice; (3) Communications Surveillance Division, Investigation Bureau, Ministry of Justice

1. Introduction
Speech processing applications fall broadly into two classes: speech recognition and speaker recognition [1-4]. Deciding whether a given speech sample comes from a particular speaker is called speaker verification (or speaker authentication), and it is further divided into text-dependent and text-independent approaches [5, 6].

Nolan and Grigoras reported in 2005 that recording the long-term distribution of the first four formants of speech is highly effective in actual forensic speaker identification casework [7]. In follow-up work they further reported that the long-term distribution of each formant is mostly skewed, and that the position of the mode of the distribution is forensically more important than its mean [8]. Becker, Jessen and Grigoras (2008) applied the parameters obtained from long-term formant (LTF) analysis to a Gaussian mixture model for speaker recognition [9]: assuming each formant's long-term distribution to be Gaussian, they estimated its mean and standard deviation from each recording for likelihood computation, and with the positions and bandwidths of the first three formants (six parameters in all) they reached a verification EER of 0.03 on recordings of 68 male speakers. The German researcher Moos performed LTF analysis on mobile-phone recordings of 71 male speakers [10]; he found that F2 and F3 used together discriminate speakers well, and that F3 is more stable than F2, i.e. shows lower within-speaker variability. He also noted that LTF has other desirable properties, such as insensitivity to speaking rate and pitch. The Chinese researchers Xu and Kong (2012) reported cross-language speaker verification with LTF analysis [11]: taking the peak, kurtosis and skewness of the distributions of the first four formants as features, they verified speakers successfully across material in three different languages (Chinese, English and Korean). Jessen and Becker (2010) reported experiments on German, Russian and Albanian with similar conclusions [12].

The long-term formant analyses above mostly treat the distribution of each formant separately, i.e. they are one-dimensional analyses. The method proposed in this paper analyzes the first few formants in pairs, i.e. it works with two-dimensional formant distributions. Furthermore, because the distribution of the first two formants corresponds very clearly to the main monophthong vowels, we additionally partition the F1-F2 plane into several regions and analyze the frames falling in each region separately, building a finer-grained speaker timbre model. The next section describes the proposed modeling method in detail; in Section 3 this timbre model is applied to speaker verification experiments.

2. Method
The proposed method consists of the following steps. First we locate the resonant portions of a recording, i.e. its voiced sounds. Next we analyze the voiced frames one by one with linear prediction and extract their formants. We then build the speaker's timbre model from the distribution of the extracted formants. Finally, we measure the timbre similarity of two recordings by comparing the similarity of their two formant-distribution models. The steps are described in turn below.

2.1 Finding the voiced sounds in a recording
Since the method builds a speaker timbre model from the formant distribution of a recording, we must first find the portions of the recording with clear resonance, i.e. the voiced sounds. We split the recording into 20 ms frames with 10 ms (50%) overlap. For each frame we compute a volume value and its autocorrelation function (ACF) curve, and find the highest ACF value reached within a plausible pitch-period range. If a frame has sufficient volume and a large enough ACF peak, we accept it as a voiced frame. When enough material is available, both thresholds can be set strictly, to ensure that the accepted frames all have good resonance quality.

2.2 Finding the formants of a frame by linear prediction (LPC)
The sampling rate of the speech samples in this study is 11,025 Hz, and following the recommendations in the literature the LPC order p is set to about 14 to 16. At this sampling rate the audio retains a bandwidth of roughly 5 kHz, within which the number of formants varies with the syllable, usually five or six. The first four formants, F1 to F4, are used in building the speaker timbre model. To derive the p LPC coefficients we use the common Levinson-Durbin algorithm, which first computes p autocorrelation values from a frame and then solves recursively for the p model coefficients. After obtaining a frame's coefficients, we feed the frame's samples back into the model and compute the error between the predicted and actual values. If the error is too large, it indicates that the extracted formants are inaccurate, that the frame's resonance is poor, or that the frame is heavily contaminated by noise; when this happens the frame is skipped. In general, the frames rejected at this stage amount to less than 5% of the voiced material.
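A compact illustrative sketch of this per-frame analysis follows (not the authors' code): it solves the LPC normal equations with scipy's solve_toeplitz, which is equivalent to the Levinson-Durbin recursion, and reads formant candidates off the angles of the LPC polynomial roots. The thresholds and the synthetic test frame are invented.

```python
# Sketch of per-frame formant extraction as described in Sections 2.1-2.2.
# solve_toeplitz solves the same normal equations as the Levinson-Durbin
# recursion; the filtering thresholds below are illustrative only.
import numpy as np
from scipy.linalg import solve_toeplitz

FS = 11025   # sampling rate (Hz), as in the paper
P = 14       # LPC order, 14-16 per the paper

def lpc(frame, p=P):
    """Autocorrelation-method LPC coefficients a[1..p] of one frame."""
    r = np.correlate(frame, frame, "full")[len(frame) - 1:]
    return solve_toeplitz((r[:p], r[:p]), r[1:p + 1])

def formants(frame, p=P, fs=FS):
    """Formant frequency candidates from the LPC polynomial roots."""
    a = lpc(frame, p)
    roots = np.roots(np.concatenate(([1.0], -a)))
    roots = roots[np.imag(roots) > 0]            # keep one of each pair
    freqs = np.angle(roots) * fs / (2 * np.pi)   # rad -> Hz
    bw = -fs / np.pi * np.log(np.abs(roots))     # rough bandwidths
    keep = (freqs > 90) & (bw < 400)             # drop spurious roots
    return np.sort(freqs[keep])[:4]              # F1..F4

# Toy usage: a synthetic frame with resonances near 700 and 1200 Hz.
t = np.arange(int(0.02 * FS)) / FS
frame = np.sin(2 * np.pi * 700 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)
frame += 0.01 * np.random.default_rng(0).standard_normal(t.size)
print(formants(frame))
```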
2.3 Building the speaker timbre model
Once the voiced frames of a speaker's material have been found and their formants computed, the speaker's timbre model can be built. For same-text (text-dependent) verification a model can be built from any amount of material, although with little material the resulting model remains tied to the spoken content; for different-text (text-independent) verification, more material is needed to build a reasonably complete timbre model.

The first part of the model is the formant distribution itself. As described in the previous subsection, the first four formants (F1-F4) are extracted from every frame. From these we produce three two-dimensional (2D) formant distribution plots: F1 vs. F2, F2 vs. F3, and F3 vs. F4. Each valid frame contributes one point to each plot, so the longer the material, the more points the plots contain. Because each speaker's timbre differs, the formant positions differ even when producing the same sound, which appears in the 2D plots as differently located concentrations of points. Figure 2 shows the F1-F2 distributions of two different male speakers, each obtained from about 60 seconds of identical text; the difference between the two speakers' F1-F2 distributions is clearly visible, and since the text is identical, the difference mainly reflects their different timbres. Figures 4 and 5 show the corresponding F2-F3 and F3-F4 distributions of the same two speakers.

Because the F1-F2 plane corresponds closely to the main monophthong vowels, we partition it into ten regions and model the frames falling in each region separately. Table 1 shows the partition used for male speakers.

Table 1: Partition of the male F1-F2 plane into 10 regions
Region | F1 range (Hz) | F2 range (Hz) | Corresponding monophthong
1 | 258-387 | 580-1010 | ㄨ
2 | 258-387 | 1010-1913 | -
3 | 258-387 | 1913-2687 | 一
4 | 387-688 | 580-1075 | ㄛ
5 | 387-688 | 1075-1720 | ㄜ
6 | 387-688 | 1720-2472 | ㄟ
7 | 688-1075 | 580-1075 | -
8 | 688-1075 | 1075-1483 | ㄚ
9 | 688-1075 | 1483-2257 | -
0 | other | other | treated as errors

Next we analyze the frames falling in each region and extract parameters that describe the speaker's voice:

A. Average FFT spectrum of the frames in each region. We use the LPC spectrum to locate F1 and F2 and thereby group the frames, because the strength of the LPC spectrum lies in finding the main formants; speech, however, carries features and variation beyond the formants, which the FFT spectrum records more completely. We compute each frame's FFT spectrum and average the spectra of the frames in a region. Because the frames have already been grouped roughly by vowel, frames falling in the same region have similar pronunciations and similar spectra; averaging attenuates individual variation while reinforcing shared features. When comparing two speaker models later, we compare the average spectra of corresponding regions to compute their similarity.

B. Average LPC spectrum of the frames in each region. The LPC spectrum, derived from the linear-prediction coefficients, best displays the formant positions. Formants remain the most important information when comparing timbre, because formant positions are determined by the speech organs and articulation habits. We average the LPC spectra of all frames in a region; the result is very close to the average FFT spectrum at the peaks, but differs considerably in the valleys. Comparing LPC spectra lets us examine specifically how similar two speakers' formant positions are.

C. Accumulated formant curve of each region. LPC analysis gives each frame a set of formants; we currently extract the first five or six, F1 to F6. Within the effective bandwidth only about four formants may be visible, with the fifth often falling outside it, so it can be ignored during comparison. We project all formants of all frames in a region onto a single frequency axis. Since the regions are defined by F1 and F2, the frames naturally agree closely in those frequency ranges; but we find that frames from the same speaker also show very good agreement in F3, F4, and even F5. Like the previous parameter, this feature reflects how a speaker's formants are distributed across different sounds, but with one difference: whereas parameter B averages LPC spectral magnitudes, so that a louder frame has more influence than a quieter one, here every frame's formants are recorded with equal weight, which gives the curve a slightly different meaning.

2.4 Deciding the timbre similarity of two recordings by comparing their models
The preceding subsections described how a speaker timbre model is built from a recording: the model contains the three 2D formant distributions, plus, for each of the ten regions of the F1-F2 plane, the accumulated average FFT spectrum, average LPC spectrum, and accumulated formant distribution curve. At comparison time, we match the contents of two timbre models to estimate how similar the two recordings' speakers are in timbre. The basic comparison operation is the correlation coefficient. For the 2D formant distributions we compute the correlation coefficient between each corresponding pair (F1-F2, F2-F3, F3-F4). For the corresponding FFT spectra, LPC spectra, and one-dimensional accumulated formant curves of the two models, we compute one-dimensional correlation coefficients. This yields six correlation values; since these values have different characteristics, they can be selected according to the nature of the material, or combined by weighted averaging into a composite similarity index.
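A minimal sketch of this comparison step follows (assumed bin edges, synthetic data): each recording's (F1, F2) points are binned into a 2D histogram, and two models are scored by the correlation coefficient of the corresponding histograms.

```python
# Sketch of the model comparison in Section 2.4: each speaker's (F1, F2)
# points become a 2D histogram, and two models are scored by the Pearson
# correlation of the corresponding histograms. Bin edges are illustrative.
import numpy as np

F1_EDGES = np.linspace(250, 1100, 18)   # Hz, roughly spanning Table 1
F2_EDGES = np.linspace(550, 2700, 30)

def f1f2_model(frames_f1, frames_f2):
    """2D formant distribution: one count per voiced frame."""
    h, _, _ = np.histogram2d(frames_f1, frames_f2,
                             bins=[F1_EDGES, F2_EDGES])
    return h

def similarity(h_a, h_b):
    """Correlation coefficient between two 2D distributions."""
    return np.corrcoef(h_a.ravel(), h_b.ravel())[0, 1]

# Toy usage with synthetic formant clouds for two "speakers".
rng = np.random.default_rng(0)
spk1 = f1f2_model(rng.normal(500, 60, 2000), rng.normal(1500, 150, 2000))
spk2 = f1f2_model(rng.normal(650, 60, 2000), rng.normal(1200, 150, 2000))
print(similarity(spk1, spk1 + rng.poisson(1, spk1.shape)))  # same speaker: high
print(similarity(spk1, spk2))                               # different: lower
```

The same correlation scoring applies unchanged to the per-region spectra and formant curves, since they are one-dimensional vectors.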
3. Experiments and Results
In the experiments on speaker verification with different spoken texts, we tested male and female voices separately, with both digital recordings and telephone recordings. Each recording mode was evaluated in two ways: by comparing different passages within the same recording session, and by comparing different passages across sessions. The experiments used speech samples from 72 persons (38 male, 34 female), all adults over 18. Sampling was done in two sessions, two months apart. Each set of Mandarin material consists of 60 sentences of six to ten characters each. Depending on the speaker's rate, each session lasts about three minutes. Each recording was split into a first and a second half of roughly 90 seconds each, including the pauses between sentences; excluding the pauses, each half contains about 60 seconds of speech. Because the first and second halves contain different sentences, the two halves of one session can be used for same-session, different-text verification; since they come from the same session, the recording device, the environment, and the speaker's physiological state are all very similar, so we expect the comparison results (i.e. verification accuracy) to be better. We also cross-compared halves from different sessions, e.g. the first half of the first session against the second half of the second session, or the second half of the first against the first half of the second. In this kind of comparison not only the texts differ; the recording devices, environments, transmission lines, and the speakers' physiological states may all differ as well, so we expect the verification accuracy to drop.

The parameters used for verification, described in the previous section, are numbered P1 to P9:
P1: average FFT spectra of the frames in the nine feature regions
P2: average LPC spectra of the frames in the nine feature regions
P3: formant distribution curves of the frames in the nine feature regions
P4: global F1-F2 distribution
P5: global F2-F3 distribution
P6: global F3-F4 distribution
P7: combination of P1-P3
P8: combination of P4-P6
P9: combination of P1-P6
P1 to P6 are individual parameter curves or distribution plots; P7 combines the three feature curves P1-P3, P8 combines the three distributions P4-P6, and P9 further combines P7 and P8. The verification accuracies obtained under the different conditions are listed in turn below.

A. Verification using different passages of the same digital recording session. Compared with telephone recordings, digital recordings have a larger bandwidth and preserve the speaker's timbre better. Moreover, since the compared material comes from the same session, the recording device, environment, transmission line, and the speaker's physiological state are all maximally similar, so the verification accuracy is the highest.

Table 2: Equal error rate EER (%) for verification with different passages of the same digital recording session
Parameter | Male | Female
P1 | 0.3 | 0.0
P2 | 0.3 | 0.0
P3 | 1.3 | 0.1
P4 | 0.3 | 0.7
P5 | 5.4 | 4.3
P6 | 5.6 | 4.5
P7 | 0.2 | 0.1
P8 | 1.8 | 0.1
P9 | 1.1 | 0.0

From the table we can see that for both male and female voices a very high accuracy is reached, which shows that these features really do capture a speaker's timbre. On closer comparison, P1-P3 perform slightly better than P4-P6, though P4 still verifies with very high accuracy (EER below 1%). Female accuracy is slightly better than male, because female voices occupy a wider frequency range and thus allow greater timbre differences.

B. Verification using different passages from different digital recording sessions. As mentioned above, the two recording sessions were two months apart, with changes in both equipment and personnel, so the obtained verification accuracy drops.

Table 3: EER (%) for verification with different passages from different digital recording sessions
Parameter | Male | Female
P1 | 15.1 | 14.3
P2 | 6.2 | 9.2
P3 | 7.0 | 11.2
P4 | 19.4 | 21.3
P5 | 24.2 | 16.0
P6 | 18.8 | 21.0
P7 | 8.4 | 10.5
P8 | 12.4 | 11.5
P9 | 6.9 | 10.6

The verification EERs of all features rise markedly, but some parameters, especially P2 and P3, remain relatively stable; consequently P7 and P9 also perform better. In this condition male accuracy is slightly better than female; one possible reason is that male voices are comparatively less affected by factors that change over time.

C. Verification using different passages of the same telephone recording session. Telephone recordings have a bandwidth of about 3.5 kHz, considerably below the roughly 5.5 kHz of digital recordings. The reduced bandwidth affects the speaker's timbre to a certain degree, and this influence is slightly reflected in the verification EER.

Table 4: EER (%) for verification with different passages of the same telephone recording session
Parameter | Male | Female
P1 | 0.1 | 0.2
P2 | 0.3 | 0.4
P3 | 0.3 | 0.1
P4 | 1.9 | 2.1
P5 | 1.8 | 1.4
P6 | 3.0 | 1.5
P7 | 0.2 | 0.2
P8 | 0.2 | 0.4
P9 | 0.1 | 0.2

Comparing this table with Table 2, the male results differ little, while female accuracy drops slightly. The drop is mainly caused by the compressed bandwidth, whose effect on female timbre is more visible than on male. As in the previous two tables (Tables 2 and 3), P1-P3 still outperform P4-P6, but when P4-P6 are combined (i.e. P8) the accuracy is still good.

D. Verification using different passages from different telephone recording sessions. Of the four combinations, this condition is closest to the actual situation of forensic casework. The telephone lines involved are highly varied, including landline, mobile, long distance, etc.; the speakers' calling environments also differ, and the age range is wider. The quality of this material therefore comes closest to real cases.

Table 5: EER (%) for verification with different passages from different telephone recording sessions
Parameter | Male | Female
P1 | 11.5 | 8.1
P2 | 12.5 | 9.0
P3 | 12.5 | 8.4
P4 | 7.7 | 11.4
P5 | 16.7 | 8.9
P6 | 24.6 | 15.6
P7 | 12.2 | 8.7
P8 | 15.5 | 6.3
P9 | 14.4 | 6.2

Comparing Table 5 with Table 4, the verification EER rises: as P9 shows, by about six percentage points for female voices, but by about ten for male voices. It is worth noting that for female voices P8 outperforms P7, which indicates that when the recording lines and devices are diverse, the formant distribution features perform better than the spectral features; for male voices P4 also does comparatively well. P6 performs worse on telephone recordings because formant F4 often lies above the cutoff frequency, so the extracted F4 is frequently not the true F4. Achieving a verification accuracy above 90% on telephone recordings from different sessions (two months apart), made outside a laboratory environment, shows that the method has practical potential.

4. Conclusion
In this paper we proposed a method for speaker verification on recordings with different spoken content. Because the compared recordings may contain different sentences, sentence-by-sentence comparison is impossible; instead, the proposed method analyzes the long-term formant distribution of each recording to build a speaker timbre model, and then compares the two models. Such a timbre model has at least two advantages. First, the shape of the spectrum itself is easily distorted by the transmission line or recording device, which in turn distorts comparison results, whereas formant positions are comparatively unaffected by device or line. Since forensic practice involves very diverse lines and recording devices whose frequency characteristics are hard to obtain, using formant features helps improve the robustness of forensic comparison. Second, after partitioning the F1-F2 plane into regions, we compute the average spectra and formant distribution curves of the frames in each region separately to build the speaker timbre model. These regions correspond roughly to the different monophthong vowels, so even when the texts differ and the vowels occur with different frequencies, the resulting speaker timbre model remains complete.

References
[1] R. D. Peacocke and D. H. Graf, "An introduction to Speech and Speaker Recognition," IEEE Computer Magazine, pp. 26-33, August 1990.
[2] J. P. Campbell, "Speaker Recognition: A Tutorial," Proceedings of the IEEE, Vol. 85, pp. 1437-1462, September 1997.
[3] Sadaoki Furui, Digital Speech: Processing, Synthesis, and Recognition, 2nd Edition, Marcel Dekker, New York, New York, 2001.
[4] Thomas F. Quatieri, Discrete-Time Speech Signal Processing: Principles and Practice, Prentice Hall, 2002.
[5] T. Dutta, "Text Dependent Speaker Identification based on Spectrograms," Proceedings of Image and Vision Computing New Zealand 2007, pp. 238-243, December 2007.
[6] F. Bimbot, J.-F. Bonastre, C. Fredouille, G. Gravier, I. Magrin-Chagnolleau, S. Meignier, T. Merlin, J. Ortega-Garcia, D. Petrovska-Delacretaz, and D. A. Reynolds, "A Tutorial on Text-Independent Speaker Verification," EURASIP Journal on Applied Signal Processing, Vol. 4, pp. 430-451, 2004.
[7] F. Nolan and C. Grigoras, "A case for formant analysis in forensic speaker identification," International Journal of Speech Language and the Law, Vol. 12, No. 2, pp. 143-173, 2005.
[8] K. McDougall, P. Harrison, F. Nolan, and C. Kirchhubel, "Voice Similarity and Long Term Formant Analysis," University of Cambridge report.
[9] T. Becker, M. Jessen, and C. Grigoras, "Forensic Speaker Verification Using Formant Features and Gaussian Mixture Model," Proceedings of Interspeech 2008 Special Session: Forensic Speaker Recognition - Traditional and Automatic Approaches, Brisbane, Queensland, Australia, September 2008.
[10] A. Moos, "Long-Term Formant Distribution (LTF) based on German spontaneous and read speech," Proceedings of IAFPA 2008, Swiss Federal Institute of Technology, Lausanne, 2008.
[11] Y. Xi, "Vocal tract characteristics on long-term formant distribution," Proceedings of the 2012 International Conference on Computer Science and Network Technology (2012 ICCSNT), pp. 207-211, Dec. 29-31, 2012.
[12] M. Jessen and T. Becker, "Long-term formant distribution as a forensic-phonetic feature," Journal of the Acoustical Society of America, Vol. 128, No. 4, p. 2378, 2010.
11,072,864
Experiments on Processing Overlapping Parallel Corpora
The number and sizes of parallel corpora keep growing, which makes it necessary to have automatic methods of processing them: combining, checking and improving corpora quality, etc. We here introduce a method which enables performing many of these by exploiting overlapping parallel corpora. The method finds the correspondence between sentence pairs in two corpora: first the corresponding language parts of the corpora are aligned and then the two resulting alignments are compared. The method takes into consideration slight differences in the source documents, different levels of segmentation of the input corpora, encoding differences and other aspects of the task. The paper describes two experiments conducted to test the method. In the first experiment, the Estonian-English part of the JRC-Acquis corpus was combined with another corpus of legislation texts. In the second experiment alternatively aligned versions of the JRC-Acquis are compared to each other with the example of all language pairs between English, Estonian and Latvian. Several additional conclusions about the corpora can be drawn from the results. The method proves to be effective for several parallel corpora processing tasks.
[ 26124282, 38407095 ]
Experiments on Processing Overlapping Parallel Corpora
Mark Fishel (fishel@ut.ee), Heiki-Jaan Kaalep (hkaalep@ut.ee)
University of Tartu, J. Liivi 2, 50409 Tartu, Estonia

The number and sizes of parallel corpora keep growing, which makes it necessary to have automatic methods of processing them: combining, checking and improving corpora quality, etc. We here introduce a method which enables performing many of these by exploiting overlapping parallel corpora. The method finds the correspondence between sentence pairs in two corpora: first the corresponding language parts of the corpora are aligned and then the two resulting alignments are compared. The method takes into consideration slight differences in the source documents, different levels of segmentation of the input corpora, encoding differences and other aspects of the task. The paper describes two experiments conducted to test the method. In the first experiment, the Estonian-English part of the JRC-Acquis corpus was combined with another corpus of legislation texts. In the second experiment alternatively aligned versions of the JRC-Acquis are compared to each other with the example of all language pairs between English, Estonian and Latvian. Several additional conclusions about the corpora can be drawn from the results. The method proves to be effective for several parallel corpora processing tasks.

1 Introduction
The number and sizes of available parallel corpora keep growing: e.g. the Europarl corpus (Koehn, 2005) has doubled and the JRC-Acquis corpus (Steinberger et al., 2006) tripled during 2007; recently a multilingual parallel corpus of movie subtitles was announced as part of the OPUS corpus (Tiedemann and Nygaard, 2004), etc. This suggests an increasing necessity for automatic methods of evaluating and combining the available corpora, as well as improving their quality. The aim of the work described in this article is to satisfy this necessity.

Sometimes the source documents of two independently created parallel corpora overlap. Such situations are additionally troublesome, since there are often differences in source document versions, formats, encoding, etc. In addition, different levels of alignment exclude the possibility of directly comparing the sentences of the two corpora. On the other hand, overlapping parts can be used to automatically detect alignment errors. In case one of the corpora is known to be more accurate, the other one can be proofed against it. Different levels of alignment can be synchronized, so that some units of both corpora get additionally segmented.

Here we present a method for processing parallel corpora containing overlapping parts, along with its implementation. Its main objective is to improve parallel corpora quality by detecting alignment errors and to avoid duplicate entries while combining overlapping corpora. We further describe a set of experiments on applying the method to different parts of JRC-Acquis.

2 Overlapping Parallel Corpora
The type of corpora that the introduced method is meant for is independently created parallel corpora that share common source documents, either fully or partially. For instance, the Estonian-English part of the Ispra JRC-Acquis corpus (1) and the parallel corpus of the University of Tartu (2) have 2 thousand common source articles (Kaalep and Veskis, 2007).
Also the Hunglish corpus (Varga et al., 2005) contains both EU legislation texts (potentially overlapping with JRC-Acquis) and movie subtitles (potentially overlapping with the OPUS corpus). We use the former example for one of our experiments, reserving the latter for future work. Another set of experiments was conducted on JRC-Acquis itself, as it contains two alternative alignment versions: one done with the Vanilla (3) and another with the HunAlign aligner (Varga et al., 2005). In this case the overlapping is almost full; according to (Steinberger et al., 2006), in case the confidence threshold of the aligner was not met, the documents were excluded from the corpus. We used the three language pairs between English, Estonian and Latvian in the experiments. The selection was motivated by the difference of all three, and also by the scarcity of resources and experiments on the latter two.

In case of UT and JRC-Acquis it is easy to determine the documents included in both corpora, as these are augmented with CELEX codes. Nevertheless, sentence comparison here is a non-trivial task due to several differences. First of all, the source documents were retrieved at different times for the two corpora, which means that the files of JRC-Acquis contain several minor corrections. Also, the way special characters (e.g. in õlu, liköör, šņabis, ...) are encoded differs between the two corpora. Next, the level of segmentation is different in the two corpora: whereas UT is aligned on the sentence level, JRC-Acquis is only segmented into paragraphs, and these are aligned. Although according to (Steinberger et al., 2006) most of the paragraphs in the corpus consist of only one sentence, it still poses an additional problem for processing the corpora. Also, the two corpora were aligned with different methods. UT, for example, contains several shifts in the alignment; this type of error is more typical for Vanilla and as a rule doesn't occur when using lexicalized aligners such as HunAlign (Varga et al., 2005). Finally, several text sections were left out when composing both corpora. Whereas in case of JRC-Acquis the missing parts can be extracted from the separately saved alignment, in case of UT this information is not provided. Therefore the easiest way of unifying the two corpora seems to be treating the files of both as a linear input stream of sentences.

[Figure 1: An example of correspondence between two parallel corpora chunks. Lines 4 to 6 of corpus-1 correspond to lines 3 to 4 of corpus-2, but contain erroneous alignments. Each letter stands for a phrase or a sentence. Solid lines indicate matches and dotted lines mismatches.]

3 Method of Processing
The aim of the method introduced in this paper is to process two parallel corpora that have common source documents. Finding these common documents is treated as a separate task and is discussed, for instance, in (Kaalep and Veskis, 2007). The method works by finding a correspondence between the sentence pairs of the two parallel corpora; see Figure 1 for an illustrative example. Once such a correspondence has been determined, it can be used to combine the two parallel corpora in the preferred way (while avoiding repetitions in the resulting combination), to increase the segmentation level of one corpus on account of the other, to check the accuracy of one corpus against the other, to detect error locations for manual correction, etc.
[Figure 2: The first step towards finding a correspondence between two parallel corpora is to align the language parts separately.]

The following steps are taken to find such a correspondence. First, the corresponding language parts are aligned separately with each other: in case of the first example of Section 2, that would mean the Estonian parts of UT and JRC-Acquis aligned between themselves, and the English parts between themselves. This includes approximate sentence matching, in order to account for slight differences in the same sentence coming from version, encoding or other differences. After the two alignments are found, they are compared to reveal mismatches between them. Finally, the desired action is applied to the corpora using the comparison results: either a common corpus is generated, mismatch statistics are presented, and so on.

Consider the following example. Having the corpora from Figure 1, first the lang-1 parts of corpus-1 and corpus-2 are aligned with each other, and then the lang-2 parts (Figure 2). Here several units of one side can match several on the other side. The alignments themselves are then compared using the same alignment techniques (Figure 3), whereas now only 1-to-1 alignments are allowed. As a result, we obtain the correspondence of the sentence pairs of the two corpora, as in Figure 1. The main steps are explained in more detail in the following subsections.

[Figure 3: To find the actual correspondence of two parallel corpora, the alignments of the two language parts are compared; e.g. alignment of lang-1: 1-1, 2,3-2, 4,5-3, 6-4, 7-5 versus alignment of lang-2: 1-1, 2,3-2, 4-3, 5,6-4, 7-5. In this example the 3rd and 4th lines do not match.]

3.1 Alignment of the Corresponding Language Parts
The first step is in essence very similar to the original task of bilingual sentence alignment itself. However, whereas the latter means comparing different languages and is therefore computationally difficult, in this case the task is much simpler, since both parts are in the same language and it suffices to compare the sentences directly by characters. The only problem is that instead of strict comparison of the sentences, approximate comparison is required here, due to possible slight differences in the two corpora. For example, the sentence x in lang-2 of corpus-1 in Figure 1 can have a typing error ("this is a shord sentence") while the same x in corpus-2 has the error corrected ("this is a short sentence"). Although the two are obviously one and the same sentence, strict comparison would yield a mismatch between them. The aligning task is therefore analogous to the longest common subsequence problem, where corpora units (e.g. sentences) are the elements and are matched approximately. In our implementation the alignment of the two texts is computed in an optimal way using edit distance: the cost of substituting a unit for another equals the distance between the two (which is explained in the next subsection), and the cost of insertion/deletion is always 1. In addition to 1-to-1 substitutions, all N-to-M pairs are also considered, up to a predefined limit (10 by default in our implementation). This enables detecting matching units even if the segmentation level is very different in the two corpora (e.g. matching paragraphs with sentences).

3.2 Approximate Sentence Matching
(Kaalep and Veskis, 2007) use Levenshtein distance and check whether the distance between two sentences exceeds 1% of the average length of the two sentences.
Other string similarity metrics applied to written text include several from the edit distance family (the Needleman-Wunsch metric, the Smith-Waterman metric, etc.), the Jaro metric and others. In the current work we adopt the method from (Kaalep and Veskis, 2007), but use generalized edit distance instead of Levenshtein distance. For instance, the weight of replacing/inserting digits is extremely high, so that e.g. the sentences "article 3" and "article 5" will not be considered a match no matter what the edit distance percentage threshold is. On the other hand, operations on empty symbols (spaces, tabs) and punctuation have low weights. This allows setting the percentage threshold higher without adding obvious matching errors. Character case is also ignored during comparison.

3.3 Comparison of Separate Language Alignments
As soon as the language part alignments are obtained, their correspondence is to be determined. Although different language parts are compared here, only the alignments between unit numbers are compared, which again enables using direct comparison. In this case the comparison is accomplished again using edit distance, but this time with the simple Levenshtein distance of the alignment cells. Thus equality of the alignment elements indicates matching alignments, while 1-to-1 inequality or 1-to-0/0-to-1 matches indicate mismatching alignments. It is important to note that a mismatch between two alignments doesn't indicate which of the corpora has an erroneous alignment; instead, it shows a potential spot where at least one of the corpora has an error. In order to be used in automatic error correction, this setup requires that one of the corpora is preliminarily assumed to be more accurate. Alternatively, the spots can be manually post-processed, showing which of the corpora contains the error, so that it can be corrected. On the other hand, a match between alignments also merely indicates that the two corpora have matching alignments. This can occur both in the case of correct alignments and of coinciding erroneous alignments, though the latter is less likely (depending on the alignment method used).

3.4 Implementation
After the sentence pairs get aligned, it is still necessary to define the policy for sentence inclusion/exclusion in the resulting combined corpus. In the current implementation this is controlled by the user. It is possible to configure separately whether to include sentences present in only one of the corpora, and the ones that match in both corpora. In case of errors it is possible to include sentences from either one or the other corpus, in case one of the corpora is preliminarily known to contain fewer alignment errors. Alternatively, it is possible to use one of the language-specific alignments, or to exclude the location of the error as a whole. Logging of the alignments and error types is also supported; this enables error detection, and thus later human inspection and corpora post-processing. The implementation is done in PERL and is available online (4).
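As a rough illustration of this matching scheme (not the authors' PERL code), the sketch below implements a generalized edit distance with invented weights: digit edits are effectively forbidden, whitespace and punctuation edits are cheap, and two sentences match when the distance stays within a fraction of their average length.

```python
# Illustrative sketch of approximate sentence matching via generalized
# edit distance. The per-character weights here are made up; the paper
# only states that digit edits are very expensive and whitespace or
# punctuation edits are cheap.
def cost(a, b):
    if a == b:
        return 0.0
    digit = lambda c: c is not None and c.isdigit()
    cheap = lambda c: c is not None and (c.isspace() or not c.isalnum())
    if digit(a) or digit(b):
        return 100.0                     # "article 3" vs "article 5" never match
    if (cheap(a) and cheap(b)) or (a is None and cheap(b)) or (b is None and cheap(a)):
        return 0.1                       # spaces, tabs, punctuation
    return 1.0

def edit_distance(s, t):
    s, t = s.lower(), t.lower()          # case-insensitive comparison
    d = [[0.0] * (len(t) + 1) for _ in range(len(s) + 1)]
    for i in range(1, len(s) + 1):
        d[i][0] = d[i - 1][0] + cost(s[i - 1], None)
    for j in range(1, len(t) + 1):
        d[0][j] = d[0][j - 1] + cost(None, t[j - 1])
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            d[i][j] = min(d[i - 1][j] + cost(s[i - 1], None),
                          d[i][j - 1] + cost(None, t[j - 1]),
                          d[i - 1][j - 1] + cost(s[i - 1], t[j - 1]))
    return d[-1][-1]

def same_sentence(s, t, threshold=0.01):
    """Match if the distance is within a fraction of the average length."""
    return edit_distance(s, t) <= threshold * (len(s) + len(t)) / 2

print(same_sentence("this is a shord sentence", "this is a short sentence", 0.05))
print(same_sentence("article 3", "article 5"))
```

The higher threshold in the first call reflects the point made above: with generalized weights, the percentage threshold can be raised without admitting obvious mismatches such as differing digits.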
4 Experiment-1: Combining Partially Overlapping Corpora
In this experiment we processed the overlapping parts of the HunAligned JRC-Acquis and UT; we used the older version (2.2) of JRC-Acquis, as the newer one (3.0) doesn't include the HunAlign alignments yet and the Vanilla alignment is much less accurate (Kaalep and Veskis, 2007). The aim here was to obtain a common corpus of maximum size; therefore both the matching and the unique sentences were included from both input corpora. In order to include some sentences from alignment mismatches, it was necessary to decide which corpus was more accurate, to use it as an error guideline. According to (Kaalep and Veskis, 2007) the potential error locations in the HunAlign version of JRC-Acquis are the 0-to-N alignments and their predecessors and successors; all these were removed from the corpus, and this enhanced version was used. In addition, a second corpus was generated with only the matching sentences included, and the single and mismatched sentences left out. This results in a smaller but much more accurate corpus, i.e. maximum accuracy, in contrast to the maximum size of the first desired result.

Results
The two resulting corpora were made available online together with the implementation of the introduced method. The output statistics of the processing results are shown in Table 1.

[Table 1: Output statistics of processing UT and JRC-Acquis]

Excluding the potentially erroneous alignments from JRC-Acquis reduced the number of sentences to 93% of the original. However, after processing, the enhanced JRC-Acquis size was 102% of the original (111% of the reduced corpus). The size of the enhanced UT was 103% of the original. The size of the overlapping part grew to 106% of the UT part and 145% of the JRC-Acquis part. In total, the resulting combined corpus size was 193% of UT and 161% of JRC-Acquis. Based on the results in Table 1, 60% of the UT sentences match with 82% of the JRC-Acquis sentences. The size of the maximum-accuracy corpus is only slightly larger for JRC-Acquis than the matched sentences counted separately, which means that in the majority of cases the segmentation was deeper in UT. It is theoretically possible that the matched sentence pairs include erroneous alignments; however, in the current experiment a small randomly selected portion of the output was manually checked and no errors were discovered.

Table 3: Types of sentence pair matches between UT and JRC-Acquis
Match type | Nr. of occurrences
0-1 | 5621
1-0 | 30186
1-1 | 54723
1-2 | 59
2-1 | 426
2-2 | 1
3-1 | 5

Table 3 summarizes the types of matched alignments in the results (an N-M type means N sentence pairs in UT corresponding to M sentence pairs in JRC-Acquis). These confirm both that the segmentation level in UT is slightly deeper (since there are more N-to-1 alignments than the other way around) and that the paragraphs of JRC-Acquis often contain only one sentence (since 1-to-1 alignments dominate).

5 Experiment-2: Comparing Different Alignments of the Same Corpus
In the second set of experiments the introduced method was applied to different alignments of the same parts of JRC-Acquis. The processed parts included three language pairs: English-Estonian, English-Latvian and Estonian-Latvian (unless otherwise specified, we further refer to these in the given order). The aim was to compare the different alignments and try to get a notion of the corpus accuracy; therefore no common corpus was generated.

Results
The results are displayed in Table 2. The Estonian-Latvian part has a much higher percentage of matching sentences than the other two parts: 98% in both the HunAlign and Vanilla versions, versus 83% in the HunAlign and 86% in the Vanilla version. It is possible that the Estonian-Latvian part contains many more coinciding errors, which would also cause the matching part to be larger. However, a more desirable explanation would be that this part is aligned more accurately. To make sure, we performed manual proofing of the results by randomly picking some files and checking whether the matching sentences reside in correct alignments and whether the mismatching sentences really include an alignment error (5).
None of the manually checked files contained coinciding errors in the Estonian-Latvian parts; in the other two parts, mostly two Estonian or Latvian sentences were erroneously grouped into one. An extract from the corpora (parts of the documents with the CELEX number 31965R0079), along with the program output, is displayed in Figure 4.

Table 4: Types of sentence pair matches between the HunAlign and Vanilla versions of JRC-Acquis
Match type | En-Et | En-Lv | Et-Lv
0-1 | 3061 | 3076 | 661
1-0 | 1798 | 2005 | 158
1-1 | 251608 | 254743 | 315603
1-2 | 1 | 8 | 10
2-1 | 94 | 80 | 151

Table 4 summarizes the types of matching sentence pair alignments in all three experiments. Expectedly, most of the alignments are one-to-one, with rare two-to-one instances.

[Figure 4: Extract from JRC-Acquis with all three languages and two alignment versions. Row 1: HunAlign aligns "CHAPTER 1 Creation of a farm accountancy data network for the European Economic Community" with "ON VASTU VÕTNUD KÄESOLEVA MÄÄRUSE:" (Estonian) and "IR PIEŅĒMUSI ŠO REGULU." (Latvian), while Vanilla aligns it with "I PEATÜKK" and "I NODAĻA". Row 2: HunAlign aligns "Article 1" with "I PEATÜKK Euroopa Majandusühenduse põllumajandusliku raamatupidamise andmevõrgu loomine" and "I NODAĻA Eiropas Ekonomiskās kopienas lauku saimniecību grāmatvedības datu tīkla izveidošana", while Vanilla aligns it with "Euroopa Majandusühenduse põllumajandusliku raamatupidamise andmevõrgu loomine" and "Eiropas Ekonomiskās kopienas lauku saimniecību grāmatvedības datu tīkla izveidošana". Row 3: both versions align "1. To meet the needs of the common agricultural policy, there ..." with "Artikkel 1" and "1. pants". It can be clearly seen, even without knowing the languages used, that there is an almost direct correspondence between the Estonian and Latvian texts. The first and second pairs of Estonian-Latvian sentences in the Vanilla part match the second pair of sentences in the HunAlign part; on the other hand, both the Estonian and the Latvian parts form an analogous mismatch with the English part. In this case both the HunAlign and the Vanilla versions of the English-Estonian and English-Latvian parts contain alignment errors, however different ones.]

6 Conclusions and Future Work
We presented a method of automatic processing of overlapping parallel corpora. The method enables comparing corpora and finding mismatches in alignments, improving corpora quality both automatically and manually via post-processing, and combining the input into a common corpus without including duplicate entries. The method is insensitive to minor differences in the aligned sentences, or to large sections missing from one of the corpora. It also takes into consideration possible differences in the level of segmentation.

A set of experiments applying the method to the JRC-Acquis corpus was described. In the first experiment the Estonian-English part was combined with the parallel corpus of the University of Tartu. The results show that the latter has a higher level of segmentation but sometimes slightly lower alignment accuracy. Two common corpora were generated based on the two: one with the maximum-size criterion (193% of the UT corpus and 161% of JRC-Acquis) and another with the maximum-accuracy criterion (60% and 80% of the overlapping parts of the UT and JRC-Acquis corpora, respectively). In the rest of the experiments the method was applied to the two alternative alignment versions of the JRC-Acquis: the HunAlign and the Vanilla version. Language pairs between three languages were tested: English, Estonian and Latvian.
The results show that the Estonian-Latvian part of the corpus has a much higher number of matching sentence pairs (98% in both versions), which indicates good alignment quality.

Future work has several possibilities. Since the experiments were applied to the older version of JRC-Acquis, it would be interesting to process the newer and larger version of the corpus; this however requires the new HunAlign version to be released. Also the OPUS and Hunglish corpora can be experimented with, and the results of the first experiment can be used to manually post-process the corpus and correct the erroneous alignments. Finding the corpus parts with common source documents is an open issue in the general case.

Table 2: Output statistics of comparing the HunAlign and Vanilla alignments of JRC-Acquis (number of sentence pairs, in thousands)
 | En-Et HunAlign | En-Et Vanilla | En-Lv HunAlign | En-Lv Vanilla | Et-Lv HunAlign | Et-Lv Vanilla
Total | 301.6 | 295.2 | 304.0 | 295.7 | 322.4 | 321.6
Matched | 251.8 | 251.7 | 254.9 | 254.8 | 315.9 | 315.7
Single | 1.8 | 3.1 | 2.0 | 3.1 | 0.2 | 0.6
Mismatched alignments | 48.1 | 40.4 | 47.1 | 37.8 | 6.4 | 5.2

Footnotes:
(1) further referred to as JRC-Acquis
(2) http://www.cl.ut.ee/korpused/paralleel/, further referred to as UT
(3) http://nl.ijs.si/telri/Vanilla/
(4) http://ats.cs.ut.ee/smt/paralign/
(5) Special thanks to Zane Fishele for proofing the Estonian-Latvian and English-Latvian parts

References
H.-J. Kaalep and K. Veskis. 2007. Comparing parallel corpora and evaluating their quality. In Proceedings of MT Summit XI, pages 275-279, Copenhagen, Denmark.
P. Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceedings of MT Summit X, Phuket, Thailand.
R. Steinberger, B. Pouliquen, A. Widiger, C. Ignat, T. Erjavec, D. Tufiş, and D. Varga. 2006. The JRC-Acquis: A multilingual aligned parallel corpus with 20+ languages. In Proceedings of LREC'2006, pages 2142-2147, Genoa, Italy.
J. Tiedemann and L. Nygaard. 2004. The OPUS corpus - parallel & free. In Proceedings of LREC'2004, pages 1183-1186, Lisbon, Portugal.
D. Varga, P. Halácsy, A. Kornai, V. Nagy, L. Németh, and V. Trón. 2005. Parallel corpora for medium density languages. In Proceedings of RANLP-05, pages 590-596, Borovets, Bulgaria.
2,988,891
Chinese and Japanese Word Segmentation Using Word-Level and Character-Level Information
In this paper, we present a hybrid method for Chinese and Japanese word segmentation. Word-level information is useful for the analysis of known words, while character-level information is useful for the analysis of unknown words; the method utilizes both types of information in order to handle known and unknown words effectively. Experimental results show that this method achieves high overall accuracy in Chinese and Japanese word segmentation.
[ 9862757, 1845735, 333513, 8505552, 2776693, 725590, 5651543, 21821146 ]
Chinese and Japanese Word Segmentation Using Word-Level and Character-Level Information
Tetsuji Nakagawa (nakagawa378@oki.com)
Corporate Research and Development Center, Oki Electric Industry Co., Ltd.
2-5-7 Honmachi, Chuo-ku, 541-0053 Osaka, Japan

In this paper, we present a hybrid method for Chinese and Japanese word segmentation. Word-level information is useful for the analysis of known words, while character-level information is useful for the analysis of unknown words; the method utilizes both types of information in order to handle known and unknown words effectively. Experimental results show that this method achieves high overall accuracy in Chinese and Japanese word segmentation.

1 Introduction
Word segmentation in Chinese and Japanese is an important and difficult task. In these languages, words are not separated by explicit delimiters, and word segmentation must be conducted first in most natural language processing applications. One of the problems which makes word segmentation more difficult is the existence of unknown (out-of-vocabulary) words, defined as words that do not exist in a system's dictionary. The word segmentation system has no knowledge about these unknown words, and determining word boundaries for such words is difficult; the accuracy of word segmentation for unknown words is usually much lower than that for known words.

In this paper, we propose a hybrid method for Chinese and Japanese word segmentation which utilizes both word-level and character-level information. Word-level information is useful for the analysis of known words, and character-level information is useful for the analysis of unknown words. We use these two types of information at the same time to obtain high overall performance.

This paper is organized as follows: Section 2 describes previous work on Chinese and Japanese word segmentation on which our method is based. Section 3 introduces the hybrid method which combines word-level and character-level processing. Section 4 shows experimental results of Chinese and Japanese word segmentation. Section 5 discusses related work, and Section 6 gives the conclusion.

2 Previous Work on Word Segmentation
Our method is based on two existing methods for Chinese or Japanese word segmentation, which we explain in this section.

2.1 The Markov Model-Based Method
Word-based Markov models are used in English part-of-speech (POS) tagging (Charniak et al., 1993; Brants, 2000). This method identifies POS tags $T = t_1, \ldots, t_n$, given a sentence as a word sequence $W = w_1, \ldots, w_n$, where $n$ is the number of words in the sentence. The method assumes that each word has a state which is the same as the POS of the word and that the sequence of states is a Markov chain. A state $t$ transits to another state $s$ with probability $P(s|t)$, and outputs a word $w$ with probability $P(w|t)$. From these assumptions, the probability that the word sequence $W$ with parts-of-speech $T$ is generated is

$P(W,T) = \prod_{i=1}^{n} P(w_i t_i \mid w_0 t_0 \cdots w_{i-1} t_{i-1}) \approx \prod_{i=1}^{n} P(w_i|t_i)\, P(t_i|t_{i-1})$,   (1)

where $w_0$ ($t_0$) is a special word (part-of-speech) representing the beginning of the sentence. Given a word sequence $W$, its most likely POS sequence $\hat{T}$ can be found as follows:

$\hat{T} = \operatorname*{argmax}_T P(T|W) = \operatorname*{argmax}_T \frac{P(W,T)}{P(W)} = \operatorname*{argmax}_T P(W,T) \approx \operatorname*{argmax}_T \prod_{i=1}^{n} P(w_i|t_i)\, P(t_i|t_{i-1})$.   (2)

The equation above can be solved efficiently by the Viterbi algorithm (Rabiner and Juang, 1993).
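For illustration, a minimal Viterbi decoder for the model in Equation (2) follows; the tag set and the probability tables are tiny invented stand-ins, not values from the paper.

```python
# Minimal Viterbi decoding sketch for the word-level Markov model of
# Equation (2). The probability tables are illustrative stand-ins.
import math

TAGS = ["N", "V"]
P_trans = {("<s>", "N"): 0.7, ("<s>", "V"): 0.3,
           ("N", "N"): 0.4, ("N", "V"): 0.6,
           ("V", "N"): 0.8, ("V", "V"): 0.2}
P_emit = {("fish", "N"): 0.6, ("fish", "V"): 0.4,
          ("swim", "N"): 0.1, ("swim", "V"): 0.9}

def viterbi(words):
    """argmax_T prod_i P(w_i|t_i) P(t_i|t_{i-1}), via dynamic programming."""
    # delta[t] = best log-probability of any tag path ending in tag t
    delta = {t: math.log(P_trans[("<s>", t)] * P_emit[(words[0], t)])
             for t in TAGS}
    back = []                                   # backpointers per position
    for w in words[1:]:
        nxt, ptr = {}, {}
        for t in TAGS:
            best = max(TAGS, key=lambda s: delta[s] + math.log(P_trans[(s, t)]))
            nxt[t] = delta[best] + math.log(P_trans[(best, t)] * P_emit[(w, t)])
            ptr[t] = best
        delta, back = nxt, back + [ptr]
    # Trace back the best path from the best final tag.
    t = max(TAGS, key=delta.get)
    path = [t]
    for ptr in reversed(back):
        t = ptr[t]
        path.append(t)
    return list(reversed(path))

print(viterbi(["fish", "swim"]))   # -> ['N', 'V']
```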
In Chinese and Japanese, the method is used with some modifications. Because the words in a sentence are not separated explicitly, segmentation of words and identification of the parts-of-speech tags of the words must be done simultaneously. Given a sentence $S$, its most likely word sequence $\hat{W}$ and POS sequence $\hat{T}$ can be found as follows, where $W$ ranges over the possible segments of $S$ ($w_1 \cdots w_n = S$):

$(\hat{W}, \hat{T}) = \operatorname*{argmax}_{W,T} P(W,T|S) = \operatorname*{argmax}_{W,T} \frac{P(W,T,S)}{P(S)} = \operatorname*{argmax}_{W,T} P(W,T,S) = \operatorname*{argmax}_{W,T} P(W,T) \approx \operatorname*{argmax}_{W,T} \prod_{i=1}^{n} P(w_i|t_i)\, P(t_i|t_{i-1})$.   (3)

The equation above can be solved using the Viterbi algorithm as well. The possible segments of a given sentence are represented by a lattice, and Figure 1 shows an example. Given a sentence, this method first constructs such a lattice using a word dictionary, then chooses the best path which maximizes Equation (3). This Markov model-based method achieves high accuracy with low computational cost, and many Japanese word segmentation systems adopt it (Kurohashi and Nagao, 1998). However, the Markov model-based method has difficulty handling unknown words: in the process of constructing the lattice, only known words are dealt with, and unknown words must be handled with other methods. Many practical word segmentation systems add candidates of unknown words to the lattice using heuristic rules or statistical word models which predict the probabilities for any strings to be unknown words (Sproat et al., 1996; Nagata, 1999). However, such heuristic rules or word models must be carefully designed for a specific language, and it is difficult to properly process a wide variety of unknown words.

2.2 The Character Tagging Method
This method carries out word segmentation by tagging each character in a given sentence, where the tags indicate the word-internal positions of the characters. We call such tags position-of-character (POC) tags (Xue, 2003) in this paper. Several POC-tag sets have been studied (Sang and Veenstra, 1999; Sekine et al., 1998), and we use the 'B, I, E, S' tag set shown in Table 1.

Table 1: The 'B, I, E, S' POC tag set
Tag | Description
B | The character is in the beginning of a word.
I | The character is in the middle of a word.
E | The character is in the end of a word.
S | The character is itself a word.

Figure 2 shows an example of POC-tagging. The POC-tags can represent word boundaries for any sentence, so the word segmentation task can be reformulated as the POC-tagging task. The tagging task can be solved using general machine learning techniques such as maximum entropy (ME) models (Xue, 2003) and support vector machines (Yoshida et al., 2003; Asahara et al., 2003). This character tagging method can easily handle unknown words, because known words and unknown words are treated equally and no other exceptional processing is necessary. This approach is also used in base-NP chunking (Ramshaw and Marcus, 1995) and named entity recognition (Sekine et al., 1998), as well as word segmentation.
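To make the tag set concrete, here is a small sketch (not from the paper) that converts a segmented sentence into 'B, I, E, S' POC tags and back; for readability it uses Latin strings, with each letter standing in for a character.

```python
# Sketch of the 'B, I, E, S' position-of-character (POC) tagging scheme of
# Table 1: converting a segmented sentence to per-character tags and back.
def words_to_poc(words):
    """Tag every character of every word with B/I/E/S."""
    tags = []
    for w in words:
        if len(w) == 1:
            tags.append("S")
        else:
            tags.extend(["B"] + ["I"] * (len(w) - 2) + ["E"])
    return tags

def poc_to_words(chars, tags):
    """Recover the segmentation from a POC tag sequence."""
    words, cur = [], ""
    for c, t in zip(chars, tags):
        cur += c
        if t in ("E", "S"):       # a word boundary follows this character
            words.append(cur)
            cur = ""
    if cur:                       # tolerate a truncated tag sequence
        words.append(cur)
    return words

ws = ["word", "segmentation", "is", "fun"]
tags = words_to_poc(ws)
print(tags[:6])                                # ['B','I','I','E','B','I']
print(poc_to_words("".join(ws), tags) == ws)   # True
```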
3 Word Segmentation Using Word-Level and Character-Level Information
We saw the two methods for word segmentation in the previous section. It is observed that the Markov model-based method has high overall accuracy, but its accuracy drops for unknown words, while the character tagging method has high accuracy for unknown words but lower accuracy for known words (Yoshida et al., 2003; Xue, 2003; Sproat and Emerson, 2003). This seems natural, because words are used as the processing unit in the Markov model-based method, and therefore much information about known words (e.g., POS or word bigram probability) can be used; however, unknown words cannot be handled directly by this method itself. On the other hand, characters are used as the unit in the character tagging method. In general, the number of characters is finite and far smaller than that of words, which continuously increases. Thus the character tagging method may be robust for unknown words, but it cannot use information more detailed than character-level information. We therefore propose a hybrid method which combines the Markov model-based method and the character tagging method to make the most of word-level and character-level information, in order to achieve high overall accuracy.

3.1 A Hybrid Method
The hybrid method is mainly based on word-level Markov models, but both POC-tags and POS-tags are used at the same time, and word segmentation for known words and unknown words is conducted simultaneously. Figure 3 shows an example of the method, given a Japanese sentence in which a person's name is an unknown word. First, given a sentence, lattice nodes for known words are made as in the usual Markov model-based method. Next, for each character in the sentence, nodes of POC-tags (four nodes for each character) are made. Then the most likely path is searched (the thick line indicates the correct path in the example). Unknown words are identified by the nodes with POC-tags. Note that some transitions of states are not allowed (e.g. from I to B, or from any POS-tag to E), and such transitions are ignored. Because the basic Markov models in Equation (1) are not expressive enough, we use the following equation instead to estimate the probability of a path in the lattice more precisely:

$P(W,T) = \prod_{i=1}^{n} P(w_i t_i \mid w_0 t_0 \cdots w_{i-1} t_{i-1}) \approx \prod_{i=1}^{n} \{\lambda_1 P(w_i|t_i) P(t_i) + \lambda_2 P(w_i|t_i) P(t_i|t_{i-1}) + \lambda_3 P(w_i|t_i) P(t_i|t_{i-2} t_{i-1}) + \lambda_4 P(w_i t_i \mid w_{i-1} t_{i-1})\}$, $(\lambda_1 + \lambda_2 + \lambda_3 + \lambda_4 = 1)$.   (4)

The probabilities in the equation above are estimated from a word-segmented and POS-tagged corpus using the maximum-likelihood method, for example

$P(w_i|t_i) = \begin{cases} \frac{f(w_i, t_i)}{\sum_w f(w, t_i)} & (f(w_i, t_i) > 0), \\ \frac{0.5}{\sum_w f(w, t_i)} & (f(w_i, t_i) = 0), \end{cases}$   (5)

where $f(w,t)$ is the frequency with which the word $w$ with tag $t$ occurred in the training data. Unseen events in the training data are handled as if they occurred 0.5 times, for smoothing. $\lambda_1, \lambda_2, \lambda_3, \lambda_4$ are calculated by deleted interpolation, as described in (Brants, 2000). A word dictionary for a Markov model-based system is often constructed from a training corpus, in which case no unknown words exist in the training corpus. Therefore, when the parameters of the above probabilities are trained from a training corpus, words that appear only once in it are regarded as unknown words and decomposed into characters with POC-tags, so that statistics about unknown words are obtained. In order to handle various character-level features, we calculate word emission probabilities for POC-tags by Bayes' theorem:

$P(w_i|t_i) = \frac{P(t_i \mid w_i, t_i \in T_{POC})\, P(w_i, t_i \in T_{POC})}{P(t_i)} = \frac{P(t_i \mid w_i, t_i \in T_{POC}) \sum_{t \in T_{POC}} P(w_i, t)}{P(t_i)}$,   (6)

where $T_{POC} = \{B, I, E, S\}$, $w_i$ is a character and $t_i$ is a POC-tag.
In Equation (6), $P(t_i)$ and $P(w_i, t)$ are estimated by maximum likelihood, and the probability of a POC-tag $t_i$ given a character $w_i$, $P(t_i|w_i, t_i \in T_{POC})$, is estimated using ME models (Berger et al., 1996). We use the following features for the ME models, where $c_x$ is the $x$-th character in a sentence, $w_i = c_i$, and $y_x$ is the character type of $c_x$ (Table 2 shows the definition of the character types we used):

(1) Characters ($c_{i-2}$, $c_{i-1}$, $c_i$, $c_{i+1}$, $c_{i+2}$)
(2) Pairs of characters ($c_{i-2}c_{i-1}$, $c_{i-1}c_i$, $c_{i-1}c_{i+1}$, $c_ic_{i+1}$, $c_{i+1}c_{i+2}$)
(3) Character types ($y_{i-2}$, $y_{i-1}$, $y_i$, $y_{i+1}$, $y_{i+2}$)
(4) Pairs of character types ($y_{i-2}y_{i-1}$, $y_{i-1}y_i$, $y_{i-1}y_{i+1}$, $y_iy_{i+1}$, $y_{i+1}y_{i+2}$)

The ME parameters are trained using all the words in the training data. We use the Generalized Iterative Scaling algorithm (Darroch and Ratcliff, 1972) for parameter estimation, and features appearing 10 times or fewer in the training data are ignored in order to avoid overfitting. What our method does for unknown words can be interpreted as follows: the method examines all possible unknown words in a sentence, and the probability of an unknown word of length $k$, $w_i = c_j \cdots c_{j+k-1}$, is calculated as

$$P(w_i t_i|h) = \begin{cases} P(c_j\,\mathrm{S}|h) & (k = 1),\\ P(c_j\,\mathrm{B}|h)\,\prod_{l=j+1}^{j+k-2} P(c_l\,\mathrm{I}|h)\; P(c_{j+k-1}\,\mathrm{E}|h) & (k > 1), \end{cases} \quad (7)$$

where $h$ is the history of the sequence. In other words, the probability of an unknown word is approximated by the product of the probabilities of its component characters, and this calculation is done within the framework of the word-level Markov model-based method.

Experiments
This section gives experimental results for Chinese and Japanese word segmentation with the hybrid method. The following values are used to evaluate word segmentation performance:

R: recall (the number of correctly segmented words in the system's output divided by the number of words in the test data)
P: precision (the number of correctly segmented words in the system's output divided by the number of words in the system's output)
F: F-measure (F = 2 × R × P / (R + P))
R_known: recall for known words
R_unknown: recall for unknown words

Experiments on Chinese Word Segmentation
We use three Chinese word-segmented corpora: the Academia Sinica corpus (AS), the Hong Kong City University corpus (HK) and the Beijing University corpus (PK), all of which were used in the First International Chinese Word Segmentation Bakeoff (Sproat and Emerson, 2003) at ACL-SIGHAN 2003. The three corpora are word-segmented but carry no POS-tags, so we need to attach to each word a POS-tag (state), which the Markov model-based method requires. We attached a state to each word using the Baum-Welch algorithm (Rabiner and Juang, 1993), as used for Hidden Markov Models; the algorithm finds a locally optimal tag sequence maximizing Equation (1) in an unsupervised way. The initial states are randomly assigned, and the number of states is set to 64. We use the following systems for comparison:

Bakeoff-1, 2, 3: the top three systems that participated in the SIGHAN Bakeoff (Sproat and Emerson, 2003).
Maximum Matching: a word segmentation system using the well-known maximum matching method.
Character Tagging: a word segmentation system using the character tagging method; this system is almost the same as the one studied by Xue (2003).
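The Maximum Matching baseline and the word-level scoring just defined are both simple enough to sketch directly (our illustration, not the authors' code; the dictionary lookup and the `max_len` cap are assumptions of the sketch):

```python
def maximum_matching(sentence, dictionary, max_len=8):
    """Greedy left-to-right longest-match segmentation (the Maximum
    Matching baseline); uncovered characters become one-character words."""
    words, i = [], 0
    while i < len(sentence):
        for k in range(min(max_len, len(sentence) - i), 0, -1):
            if k == 1 or sentence[i:i + k] in dictionary:
                words.append(sentence[i:i + k])
                i += k
                break
    return words

def spans(words):
    """Turn a word sequence into character-offset spans for comparison."""
    out, pos = set(), 0
    for w in words:
        out.add((pos, pos + len(w)))
        pos += len(w)
    return out

def prf(gold_words, sys_words):
    """Word-level recall, precision and F-measure, as defined above:
    a system word counts as correct iff its span matches a gold span."""
    gold, sys = spans(gold_words), spans(sys_words)
    correct = len(gold & sys)
    r = correct / len(gold) if gold else 0.0
    p = correct / len(sys) if sys else 0.0
    f = 2 * r * p / (r + p) if r + p > 0 else 0.0
    return r, p, f
```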
For the Character Tagging baseline, the features described in Section 3.1 (1)-(4) and the following (5) are used to estimate the POC-tag of a character $c_i$, where $t_x$ is the POC-tag of the $x$-th character in the sentence:

(5) Unigram and bigram of the previous POC-tags ($t_{i-1}$, $t_{i-2}t_{i-1}$)

None of these systems, ours included, uses any knowledge or resources other than the training data. In these experiments, the word dictionaries used by the hybrid method and Maximum Matching are constructed from all the words in each training corpus. Statistics for the data are shown in Table 3, and the calculated values of $\lambda_i$ in Equation (4) are shown in Table 4. The results are shown in Table 5. Our system achieved the best F-measure values on all three corpora. Although the hybrid system's recall values for known words are not high compared to those of the SIGHAN Bakeoff participants, its recall values for known and unknown words are relatively well balanced. The results for Maximum Matching and Character Tagging show the trade-off between the word-based and character-based approaches discussed in Section 3: Maximum Matching is word-based and has higher recall for known words than Character Tagging on the HK and PK corpora, while Character Tagging is character-based and has the highest recall for unknown words on the AS, HK and PK corpora.

Experiments on Japanese Word Segmentation
We use the RWCP corpus, a Japanese word-segmented and POS-tagged corpus, and the following systems for comparison:

ChaSen: a word segmentation and POS-tagging system based on extended Markov models (Asahara and Matsumoto, 2000), which carries out unknown word processing using heuristic rules.
Maximum Matching: the same system as in the Chinese experiments.
Character Tagging: the same system as in the Chinese experiments.

As the dictionary for ChaSen, Maximum Matching and the hybrid method, we use IPADIC, which is distributed with ChaSen. Statistics for the data are shown in Table 3, and the calculated values of $\lambda_i$ in Equation (4) are shown in Table 4. The results are shown in Table 6³. Compared to ChaSen, the hybrid method has a comparable F-measure value and a higher recall value for unknown words (the difference is statistically significant at the 95% confidence level). Character Tagging has the highest recall for unknown words, as in the Chinese experiments.

³ In this evaluation, R_known and R_unknown are calculated considering words in the dictionary as known words; words which are in the training corpus but not in the dictionary are treated as unknown words in the calculations. The number of known/unknown words of the RWCP corpus shown in Table 3 is calculated in the same way.

Discussion
Several studies have been conducted on word segmentation and unknown word processing. Xue (2003) studied Chinese word segmentation using the character tagging method. As seen in the previous section, this method handles known and unknown words in the same way, based on character-level information. Our experiments showed that the method has quite high accuracy for unknown words, but its accuracy for known words tends to be lower than that of other methods. Uchimoto et al. (2001) studied Japanese word segmentation using ME models. Although their method is word-based, no word dictionary is used directly, and known and unknown words are handled in the same way: the method estimates how likely a string is to be a word using ME and, given a sentence, estimates this probability for every substring. Word segmentation is then conducted by finding the division of the sentence which maximizes the product of the probabilities that each substring is a word.
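A sketch of that word-based search (ours; `word_prob` stands in for their ME word model, and the five-character cap matches the limit they imposed, discussed next):

```python
import math

def best_division(sentence, word_prob, max_len=5):
    """Dynamic program for the word-based method sketched above: choose
    the division of the sentence maximizing the product of per-substring
    word probabilities. word_prob(s) is an assumed callable returning
    the model's estimate that string s is a word."""
    n = len(sentence)
    best = [(-math.inf, None)] * (n + 1)
    best[0] = (0.0, None)
    for end in range(1, n + 1):
        for start in range(max(0, end - max_len), end):
            if best[start][0] == -math.inf:
                continue
            p = word_prob(sentence[start:end])
            if p <= 0.0:
                continue
            cand = best[start][0] + math.log(p)
            if cand > best[end][0]:
                best[end] = (cand, start)
    words, pos = [], n
    while pos > 0:
        start = best[pos][1]
        if start is None:
            return None          # no covering division found
        words.append(sentence[start:pos])
        pos = start
    return words[::-1]
```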
Compared to our method, their method can handle some types of word-level features for unknown words, such as "the word starts with an alphabetic character and ends with a numeral" or "the word consists of four characters". Our method cannot handle such word-level features, because unknown words are handled with the character as the unit. On the other hand, their method appears to have a computational cost problem: unknown words are processed with the word as the unit, and the number of unknown-word candidates in a sentence of n characters is n(n + 1)/2. In practice they did not consider every substring, limiting substrings to at most five characters. In our method, the number of POC-tagged character nodes necessary for unknown word processing is 4n, and there is no limit on the length of unknown words.

Asahara et al. (2003) studied Chinese word segmentation based on a character tagging method with support vector machines. They preprocess a given sentence with a word segmenter based on Markov models and use its output as features for character tagging. Their method is thus a character-based method incorporating word-level information, the reverse of our approach. They did not use some of the features we used, such as character types, and our method achieved higher accuracies than theirs on the AS, HK and PK corpora (Asahara et al., 2003).

Conclusion
In this paper, we presented a hybrid method for word segmentation which utilizes both word-level and character-level information to obtain high accuracy for known and unknown words. The method combines two existing methods, the Markov model-based method and the character tagging method. Experimental results showed that the method achieves high accuracy compared to other state-of-the-art methods in both Chinese and Japanese word segmentation. The method can conduct POS tagging for known words as well as word segmentation; tagging the identified unknown words is left as future work.

Figure 1: Example of the lattice used in the Markov model-based method.
Figure 2: Example of the character tagging method: word boundaries are indicated by vertical lines ('|').
Figure 3: Example of the hybrid method.
Table 2: Character Types.

Table 3: Statistical Information of Corpora.
  Corpus  # Training Words  # Testing Words (known/unknown)  # Words in Dictionary  Rate of Unknown Words
  AS      5,806,611         11,985 (11,727/258)              146,212                0.0215
  HK      239,852           34,955 (32,463/2,492)            23,747                 0.0713
  PK      1,121,017         17,194 (16,005/1,189)            55,226                 0.0692
  RWCP    840,879           93,155 (93,085/70)               315,602                0.0008

Table 4: Calculated Values of λ_i.

Table 5: Performance of Chinese Word Segmentation.
  Corpus  System             R      P      F      R_known  R_unknown
  AS      Hybrid method      0.973  0.971  0.972  0.979    0.717
  AS      Bakeoff-1          0.966  0.956  0.961  0.980    0.364
  AS      Bakeoff-2          0.961  0.958  0.959  0.966    0.729
  AS      Bakeoff-3          0.944  0.945  0.945  0.952    0.574
  AS      Maximum Matching   0.917  0.912  0.915  0.938    0.000
  AS      Character Tagging  0.962  0.959  0.960  0.966    0.744
  HK      Hybrid method      0.951  0.948  0.950  0.969    0.715
  HK      Bakeoff-1          0.947  0.934  0.940  0.972    0.625
  HK      Bakeoff-2          0.940  0.908  0.924  0.980    0.415
  HK      Bakeoff-3          0.917  0.915  0.916  0.936    0.670
  HK      Maximum Matching   0.908  0.830  0.867  0.975    0.037
  HK      Character Tagging  0.917  0.917  0.917  0.932    0.728
  PK      Hybrid method      0.957  0.952  0.954  0.970    0.774
  PK      Bakeoff-1          0.962  0.940  0.951  0.979    0.724
  PK      Bakeoff-2          0.955  0.938  0.947  0.976    0.680
  PK      Bakeoff-3          0.955  0.938  0.946  0.977    0.647
  PK      Maximum Matching   0.930  0.883  0.906  0.974    0.020
  PK      Character Tagging  0.932  0.931  0.931  0.943    0.786

Table 6: Performance of Japanese Word Segmentation.
  Corpus  System             R      P      F      R_known  R_unknown
  RWCP    Hybrid method      0.993  0.994  0.993  0.993    0.586
  RWCP    ChaSen             0.991  0.992  0.991  0.991    0.243
  RWCP    Maximum Matching   0.880  0.918  0.898  0.880    0.100
  RWCP    Character Tagging  0.972  0.968  0.970  0.972    0.629

¹ The 'B, I, E, S' tags are also called 'OP-CN, CN-CN, CN-CL, OP-CL' tags (Sekine et al., 1998) or 'LL, MM, RR, LR' tags (Xue, 2003).
² As described in Equation (5), we used the additive smoothing method, which is simple and easy to implement. Although there are more sophisticated methods such as Good-Turing smoothing, they may not necessarily perform well here, because this operation changes the distribution of words.

Acknowledgements
This work was supported by a grant from the National Institute of Information and Communications Technology of Japan.

References
Masayuki Asahara and Yuji Matsumoto. 2000. Extended Models and Tools for High-performance Part-of-Speech Tagger. In Proceedings of the 18th International Conference on Computational Linguistics, pages 21-27.
Masayuki Asahara, Chooi Ling Goh, Xiaojie Wang, and Yuji Matsumoto. 2003. Combining Segmenter and Chunker for Chinese Word Segmentation. In Proceedings of the 2nd SIGHAN Workshop on Chinese Language Processing, pages 144-147.
Adam L. Berger, Stephen A. Della Pietra, and Vincent J. Della Pietra. 1996. A Maximum Entropy Approach to Natural Language Processing. Computational Linguistics, 22(1):39-71.
Thorsten Brants. 2000. TnT - A Statistical Part-of-Speech Tagger. In Proceedings of ANLP-NAACL 2000, pages 224-231.
Eugene Charniak, Curtis Hendrickson, Neil Jacobson, and Mike Perkowitz. 1993. Equations for Part-of-Speech Tagging. In Proceedings of the Eleventh National Conference on Artificial Intelligence, pages 784-789.
J. Darroch and D. Ratcliff. 1972. Generalized iterative scaling for log-linear models. The Annals of Mathematical Statistics, 43(5):1470-1480.
Sadao Kurohashi and Makoto Nagao. 1998. Japanese Morphological Analysis System JUMAN version 3.61. Department of Informatics, Kyoto University. (in Japanese).
Yuji Matsumoto and Masayuki Asahara. 2001. IPADIC User's Manual version 2.2.4. Nara Institute of Science and Technology. (in Japanese).
Yuji Matsumoto, Akira Kitauchi, Tatsuo Yamashita, Yoshitaka Hirano, Hiroshi Matsuda, Kazuma Takaoka, and Masayuki Asahara. 2001. Morphological Analysis System ChaSen version 2.2.8 Manual. Nara Institute of Science and Technology.
Masaki Nagata. 1999. A Part of Speech Estimation Method for Japanese Unknown Words using a Statistical Model of Morphology and Context. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 227-284.
Lawrence R. Rabiner and Biing-Hwang Juang. 1993. Fundamentals of Speech Recognition. PTR Prentice-Hall.
Lance Ramshaw and Mitch Marcus. 1995. Text Chunking using Transformation-Based Learning. In Proceedings of the 3rd Workshop on Very Large Corpora, pages 88-94.
Erik F. Tjong Kim Sang and Jorn Veenstra. 1999. Representing Text Chunks. In Proceedings of the 9th Conference of the European Chapter of the Association for Computational Linguistics, pages 173-179.
Satoshi Sekine, Ralph Grishman, and Hiroyuki Shinnou. 1998. A Decision Tree Method for Finding and Classifying Names in Japanese Texts. In Proceedings of the 6th Workshop on Very Large Corpora, pages 171-177.
Richard Sproat and Thomas Emerson. 2003. The First International Chinese Word Segmentation Bakeoff. In Proceedings of the Second SIGHAN Workshop on Chinese Language Processing, pages 133-143.
Richard Sproat, Chilin Shih, William Gale, and Nancy Chang. 1996. A Stochastic Finite-State Word-Segmentation Algorithm for Chinese. Computational Linguistics, 22(3):377-404.
Kiyotaka Uchimoto, Satoshi Sekine, and Hitoshi Isahara. 2001. The Unknown Word Problem: a Morphological Analysis of Japanese Using Maximum Entropy Aided by a Dictionary. In Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing, pages 91-99.
Nianwen Xue. 2003. Chinese Word Segmentation as Character Tagging. International Journal of Computational Linguistics and Chinese Language Processing, 8(1):29-48.
Tatsumi Yoshida, Kiyonori Ohtake, and Kazuhide Yamamoto. 2003. Performance Evaluation of Chinese Analyzers with Support Vector Machines. Journal of Natural Language Processing, 10(1):109-131. (in Japanese).
27,773,855
Sequential Dialogue Context Modeling for Spoken Language Understanding
Spoken Language Understanding (SLU) is a key component of goal oriented dialogue systems that would parse user utterances into semantic frame representations. Traditionally SLU does not utilize the dialogue history beyond the previous system turn and contextual ambiguities are resolved by the downstream components. In this paper, we explore novel approaches for modeling dialogue context in a recurrent neural network (RNN) based language understanding system. We propose the Sequential Dialogue Encoder Network, that allows encoding context from the dialogue history in chronological order. We compare the performance of our proposed architecture with two context models, one that uses just the previous turn context and another that encodes dialogue context in a memory network, but loses the order of utterances in the dialogue history. Experiments with a multi-domain dialogue dataset demonstrate that the proposed architecture results in reduced semantic frame error rates.
[ 2570492 ]
Sequential Dialogue Context Modeling for Spoken Language Understanding
Proceedings of the SIGDIAL 2017 Conference, Saarbrücken, Germany, August 2017. Association for Computational Linguistics.
Ankur Bapna (ankurbpn@google.com), Dilek Hakkani-Tür, Gokhan Tür (Google Research, Mountain View), Larry Heck (larry.heck@ieee.org)

Spoken Language Understanding (SLU) is a key component of goal oriented dialogue systems that would parse user utterances into semantic frame representations. Traditionally SLU does not utilize the dialogue history beyond the previous system turn, and contextual ambiguities are resolved by the downstream components. In this paper, we explore novel approaches for modeling dialogue context in a recurrent neural network (RNN) based language understanding system. We propose the Sequential Dialogue Encoder Network, which allows encoding context from the dialogue history in chronological order. We compare the performance of our proposed architecture with two context models, one that uses just the previous turn context and another that encodes dialogue context in a memory network but loses the order of utterances in the dialogue history. Experiments with a multi-domain dialogue dataset demonstrate that the proposed architecture results in reduced semantic frame error rates.

Introduction
Goal oriented dialogue systems help users with accomplishing tasks, like making restaurant reservations or booking flights, by interacting with them in natural language. The capability to understand user utterances and break them down into task specific semantics is a key requirement for these systems. This is accomplished in the spoken language understanding module, which typically parses user utterances into semantic frames, composed of domains, intents and slots (Tur and De Mori, 2011), that can then be processed by downstream dialogue system components. An example semantic frame is shown for a restaurant reservation related query in Figure 1. As the complexity of the task supported by a dialogue system increases, there is a need for increased back and forth interaction between the user and the agent. For example, a restaurant reservation task might require the user to specify a restaurant name, date, time and the number of people required for the reservation; additionally, based on reservation availability, the user might need to negotiate the date, time, or any other attribute with the agent. This puts the burden of parsing in-dialogue contextual user utterances on the language understanding module. The complexity increases further when the system supports more than one task and the user is allowed to have goals spanning multiple domains within the same dialogue. Natural language utterances are often ambiguous, and context from previous user and system turns can help resolve the errors arising from these ambiguities. In this paper, we explore approaches to improve dialogue context modeling within a Recurrent Neural Network (RNN) based spoken language understanding system. We propose a novel model architecture to improve dialogue context modeling for spoken language understanding on a multi-domain dialogue dataset.
The proposed architecture is an extension of Hierarchical Recurrent Encoder Decoders (HRED) (Sordoni et al., 2015), where we combine the query level encodings with a representation of the current utterance before feeding it into the session level encoder. We compare the performance of this model to an RNN tagger injected with just the previous turn context and to a single hop memory network that uses an attention weighted combination of the dialogue context (Chen et al., 2016; Weston et al., 2014). Furthermore, we describe a dialogue recombination technique to enhance the complexity of the training dataset by injecting synthetic domain switches, to create a better match with the mixed domain dialogues in the test dataset. This is, in principle, a multi-turn extension of (Jia and Liang, 2016): instead of inducing and composing grammars to synthetically enhance single turn text, we combine single domain dialogue sessions into multi-domain dialogues to provide richer context during training.

Related Work
The task of understanding a user utterance is typically broken down into 3 tasks: domain classification, intent classification and slot-filling (Tur and De Mori, 2011). Most modern approaches to spoken language understanding involve training machine learning models on labeled training data (Young, 2002; Hahn et al., 2011; Wang et al., 2005, among others). More recently, recurrent neural network (RNN) based approaches have been shown to perform exceedingly well on spoken language understanding tasks (Mesnil et al., 2015; Kurata et al., 2016, among others). RNN based approaches have also been applied successfully to other tasks for dialogue systems, like dialogue state tracking (Henderson, 2015; Henderson et al., 2014; Perez and Liu, 2016, among others), policy learning (Su et al., 2015) and system response generation (Wen et al., 2015, 2016). In parallel, joint modeling of tasks and the addition of contextual signals have been shown to result in performance gains for several applications. Modeling domain, intent and slots in a joint RNN model was shown to reduce overall frame error rates (Hakkani-Tür et al., 2016). Joint modeling of intent classification and language modeling showed promising improvements in intent recognition, especially in the presence of noisy speech recognition (Liu and Lane, 2016). Similarly, models incorporating more context from the dialogue history or semantic context from the frame (Dauphin et al., 2014; Bapna et al., 2017) tend to outperform models without context and have shown potential for greater generalization on spoken language understanding and related tasks. (Dhingra et al., 2016) show improved performance of an informational dialogue agent by incorporating knowledge base context into their dialogue system. Using dialogue context was shown to boost performance for end to end dialogue (Bordes and Weston, 2016) and next utterance prediction (Serban et al., 2015). In the next few sections, we describe the proposed model architecture, the dataset and our dialogue recombination approach, followed by experimental results and analysis.

Model Architecture
We compare the performance of 3 model architectures for encoding dialogue context on a multi-domain dialogue dataset. Let the dialogue be a sequence of system and user utterances D_t = {u_1, u_2, ..., u_t}; at time step t we are trying to output the parse of the user utterance u_t, given D_t. Let any utterance u_k be a sequence of tokens {x^k_1, x^k_2, ..., x^k_{n_k}}. We divide the model into 2 components: the context encoder, which acts on D_t to produce a vector representation of the dialogue context denoted by h_t = H(D_t), and the tagger, which takes the dialogue context encoding h_t and the current utterance u_t as input and produces the domain, intent and slot annotations as output.
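Read as an interface, the decomposition looks like the following sketch (ours, with assumed names; each concrete context encoder in the next section supplies `encode`):

```python
from typing import List, Tuple
import numpy as np

Utterance = List[str]                 # tokens x^k_1 ... x^k_{n_k} of one turn
Frame = Tuple[str, str, List[str]]    # (domain, intent, per-token IOB slot tags)

class ContextEncoder:
    """H(.): maps the dialogue history D_t (turns before u_t) to h_t."""
    def encode(self, history: List[Utterance]) -> np.ndarray:
        raise NotImplementedError     # Eq. (1), (4) or (6), depending on variant

class Tagger:
    """Parses u_t into a semantic frame, conditioned on h_t (Eqs. 7-10)."""
    def parse(self, h_t: np.ndarray, u_t: Utterance) -> Frame:
        raise NotImplementedError
```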
Context Encoder Architectures
In this section we describe the architectures of the context encoders used in our experiments. We compare the performance of 3 different architectures that encode varying amounts of dialogue context.

Previous Utterance Encoder
This is the baseline context encoder architecture. We feed the embeddings corresponding to tokens of the previous system utterance, u_{t-1} = {x^{t-1}_1, x^{t-1}_2, ..., x^{t-1}_{n_{t-1}}}, into a single Bidirectional RNN (BiRNN) layer with Gated Recurrent Unit (GRU) (Chung et al., 2014) cells and 128 dimensions (64 in each direction). The embeddings are shared with the tagger. The final state of the context encoder GRU is used as the dialogue context:

h_t = BiGRU_c(u_{t-1})    (1)

Memory Network
This architecture is identical to the approach described in (Chen et al., 2016). We encode all dialogue context utterances, {u_1, u_2, ..., u_{t-1}}, into memory vectors {m_1, m_2, ..., m_{t-1}} using a Bidirectional GRU (BiGRU) encoder with 128 dimensions (64 in each direction). To add temporal context to the dialogue history utterances, we append special positional tokens to each utterance:

m_k = BiGRU_m(u_k)  for 0 ≤ k ≤ t-1    (2)

We also encode the current utterance, with another BiGRU encoder with 128 dimensions (64 in each direction), into a context vector c, as in Equation (3). This is conceptually depicted in Figure 2.

c = BiGRU_c(u_t)    (3)

Let M be the matrix whose i-th row is m_i. We compute the cosine similarity between each memory vector m_i and the context vector c; the softmax of these similarities is used as an attention distribution over the memory M, and the attention-weighted sum of M gives the dialogue context vector h_t (Equation 4). This is conceptually depicted in Figure 3.

a = softmax(Mc),  h_t = a^T M    (4)

Sequential Dialogue Encoder Network
We enhance the memory network architecture described above by adding a session encoder that temporally combines a joint representation of the current utterance encoding c (Eq. 3) and the memory vectors {m_1, m_2, ..., m_{t-1}} (Eq. 2). We combine the context vector c with each memory vector m_k by concatenating them and passing the result through a feed-forward layer (FF), producing 128-dimensional context encodings {g_1, g_2, ..., g_{t-1}} (Eq. 5):

g_k = sigmoid(FF(m_k, c))  for 0 ≤ k ≤ t-1    (5)

These context encodings are fed as token-level inputs into the session encoder, a 128-dimensional BiGRU layer. The final state of the session encoder represents the dialogue context encoding h_t (Eq. 6):

h_t = BiGRU_s({g_1, g_2, ..., g_{t-1}})    (6)

The architecture is depicted in Figure 4.
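The two pooling strategies differ only in how the memories are combined. A numpy sketch (ours; `W` and `b` stand for the assumed weights of the shared feed-forward layer, and the BiGRU encoders themselves are elided):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_context(M, c):
    """Eq. (4): attention-weighted sum of memories.
    M: (t-1, d) matrix of memory vectors; c: (d,) current-utterance encoding."""
    a = softmax(M @ c)          # attention over history utterances
    return a @ M                # h_t, a (d,) dialogue context vector

def sden_inputs(M, c, W, b):
    """Eq. (5): per-memory combination feeding the session-level BiGRU.
    The g_k returned here would then be run through BiGRU_s (Eq. 6),
    whose final state, unlike the order-free sum above, preserves the
    chronological order of the history."""
    g = []
    for m in M:
        z = W @ np.concatenate([m, c]) + b
        g.append(1.0 / (1.0 + np.exp(-z)))   # sigmoid
    return np.stack(g)          # (t-1, d'), inputs to the session encoder
```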
Tagger Architecture
For all our experiments we use a stacked BiRNN tagger to jointly model domain classification, intent classification and slot-filling, similar to the approach described in (Hakkani-Tür et al., 2016). We feed learned 256-dimensional embeddings corresponding to the current utterance tokens into the tagger. The first RNN layer uses GRU cells with 256 dimensions (128 in each direction), as in Equation (7); the token embeddings are fed into the token-level inputs of the first RNN layer to produce the token-level outputs o^1 = {o^1_1, o^1_2, ..., o^1_{n_t}}:

o^1 = BiGRU_1(u_t)    (7)

The second layer uses Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) cells with 256 dimensions (128 in each direction); we use an LSTM-based second layer since it improved slot-filling performance on the validation set for all architectures. The token-level outputs of the first RNN layer, o^1, are fed as input into the second RNN layer to produce the token-level outputs o^2 = {o^2_1, o^2_2, ..., o^2_{n_t}} and the final state s^2. We apply dropout to the outputs of both layers. The initial states of both the forward and backward LSTMs of the second tagger layer are initialized with the dialogue encoding h_t, as in Equation (8):

o^2, s^2 = BiLSTM_2(o^1, h_t)    (8)

The final state of the second layer, s^2, is used as input to the classification layers for domain and intent classification:

p_domain = softmax(U s^2),  p_intent = sigmoid(V s^2)    (9)

The token-level outputs of the second layer, o^2, are used as input to a softmax layer that outputs the IOB slot labels; this results in a softmax layer with 2N + 1 dimensions for a domain with N slots:

p^slot_i = softmax(S o^2_i)  for 0 ≤ i ≤ n_t    (10)

The architecture is depicted in Figure 5.

Figure 5: Architecture of the stacked BiRNN tagger. The dialogue context obtained from the context encoder is fed into the initial states of the second RNN layer.
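The three output heads of Equations (9)-(10) in a numpy sketch (ours; `U`, `V` and `S` are assumed projection matrices of the right shapes):

```python
import numpy as np

def tagger_outputs(o2, s2, U, V, S):
    """Output layers of the joint tagger, Eqs. (9)-(10).
    o2: (n_t, d) token-level states of the second layer; s2: (d2,) its
    final state."""
    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()
    p_domain = softmax(U @ s2)                    # one domain per utterance
    p_intent = 1.0 / (1.0 + np.exp(-(V @ s2)))    # sigmoid: intents may co-occur
    p_slots = np.stack([softmax(S @ o) for o in o2])  # one IOB label per token
    return p_domain, p_intent, p_slots
```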
Dataset
We crowd-sourced multi-turn dialogue sessions for 3 tasks: buying movie tickets, searching for a restaurant and reserving tables at a restaurant. Our data collection process comprises two steps: (i) generating user-agent interactions comprising dialogue acts and slots, based on the interplay of a simulated user and a rule-based dialogue policy, and (ii) using a crowd-sourcing platform to elicit natural language utterances that align with the semantics of the generated interactions. The goal of the spoken language understanding module of our dialogue system is to map each user utterance into frame-based semantics that can be processed by the downstream components. Tables describing the intents and slots present in the dataset can be found in the appendix. We use a stochastic agenda-based user simulator (Schatzmann et al., 2007; Shah et al., 2016) for interplay with our rule-based system policy. The user goal is specified as a tuple of slots, which denote the user constraints; some constraints might be unspecified, in which case the user is indifferent to the value of those slots. At any given turn, the simulator samples a user dialogue act from a set of acceptable actions based on (i) the user goal and agenda, which includes slots that still need to be specified, (ii) a randomly chosen user profile (co-operative/aggressive, verbose/succinct etc.) and (iii) the previous user and system actions. Based on the chosen user dialogue act, the rule-based policy might make a backend call to inquire about restaurant or movie availability. Based on the user act and the backend response, the system responds with a dialogue act or a combination of dialogue acts, according to a hand-designed rule-based policy. These generated interactions were then translated to their natural language counterparts and sent out to crowdworkers for paraphrasing into natural language human-machine dialogues. The simulator and policy were also extended to handle multiple goals spanning different domains: in this set-up, the user goal for the simulator includes multiple tasks, and slot values can be conditioned on the previous task; for example, the simulator would ask for booking a table "after the movie", or search for a restaurant "near the theater". The set of slots supported by the simulator is enumerated in Table 1.

Table 1: List of attributes supported for each domain.
  Domain              Attributes
  movies              date, movie, num tickets, theatre name, time
  find-restaurants    category, location, meal, price range, rating, restaurant name
  reserve-restaurant  date, num people, restaurant name, time

We collected 1319 dialogues for restaurant reservation, 976 dialogues for finding restaurants and 1048 dialogues for buying movie tickets. All single domain datasets were used for training. The multi-domain simulator was used to collect 467 dialogues for training, 50 for validation and 273 for the test set. Since the natural language dialogues were paraphrased versions of known dialogue-act and slot combinations, they were automatically labeled; these labels were verified by an expert annotator, and turns with missing annotations were manually annotated by the expert.

Dialogue Recombination
The key idea behind the recombination approach is the conditional independence of sub-dialogues aimed at performing distinct tasks (Grosz and Sidner, 1986). We exploit the presence of task intents, i.e. intents that denote a switch in the primary task the user is trying to perform, since they are a strong indicator of a switch in the focus of the dialogue, and we exploit the independence of the sub-dialogue following these intents from the previous dialogue context to generate synthetic dialogues with multi-domain context. The recombination process is as follows. Let a dialogue d be defined as a sequence of turns and corresponding semantic labels (domain, intent and slot annotations) {(t_{d1}, f_{d1}), (t_{d2}, f_{d2}), ..., (t_{dn_d}, f_{dn_d})}. To obtain a recombined dataset composed of dialogues from dataset_1 and dataset_2, we repeat the following steps 10000 times for each combination of (dataset_1, dataset_2) drawn from the three single domain datasets:

• Sample dialogues x and y from dataset_1 and dataset_2 respectively.
• Find the first user utterance labeled with a task intent in y; let this be turn l.
• Randomly sample an insertion point in dialogue x; let this be turn k.
• The new recombined dialogue is {(t_{x1}, f_{x1}), ..., (t_{xk}, f_{xk}), (t_{yl}, f_{yl}), ..., (t_{yn_y}, f_{yn_y})}.

A sample dialogue generated using the above procedure is shown in Table 2. We drop the utterances from dialogue x following the insertion point (turn k), since these turns become ambiguous or confusing in the absence of the preceding context; in that sense, our approach is one of partial dialogue recombination. A sketch of one recombination draw is given below.

Table 2: A sample dialogue obtained from recombining a dialogue from the movies and find-restaurants datasets.
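The following is a minimal sketch of one recombination draw (our illustration; dialogues are assumed to be lists of (turn, frame) pairs whose frames expose an `intent` field, and in this data only user turns carry task intents):

```python
import random

def recombine(x, y, task_intents):
    """One draw of the recombination procedure above: truncate dialogue x
    at a random insertion point k, then append the suffix of dialogue y
    starting at its first turn whose frame carries a task intent (turn l).
    Returns None if y contains no task-intent turn."""
    l = next((i for i, (turn, frame) in enumerate(y)
              if frame.intent in task_intents), None)
    if l is None:
        return None
    k = random.randrange(len(x) + 1)      # insertion point in x
    return x[:k] + y[l:]                  # turns of x after k are dropped
```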
Experiments
We compare the domain classification, intent classification and slot-filling performances, and the overall frame error rates, of the encoder-decoder, memory network and sequential dialogue encoder network on the dataset described above. The frame error rate of an SLU system is the percentage of utterances on which it makes a wrong prediction, i.e. where any of the domain, intent or slots is predicted incorrectly. We trained all 3 models with RMSProp for 100000 training steps with a batch size of 100, starting with a learning rate of 0.0003 decayed by a factor of 0.95 every 3000 steps; gradient norms were clipped when they exceeded a magnitude of 2.5. All model and optimization hyper-parameters were chosen by grid search to minimize validation set frame error rates. We restrict the model vocabularies to tokens occurring more than 10 times in the training set, to prevent over-fitting to training set entities; digits were replaced with a special "#" token to allow better generalization to unseen numbers. The dialogue history was padded to 40 utterances for batch processing. We report results with and without the recombined dataset in Table 3.

Results
The encoder decoder model trained on just the previous turn context performs worst on almost all metrics, irrespective of the presence of recombined data. This can be explained by worse performance on in-dialogue utterances, where just the previous turn context isn't sufficient to accurately identify the domain and, in several cases, the intents and slots of the utterance. The memory network is the best performing model in the absence of recombined data, indicating that the model is able to encode additional context effectively to improve performance on all tasks, even when only a small amount of multi-domain data is available. The Sequential Dialogue Encoder Network performs slightly worse than the memory network in the absence of recombined data; this could be explained by the model over-fitting to the single domain context seen during training and failing to utilize context effectively in a multi-domain setting. In the presence of recombined dialogues it outperforms all other implementations. Apart from increasing the noise in the dialogue context, adding recombined dialogues to the training set increases the average turn length of the training data, bringing it closer to that of the test dialogues. Our augmentation approach is, in spirit, an extension of the data recombination described in (Jia and Liang, 2016) to conversations. We hypothesize that the presence of synthetic context has a regularization-like effect on the models; similar effects were observed by (Jia and Liang, 2016), where training with longer, synthetically-augmented utterances resulted in improved semantic parsing performance on a simpler test set.

Table 4: Dialogue from the test set with predictions from the Encoder Decoder with recombined data (ED+DR), Memory Network with recombined data (MN+DR) and Sequential Dialogue Encoder Network with dialogue recombination (SDEN+DR). Italicized tokens in the dialogue were out of vocabulary or replaced with special tokens. The columns to the right of the dialogue history give the attention distributions; for SDEN+DR, we use the magnitude of the change in the session GRU state as a proxy for the attention distribution. Attention weights might not sum to 1 if there is non-zero attention on history padding.
  utterance                                                      MN+DR  SDEN+DR
  hi!                                                            0.00   0.13
  hello ! i want to buy movie tickets for 8 pm at cinelux plaza  0.05   0.34
  which movie , how many , and what day ?                        0.13   0.24
  Trolls , 6 tickets for today

                True               ED+DR       MN+DR       SDEN+DR
  Domain        buy-movie-tickets  movies      movies      movies
  Intent        contextual         contextual  contextual  contextual
  date          today              today       today       today
  num tickets   6                  6           6           6
  movie         Trolls             Trolls      -           Trolls

Table 4 demonstrates an example dialogue from the test set, along with the gold and model annotations from all 3 models. We observe that the Encoder Decoder (ED) and Sequential Dialogue Encoder Network (SDEN) are able to successfully identify the domain, intent and slots, while the Memory Network (MN) fails to identify the movie name.

Discussion and Conclusions
Looking at the attention distributions, we notice that the MN attention is very diffuse, whereas SDEN focuses on the most recent 2 utterances, which directly identify the domain and the presence of the movie slot in the final user utterance.
ED is also able to identify the presence of a movie in the final user utterance from the previous utterance context. Table 5 displays another example, where the SDEN model outperforms both MN and ED.

Table 5: Dialogue from the test set with predictions from the Encoder Decoder with recombined data (ED+DR), Memory Network with recombined data (MN+DR) and Sequential Dialogue Encoder Network with dialogue recombination (SDEN+DR). Italicized tokens in the dialogue were out of vocabulary or replaced with special tokens. The columns to the right of the dialogue history give the attention distributions; for SDEN+DR, we use the magnitude of the change in the session GRU state as a proxy for the attention distribution. Attention weights might not sum to 1 if there is non-zero attention on history padding.

Constrained to just the previous utterance, ED is unable to correctly identify the domain of the user utterance. The MN model correctly identifies the domain, using its strong focus on the task-intent bearing utterance, but is unable to identify the presence of a restaurant in the user utterance; this highlights its failure to combine context from multiple history utterances. On the other hand, as indicated by its attention distribution over the final two utterances, SDEN is able to successfully combine context from the dialogue to correctly identify the domain and the restaurant name from the user utterance, despite the presence of several out-of-vocabulary tokens. These two examples hint that SDEN performs better in scenarios where multiple history utterances encode complementary information useful for interpreting user utterances, as is usually the case in more natural goal oriented dialogues, where several tasks and sub-tasks go in and out of the focus of the conversation (Grosz, 1979). On the other hand, we also observed that SDEN performs significantly worse in the absence of recombined data: with its more complex architecture and much larger set of parameters, SDEN is prone to over-fitting in low data scenarios.

In this paper, we collect a multi-domain dataset of goal oriented human-machine conversations and analyze and compare the SLU performance of multiple neural network based model architectures that can encode varying amounts of context. Our experiments suggest that encoding more context from the dialogue, and enabling the model to combine contextual information in a sequential order, results in a reduction in overall frame error rate. We also introduce a data augmentation scheme to generate longer dialogues with richer context, and empirically demonstrate that it results in performance improvement for multiple model architectures.

Table 7: Sample dialogue generated using a crowd-sourcing platform. The left column shows the instructions given to the crowd workers, based on the dialogue-act interactions between the user simulator and the rule-based policy; the right column shows the natural language dialogue produced by the workers.
  Instructions                                                                  Natural language paraphrase
  S: greetings                                                                  hi
  U: buy movie tickets AND date is DontCare                                     buy some movie tickets for any date
  S: provide preference for: theatre name                                       Which theatre would you like to make the reservation at ?
  U: theatre name is GB                                                         the GB theatre
  S: provide preference for: time                                               When is the reservation for ?
  U: number of tickets is 6 and time is DontCare                                get 6 tickets for any time
  S: provide preference for: movie                                              Which movie would you like to go for ?
  U: movie is Power Rangers                                                     The power rangers movie
  S: found the following: date is March 25 and time is 8:00 am                  I found 6 tickets available for 8 am on March 25.
  U: disagree                                                                   That doesn't work for me
  S: found the following: date is March 25 and time is 6:00 pm                  Does 6 pm on March 25 work for you ?
  U: accept                                                                     Sounds good, make the booking
  S: purchase confirmed with given details: theatre name is GB and date is      Ok, I've reserved 6 tickets for the 6 pm showing of Power Rangers
     March 25 and time is 6:00 pm and number of tickets is 6 and movie is       at the GB theatre on March 25.
     Power Rangers
  U: thank you                                                                  Thanks.
Figure 1: An example semantic parse of an utterance (u2) with slot (S), domain (D), intent (I) annotations, following the IOB (in-out-begin) representation for slot values.
Figure 2: Architecture of the memory and current utterance context encoder.
Figure 3: Architecture of the dialogue context encoder for the cosine similarity based memory network.
Figure 4: Architecture of the Sequential Dialogue Encoder Network. The feed-forward networks share weights across all memories.

Table 3: Test set performances for the encoder decoder (ED) model, Memory Network (MN) and the Sequential Dialogue Encoder Network (SDEN) with and without recombined data (DR).

Table 6: Supported intents: list of intents and dialogue acts supported by the user simulator, with descriptions and representative examples. Acts parametrized with a slot can be instantiated for any attribute supported within the domain.
  Intent               Description                                                      Sample utterance
  affirm               generic affirmation                                              U: sounds good.
  cant understand      expressing failure to understand the system utterance            U: What do you mean ?
  deny                 generic negation                                                 U: That doesn't work.
  good bye             expressing end of dialogue                                       U: bye
  thank you            expressing gratitude                                             U: thanks a lot!
  greeting             greeting                                                         U: Hi
  request alts         request alternatives to a system offer                           S: Doppio Zero is a nice italian restaurant near you. U: Are there any other options available ?
  affirm(slot)         affirming values corresponding to a particular attribute         U: 5 pm sounds good to me.
  deny(slot)           negating a particular attribute                                  U: None of those times would work for me.
  dont care(slot)      expressing that any value is acceptable for a given attribute    U: Any time should be ok.
  movies               explicit intent to buy movie tickets                             U: Get me 3 tickets to Inferno
  reserve-restaurants  explicit intent to reserve a table at a restaurant               U: make a reservation at Max Brenner's
  find-restaurants     explicit intent to search for restaurants                        U: find cheap italian restaurants near me
  contextual           implicit intent continuing from context, also used in place of inform   S: What time works for you ? U: 5 pm tomorrow.

Acknowledgements
We would like to thank Pararth Shah, Abhinav Rastogi, Anna Khasin and Georgi Nikolov for their help with the user-machine conversation data collection and labeling. We would also like to thank the anonymous reviewers for their insightful comments.

References
Ankur Bapna, Gokhan Tür, Dilek Hakkani-Tür, and Larry Heck. 2017. Towards zero-shot frame semantic parsing for domain scaling. In Proceedings of the Interspeech, Stockholm, Sweden.
Antoine Bordes and Jason Weston. 2016. Learning end-to-end goal-oriented dialog. arXiv preprint arXiv:1605.07683.
Y.-N. Chen, D. Hakkani-Tür, G. Tur, J. Gao, and L. Deng. 2016. End-to-end memory networks with knowledge carryover for multi-turn spoken language understanding. In Proceedings of the Interspeech, San Francisco, CA.
Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.
Y. Dauphin, G. Tur, D. Hakkani-Tür, and L. Heck. 2014. Zero-shot learning and clustering for semantic utterance classification. In Proceedings of the ICLR.
Bhuwan Dhingra, Lihong Li, Xiujun Li, Jianfeng Gao, Yun-Nung Chen, Faisal Ahmed, and Li Deng. 2016. End-to-end reinforcement learning of dialogue agents for information access. arXiv preprint arXiv:1609.00777.
Barbara J. Grosz. 1979. Focusing and description in natural language dialogues. Technical report, DTIC Document.
Barbara J. Grosz and Candace L. Sidner. 1986. Attention, intentions, and the structure of discourse. Computational Linguistics, 12(3):175-204.
S. Hahn, M. Dinarelli, C. Raymond, F. Lefevre, P. Lehnen, R. De Mori, A. Moschitti, H. Ney, and G. Riccardi. 2011. Comparing stochastic approaches to spoken language understanding in multiple languages. IEEE Transactions on Audio, Speech, and Language Processing, 19(6):1569-1583.
D. Hakkani-Tür, G. Tur, A. Celikyilmaz, Y.-N. Chen, J. Gao, L. Deng, and Y.-Y. Wang. 2016. Multi-domain joint semantic frame parsing using bi-directional RNN-LSTM. In Proceedings of the Interspeech, San Francisco, CA.
Matthew Henderson. 2015. Machine learning for dialog state tracking: A review. In Proceedings of The First International Workshop on Machine Learning in Spoken Language Processing.
Matthew Henderson, Blaise Thomson, and Jason D. Williams. 2014. The second dialog state tracking challenge.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. arXiv preprint arXiv:1606.03622.
G. Kurata, B. Xiang, B. Zhou, and M. Yu. 2016. Leveraging sentence-level information with encoder LSTM for semantic slot filling. In Proceedings of the EMNLP, Austin, TX.
Bing Liu and Ian Lane. 2016. Joint online spoken language understanding and language modeling with recurrent neural networks. CoRR, abs/1609.01462. http://arxiv.org/abs/1609.01462.
G. Mesnil, Y. Dauphin, K. Yao, Y. Bengio, L. Deng, D. Hakkani-Tür, X. He, L. Heck, G. Tur, and D. Yu. 2015. Using recurrent neural networks for slot filling in spoken language understanding. IEEE Transactions on Audio, Speech, and Language Processing, 23(3):530-539.
Julien Perez and Fei Liu. 2016. Dialog state tracking, a machine reading approach using memory network. arXiv preprint arXiv:1606.04052.
Jost Schatzmann, Blaise Thomson, Karl Weilhammer, Hui Ye, and Steve Young. 2007. Agenda-based user simulation for bootstrapping a POMDP dialogue system. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers, pages 149-152. Association for Computational Linguistics.
Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2015. Hierarchical neural network generative models for movie dialogues. CoRR, abs/1507.04808. http://arxiv.org/abs/1507.04808.
Pararth Shah, Dilek Hakkani-Tür, and Larry Heck. 2016. Interactive reinforcement learning for task-oriented dialogue management.
Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and Jian-Yun Nie. 2015. A hierarchical recurrent encoder-decoder for generative context-aware query suggestion. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, CIKM '15, pages 553-562. ACM, New York, NY, USA. https://doi.org/10.1145/2806416.2806493.
Pei-Hao Su, David Vandyke, Milica Gasic, Nikola Mrksic, Tsung-Hsien Wen, and Steve Young. 2015. Reward shaping with recurrent neural networks for speeding up on-line policy learning in spoken dialogue systems. arXiv preprint arXiv:1508.03391.
Gokhan Tur and Renato De Mori. 2011. Spoken language understanding: Systems for extracting semantic information from speech. John Wiley & Sons.
Y.-Y. Wang, L. Deng, and A. Acero. 2005. Spoken language understanding - an introduction to the statistical framework. IEEE Signal Processing Magazine, 22(5):16-31.
Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina M. Rojas-Barahona, Pei-Hao Su, David Vandyke, and Steve Young. 2016. Multi-domain neural network language generation for spoken dialogue systems. arXiv preprint arXiv:1603.01232.
Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Pei-Hao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. arXiv preprint arXiv:1508.01745.
Jason Weston, Sumit Chopra, and Antoine Bordes. 2014. Memory networks. arXiv preprint arXiv:1410.3916.
S. Young. 2002. Talking to machines (statistically speaking). In Proceedings of the ICSLP, Denver, CO.
10,127,544
On Discriminating fMRI Representations of Abstract WordNet Taxonomic Categories
How abstract knowledge is organised is a key question in cognitive science, and has clear repercussions for the design of artificial lexical resources, but is poorly understood. We present fMRI results for an experiment in which participants imagined situations associated with abstract words when cued with a visual word stimulus. We use a multivariate pattern analysis procedure to demonstrate that 7 WordNet-style taxonomic categories (e.g. 'Attribute', 'Event', 'Social-Role') can be decoded from neural data at a level better than chance. This demonstrates that category distinctions in artificial lexical resources have some explanatory value for neural organisation. Secondly, we tested for similarity between the interrelationship of the taxonomic categories in our fMRI data and the associated interrelations in popular distributed semantic models (LSA, HAL, COALS). Although distributed models have been successfully applied to predict concrete-noun fMRI data (e.g. Mitchell et al., 2008), no evidence of association was found for our abstract concepts. This suggests that the development of new models/experimental strategies may be necessary to elucidate the organisation of abstract knowledge.
[ 9548816, 14567261 ]
On Discriminating fMRI Representations of Abstract WordNet Taxonomic Categories
Andrew James Anderson (Centro Interdipartimentale Mente e Cervello (CIMeC), University of Trento, Italy), Yuan Tao (CIMeC, University of Trento, Italy), Brian Murphy (Machine Learning Department, School of Computer Science, Carnegie Mellon University, USA), Massimo Poesio (CIMeC, University of Trento, Italy; School of Computer Science and Electronic Engineering, University of Essex, UK)
Proceedings of the 3rd Workshop on Cognitive Aspects of the Lexicon (CogALex-III), Mumbai, December 2012.
Keywords: fMRI, concept representation, abstract, MVPA, WordNet

Introduction

Data about the organization of conceptual knowledge in the brain, coming from patients with semantic deficits (e.g. Warrington & Shallice, 1984; Caramazza & Shelton, 1998) or collected from healthy participants using functional Magnetic Resonance Imaging (fMRI) [1] (e.g. Martin & Chao, 2001), have proven an essential source of evidence for our understanding of conceptual representations, particularly when analyzed using machine learning methods (e.g. Haxby et al., 2001; Mitchell et al., 2008). Most of this work has focused on a fairly narrow range of conceptual categories, primarily concrete concepts such as animals, plants, tools, etc., which represent only a small percentage of the range of conceptual categories that are part of human knowledge. Until recently, only a few studies examined the representation in the brain of abstract concepts such as law or freedom (Binder et al., 2005; Friederici et al., 2002; Grossman et al., 2002). Some recent studies have shown that fMRI data contain sufficient information to discriminate between concrete and abstract concepts (Binder et al., 2005; Wang et al., 2012), but meta-analyses such as (Wang et al., 2010) also showed that fairly different results are obtained depending on the types of abstract concepts under study, and that the range of abstract concepts considered tends to be fairly narrow.
This type of analysis is complicated by the fact that the representation and organization of human knowledge about abstract conceptual categories is much less understood than for concrete concepts. Human intuitions about abstract concepts are not very sharp: e.g., studies asking subjects to specify the defining characteristics of abstract concepts find that this task is much harder than for concrete ones (Hampton, 1981; McRae & Cree, 2002; Wiemer-Hastings & Xu, 2005). On the theoretical side, as well, there is not much agreement on abstract concepts among the psychologists, (computational) linguists, philosophers and other cognitive scientists who have proposed theories about the organization of conceptual knowledge. Just about the only point of agreement among such proposals is that there is no such thing as an 'abstract concept': human conceptual knowledge includes a great variety of abstract categories of varying degrees of abstractness, ranging from knowledge about space and time (e.g., day, country) to knowledge about actions and events (e.g., solo, robbery) to knowledge about inner states including emotions (fear) and cognitive states (belief), to purely abstract concepts (e.g., art, jazz, law). It is also known that many of these categories have their own distinct representation in memory (Binder & Desai, 2009). But there is a lot of disagreement about exactly which category each of these types of abstract concepts belongs to: e.g., which category does the concept law belong to? These disagreements are clearly in evidence in the significant differences between the representation of such categories in the large-scale repositories of conceptual knowledge that have been developed in the last twenty years, such as WordNet (Fellbaum, 1998), CYC (Lenat & Guha, 1990) and DOLCE (Gangemi et al., 2002). In WordNet, the top category 'abstract concept' covers attributes, events and actions, temporal entities, and highly abstract concepts such as law, both in the sense of 'collection of all laws' and in the sense of 'area of study', whereas locations are considered concrete concepts. In DOLCE, actions and events, attributes, and highly abstract concepts such as propositions are treated as completely unrelated conceptual categories, whereas both temporal and spatial locations are included in the quality category. It follows that there is joint motivation from cognitive science and computational linguistics to extend our understanding of abstract knowledge representation. The objectives of the present work are twofold: (1) to broaden the range of abstract concepts studied using neuroimaging; (2) to examine whether artificial knowledge representation strategies can be used to interpret fMRI data. We adopted an fMRI paradigm in which stimuli were presented in the form of words on the screen and participants were required to imagine a situation associated with the word. We used as stimuli concepts belonging to seven distinct WordNet-style taxonomic categories, ranging from concrete to more abstract (tool, location, social-role, event, communication, attribute, and a category we called urabstract, of highly abstract words) and two different domains (music and law). Domain membership is not important to this paper and will be addressed in future work (this point is returned to in section 4). Firstly, a Multivariate Pattern Analysis (MVPA) procedure was used to test whether single stimulus trials could be classified by their taxonomic class.
On demonstrating that classifications can indeed be made at a level better than chance (section 3.1), we further examined whether there are similarities between concept representations in the fMRI data and popular distributed semantic models used in computational linguistics (section 3.2). Three semantic models were selected: Hyperspace Analogue to Language (HAL) (Burgess, 1998), Correlated Occurrence Analogue to Lexical Semantics (COALS) (Rohde et al., 2005), which is a refinement of HAL, and Latent Semantic Analysis (LSA) (Landauer et al., 1998). All three models express meaning in terms of a multidimensional statistical model of a word's context. HAL models meaning as a function of the number of times a word occurs in close proximity to each of a large set of feature words, within a large body of text. LSA counts the occurrences of words in individual documents and subsequently reduces the dimensionality (in documents) through singular value decomposition. COALS incorporates a number of algorithmic modifications to HAL, including data reduction by singular value decomposition. The important conceptual difference is that LSA attempts to bind words to topics (assumed to derive from the general themes of the documents), whereas HAL and COALS capture meaning through word inter-relations. All models have been applied with success in one way or another to interpret human cognition in a variety of semantic tasks and psychological experiments, including synonym tests, word relatedness judgments, semantic priming and semantic categorization (Lund & Burgess, 1996; Burgess, 1998; Landauer et al., 1997, 1998; Rohde et al., 2005). Despite their success in explaining behavioural tasks, using representational dissimilarity analysis (section 3.3) we found that none of the models provide a good general match for the structure of the abstract fMRI data.
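To make the model-building side concrete, the following is a minimal HAL-style sketch: a word-by-word co-occurrence matrix built with a distance-weighted sliding window, compared by cosine similarity. The toy corpus, window size and weighting are illustrative assumptions, not the exact HAL/COALS/LSA configurations used in this study (which were trained on itWaC).

from collections import defaultdict
import math

def hal_vectors(tokens, window=5):
    """Build HAL-style co-occurrence vectors: for each word, count
    context words within `window` positions, weighting nearer
    neighbours more heavily (linear distance ramp, as in HAL)."""
    vecs = defaultdict(lambda: defaultdict(float))
    for i in range(len(tokens)):
        for d in range(1, window + 1):
            if i + d < len(tokens):
                w = window - d + 1  # adjacent words get the highest weight
                vecs[tokens[i]][tokens[i + d]] += w
                vecs[tokens[i + d]][tokens[i]] += w
    return vecs

def cosine(u, v):
    shared = set(u) & set(v)
    num = sum(u[k] * v[k] for k in shared)
    den = math.sqrt(sum(x * x for x in u.values())) * \
          math.sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

# toy corpus (illustrative only)
corpus = "the judge read the law while the band played jazz music".split()
V = hal_vectors(corpus)
print(cosine(V["law"], V["jazz"]))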
Methods

Participants. Seven right-handed native Italian speakers (3 female), aged between 19 and 38, were recruited to take part in the study. All had normal or corrected-to-normal vision. Participants received compensation of €15 per hour. The studies were conducted under the approval of the ethics committee of the host university, and participants gave informed consent.

Data Acquisition. fMRI images were recorded on a 4T Bruker MedSpec MRI scanner. An EPI pulse sequence with TR = 1000 ms, TE = 33 ms and a 26° flip angle was used. A 64 × 64 acquisition matrix was used and seventeen slices were imaged with a between-slice gap of 1 mm. Voxels had dimensions 3 mm × 3 mm × 5 mm.

Experimental Paradigm. The names of 70 concepts were presented to participants in the form of written words on the screen. The stimuli were displayed using bold Arial-Black size 20 font on a grey background. Each stimulus was presented five times, for a total of 350 trials, split in five blocks with the order of presentation randomized in each block. Participants had the opportunity to pause between blocks and the overall task time did not exceed 60 minutes. Each trial began with the presentation of a blank screen for 0.5 s, followed by the stimulus word in dark grey on a light grey background for 3 s, and a fixation cross for 6.5 s. Participants were asked to keep still during the task and during breaks. With concrete concepts, participants are often asked to think actively about the properties of the object named (see, e.g., Mitchell et al., 2008), but eliciting properties is not so easy for abstract concepts. On the other hand, participants in studies such as (Hampton, 1981; McRae & Cree, 2002; Wiemer-Hastings & Xu, 2005) appeared able to produce situation-related objects. Our participants were therefore instructed to "think about situations that exemplify the object the word refers to". The list of concept words was supplied to participants in advance of the experiment, so that they could prepare appropriate situations to simulate consistently.

Materials. Our objective was to obtain a list of words representative of the full range of non-concrete concepts. The list of categories was produced by associating WordNet (Fellbaum, 1998) categories to the terms with the highest abstractness rankings in an abstractness norm for Italian. We identified the 6 WordNet categories that occurred most frequently in the norms. Finally, WordNet Domains (Pianta et al., 2002) was used to select 70 words whose unique or most preferred sense belonged to these categories. In more detail, our starting point was the set of behavioural norms by Barca et al. (2002) listing Italian words ranked by perceived abstractness. These words were next looked up in the Italian WordNet contained in MultiWordNet (Pianta et al., 2002) to determine the taxonomic category of their dominant sense(s). The authors edited this list down to a set of six taxonomic categories of concepts found in Barca et al.'s norms, plus a category of concrete concepts, tool, for comparison purposes. The six non-concrete categories are: locations, including concepts such as court, jail and theatre (locations are considered concrete objects in WordNet but belong to the separate category 'qualities' in DOLCE, and could therefore be considered concepts in between concrete and abstract); four non-concrete categories of arguably increasing levels of abstractness: event, communication (covering concepts such as accusation or symphony), attribute, and urabstract (our term for concepts such as law or jazz which are fairly common in abstractness norms and are classified as abstract in WordNet, but do not belong to a clear subcategory of abstract such as event or attribute); and finally the category social-role, containing concepts such as judge or tenor which are fairly common in abstractness norms and are typically associated with scenarios, but whose status as concrete or abstract is not very clear. The complete word list, including English translations of the Italian stimuli, is in TABLE 1.

Preprocessing. Preprocessing was undertaken using the Statistical Parametric Mapping software (SPM99, Wellcome Department of Cognitive Neurology, London, UK). Data were corrected for head motion, unwarped (to compensate for geometric distortions in the image interacting with motion), spatially normalised to the MNI template image and resampled at 3 mm × 3 mm × 6 mm. Only voxels estimated to be grey matter were included in the subsequent analysis. For each participant the data, per voxel, in each session (presentation cycle of 70 words) were corrected for linear trend and transformed to z-scores. A single volume was computed to represent each stimulus word, by taking the voxel-wise mean of the four seconds of data offset by four seconds from the stimulus onset (to account for the hemodynamic response).
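A minimal sketch of this per-stimulus volume computation follows. The original pipeline was implemented with SPM/MATLAB; the NumPy version below, with its array names, the 1 s TR and the toy dimensions, is an illustrative assumption.

import numpy as np

def stimulus_volumes(bold, onsets, tr=1.0, offset=4.0, length=4.0):
    """bold: (time, voxels) z-scored BOLD data for one session;
    onsets: stimulus onset times in seconds.
    Returns one (voxels,) vector per stimulus: the voxel-wise mean
    of the `length`-second window starting `offset` seconds after
    onset (to account for the haemodynamic delay)."""
    volumes = []
    for t0 in onsets:
        start = int(round((t0 + offset) / tr))
        stop = start + int(round(length / tr))
        volumes.append(bold[start:stop].mean(axis=0))
    return np.stack(volumes)

# toy example: 700 time points, 5000 voxels, 70 stimuli 10 s apart
rng = np.random.default_rng(0)
bold = rng.standard_normal((700, 5000))
vols = stimulus_volumes(bold, onsets=np.arange(70) * 10.0)
print(vols.shape)  # (70, 5000)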
Cross-validation analysis procedure. Broadly the same cross-validation procedure was followed for each analysis. Input and target data pairs were partitioned into training and testing sets (using a leave-n-out approach) to support a number of cross-validation iterations. Target patterns were binary vectors with a single field set to one to uniquely specify the category. Input was a masked version of the fMRI grey-matter data, retaining the 1000 most stable voxels in the training set according to the following procedure, similar to that used by Mitchell et al. (2008). For each voxel, the set of 70 words from each unique pair of scanning sessions in the training set were correlated, and the mean of the six resulting correlations (from the 4 training sessions) was taken as the measure of stability. The 1000 voxels with the highest mean correlations were selected for analysis. Pattern classification used a single-layer neural network with logistic activation functions (MATLAB 2009b, Mathworks, Neural Network toolbox). Weights and biases were initialized using the Nguyen-Widrow algorithm and training used conjugate gradient descent, continued until convergence, with performance evaluated using mean square error, with a goal of 10^-4 or completion of 2000 training epochs. In each cross-validation iteration the network was trained using the masked fMRI data and binary target codes in the training set and subsequently tested on the previously unseen masked fMRI data. The Euclidean distance between the network output vectors and the target codes was computed, and the target code with the minimum distance was selected as the network output.

Results

Leave-out-session cross-validation analyses were undertaken for each participant to recognize taxonomic distinctions from the fMRI data. There were 5 scanning sessions; therefore training in each of the five cross-validation iterations was on 280 words (4 replicates of each of the 70 stimulus words) and testing was on the remaining 70 words. Figure 1 shows a confusion matrix averaging results across all 7 participants (and cross-validation iterations within participant).

Can taxonomic distinctions be recognized within participant? Mean classification accuracy for the 7-way taxonomic distinctions was ~0.3, with chance level at 0.143. Accuracy is greatest for location, tool and attribute, and there is a visible diagonal in Figure 1, suggesting all classes can be discriminated. This claim is, however, statistically unsubstantiated on its own, and indeed until recently the question of how to rigorously interpret the classification performance of multiway classifiers had not been directly addressed. Binomial tests are often applied to test whether a classifier is predicting randomly; however, in the multiclass case this leaves many questions unanswered. For instance, here there were 730/2450 correct classifications, and while the probability of achieving this by chance under a binomial test is vanishingly small, such a test says nothing about which classes are discriminable from which. The test of Olivetti et al. (2012) instead considers every possible partition of the set of classes into discriminable subsets (e.g., for 3 classes: [1][2][3], [1][2,3], [2][1,3], [3][1,2], [1,2,3]), and each of these would be assigned a posterior probability, where as a general rule of thumb a probability in excess of 1/K, where K is the number of hypotheses (i.e., 5 in the 3-class example), would be seen as informative evidence (Olivetti, pers. comm.).

FIGURE 1. Leave-out-one-session taxonomic category classification confusion matrix. Rows are the target labels and columns are predictions. Numbers overlaid on each cell indicate the proportion of predictions per law and music respectively (as indicated on the right y-axis) for that row, averaging over 7 participants. The numbers on the bottom line of each cell are the mean and standard deviation of predictions. Cell shading is scaled to the range 0 to 0.41 (0.41 is the maximum mean accuracy per cell displayed).
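Stepping back to the voxel-selection step described above: the stability measure lends itself to a compact reimplementation. Below is a simplified NumPy sketch (the original analysis was done in MATLAB), with the array shapes assumed for illustration.

import numpy as np
from itertools import combinations

def stable_voxels(sessions, k=1000):
    """sessions: (n_sessions, n_words, n_voxels) array of per-word
    volumes for the training sessions. Returns the indices of the k
    voxels whose word-response profiles correlate best across all
    unique session pairs (6 pairs for 4 training sessions)."""
    n_sess, n_words, n_vox = sessions.shape
    stability = np.zeros(n_vox)
    pairs = list(combinations(range(n_sess), 2))
    for a, b in pairs:
        x, y = sessions[a], sessions[b]            # (n_words, n_voxels)
        xc = x - x.mean(0)                         # centre per voxel
        yc = y - y.mean(0)
        num = (xc * yc).sum(0)
        den = np.sqrt((xc ** 2).sum(0) * (yc ** 2).sum(0))
        stability += num / np.where(den == 0, 1, den)
    stability /= len(pairs)                        # mean pairwise correlation
    return np.argsort(stability)[-k:]

# toy usage: 4 training sessions, 70 words, 5000 voxels
rng = np.random.default_rng(1)
data = rng.standard_normal((4, 70, 5000))
mask = stable_voxels(data, k=1000)
print(mask.shape)  # (1000,)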
Applying Olivetti et al.'s (2012) test to the taxonomic confusion matrix in Figure 1 and sorting all subset partitions in descending order of posterior probability finds the top-ranking partition (posterior probability = 0.93) to be that in which all test classes are discriminable. Tool, location and attribute are most clearly distinguished, whereas prediction of taxonomic category is weakest for categories toward the middle of the concreteness scale (event and communication); in the second-ranked partition of Olivetti et al.'s (2012) analysis these two categories aggregate (although the posterior probability for this partition, at 0.04, is much lower than that of the first). Posterior probabilities rapidly diminish over the remaining 874 partitions, which are not displayed.

Representational dissimilarity analysis between fMRI data and distributed semantic models

Representational dissimilarity analyses (Kriegeskorte, 2008) between the fMRI data and the three distributed semantic models (LSA, HAL, COALS) identified in the introduction were run to test for association between the inter-relations of the taxonomic classes in the two modalities. Each semantic model was built using the corpus itWaC. This corpus is from WaCky, a collection of very large (>1 billion words) corpora built by web crawling and annotated with part-of-speech tagging and lemmatisation; itWaC is the largest publicly documented Italian language resource (Baroni et al., 2008). The representational dissimilarity analysis was as follows. For each participant, all fMRI representations within each of the seven taxonomic categories were voxel-wise averaged. Then the pairwise difference between each unique taxonomic category pairing was computed (n=21) using 1-rho as a distance metric, where rho is Spearman's rank correlation coefficient. Likewise, for LSA, HAL and COALS, semantic representations of all word models within each taxonomic category were averaged, and pairwise differences between all unique category pairs taken. The lists of respective category-pair differences for the imaging data and each of the semantic models were correlated using Spearman's rank correlation to give a correlation coefficient for each. Following this, the 7 per-participant lists of 21 category-pair differences were collapsed (by averaging) and the resulting list of average differences was correlated with the 3 semantic models. Significance was tested using a permutation test as follows. The seven taxonomic condition labels were shuffled in every possible way to construct a null distribution under the hypothesis that the two dissimilarity lists are not correlated. The p-value is calculated as the proportion of random correlation coefficients that are greater than or equal to the observed coefficient. Results are in TABLE 2. Although there are two participants who show signs of a correlation with the HAL and HAL/COALS models, it is clear that this is not a general pattern across participants. Correlations range from positive to negative, and if p-values are corrected for multiple comparisons using Bonferroni correction (where the conventional significance threshold becomes p=0.05/21), the individually significant results disappear. There is additionally no correlation between the fMRI dissimilarity matrices averaged over participants and the three semantic models.
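The dissimilarity analysis and its permutation test can be sketched as follows, assuming a 7 × d matrix of category-mean fMRI patterns and a 7 × d' matrix of category-mean semantic vectors; scipy's spearmanr and an exhaustive label permutation stand in for the original implementation.

import numpy as np
from itertools import combinations, permutations
from scipy.stats import spearmanr

def rdm(cat_means):
    """Pairwise dissimilarity (1 - Spearman rho) between the seven
    category-mean patterns; returns the 21 unique pair values."""
    n = len(cat_means)
    return np.array([1 - spearmanr(cat_means[i], cat_means[j])[0]
                     for i, j in combinations(range(n), 2)])

def rda_permutation_test(fmri_means, model_means):
    d_fmri, d_model = rdm(fmri_means), rdm(model_means)
    rho_obs = spearmanr(d_fmri, d_model)[0]
    # null distribution: shuffle the 7 category labels in every way
    n = len(fmri_means)
    idx = {pair: k for k, pair in enumerate(combinations(range(n), 2))}
    null = []
    for perm in permutations(range(n)):
        permuted = [d_model[idx[tuple(sorted((perm[i], perm[j])))]]
                    for i, j in combinations(range(n), 2)]
        null.append(spearmanr(d_fmri, permuted)[0])
    # one-tailed p: proportion of permuted rhos >= observed rho
    p = np.mean([r >= rho_obs for r in null])
    return rho_obs, p

# toy usage: 7 categories, 1000 voxels vs 100-dim semantic vectors
rng = np.random.default_rng(2)
rho, p = rda_permutation_test(rng.standard_normal((7, 1000)),
                              rng.standard_normal((7, 100)))
print(rho, p)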
Discussion

We have collected evidence that fMRI recordings contain sufficient information to discriminate between all of the taxonomic categories that we tested. In other words, the distinctions between types of non-concrete concepts proposed in state-of-the-art models of conceptual knowledge such as WordNet are supported to a certain extent by brain data. Whereas a number of studies have demonstrated a connection between distributional semantic models and neuroimaging data for concrete concepts (e.g. Mitchell et al., 2008; Murphy et al., 2009; Murphy et al., 2011; Chang et al., 2011), representational similarity analysis failed to find a systematic association between the inter-relationship of categories in the fMRI data and the inter-relationship of categories in distributional semantic models. There could be a number of reasons for this. Firstly, it may be that the neural organisation of abstract knowledge is in fact entirely different from the distributed semantic representations in common usage. Given that the semantic models show some explanatory power for human behavioural data, it would be unwise to discount them too quickly. Alternatively, it could be that the experimental/fMRI protocol used is unfit for the challenge. As concerns the experimental protocol, abstract concepts generally speaking are more difficult to imagine than concrete objects, and the richness of the neural representations invoked in our experiment may consequently be comparatively weak. Additionally, we have no guarantee that participants were compliant with the task (the only gauge of this being the ability to detect systematic patterns in a participant's data). It will be valuable to consider modifying the task and, if/where possible, to develop tasks that require mental manipulation of the concept in a more realistic context, where the performance of the participant can be evaluated. As concerns fMRI, it is possible that abstract concepts are represented on a smaller spatial scale than concrete concepts, especially if they are not grounded in sensorimotor mechanisms and associated neural maps (as frequently thought to be the case for concrete concepts). Thus our whole-brain analysis using large voxels may overlook pertinent features. However, given the success of taxonomic category classification with the current fMRI setup, it should not be dismissed too quickly either. This paper has thus far not directly addressed an important competing theory of concept organisation. Gentner (1981), Hampton (1981) and others found that, unlike concrete concepts, abstract concepts are mostly characterized in terms of relations to other entities present in a situation. Wiemer-Hastings & Xu (2005) provided further support for this finding and proposed that abstract concepts are "anchored in situations" (Wiemer-Hastings & Xu, 2005, p. 731); in a similar fashion, Barsalou (1999) argued that the representation of abstract concepts is "framed by abstract event sequences". This suggests a scenario-based organization for non-concrete concepts. In this type of organization, non-concrete concepts are defined in terms of their role with respect to a scenario: e.g., law is defined with respect to the court scenario, whereas jazz is defined in relation to a music scenario. In fact, our experimental data set was carefully selected to allow us to begin to target this question (50% of our words are associated with law and 50% with music). Our preliminary analyses suggest that law and music scenarios can also be successfully decoded from the neural data. Complete results will be presented in future work.
Conclusion

Our conclusions are: (1) WordNet-style taxonomic categories for abstract concepts are at least cognitively relevant, in that they can be distinguished from neural data; (2) in contrast to previous findings for concrete concepts, we were unable to detect a relationship between the inter-representation of abstract concept categories in fMRI data and the inter-representations in popular distributed semantic models. The question of how abstract knowledge is organised remains murky; however, given the taxonomic classification success, we are optimistic that advances are possible with current technology and methods.

TABLE 1. Italian stimulus words and English translations. Taxonomic category is indicated in the left column; categories are ordered in terms of increasing abstractness. Each category contains five law-domain and five music-domain words (Italian/English).

tool          law: manette/handcuffs, toga/robe, manganello/truncheon, cappio/noose, grimaldello/skeleton key
              music: violino/violin, tamburo/drum, tromba/trumpet, metronomo/metronome, radio/radio
location      law: tribunale/court (tribunal), carcere/prison, questura/police station, penitenziario/penitentiary, patibolo/gallows
              music: palco/stage, auditorium/auditorium, discoteca/disco, conservatorio/conservatory, teatro/theatre
social-role   law: giudice/judge, ladro/thief, imputato/defendant, testimone/witness, avvocato/lawyer
              music: musicista/musician, cantante/singer, compositore/composer, chitarrista/guitarist, tenore/tenor
event         law: arresto/arrest, processo/trial, reato/crime, furto/theft, assoluzione/acquittal
              music: concerto/concert, recital/recital, assolo/solo, festival/festival, spettacolo/show
communication law: divieto/prohibition, verdetto/verdict, ordinanza/decree, addebito/accusation, ingiunzione/injunction
              music: canzone/song, pentagramma/stave, ballata/ballad, ritornello/refrain, sinfonia/symphony
attribute     law: giurisdizione/jurisdiction, cittadinanza/citizenship, impunita'/impunity, legalita'/legality, illegalita'/illegality
              music: sonorita'/sonority, ritmo/rhythm, melodia/melody, tonalita'/tonality, intonazione/pitch
urabstract    law: giustizia/justice, liberta'/liberty, legge/law, corruzione/corruption, refurtiva/loot
              music: musica/music, blues/blues, jazz/jazz, canto/singing, punk/punk

TABLE 2. Representational dissimilarity analysis between neural data and semantic models: Spearman's rho and permutation-test p-value per participant and model.

Participant                      HAL rho (p)        COALS rho (p)      LSA rho (p)
19730713                         0.3571 (0.0206)    0.1416 (0.2061)    -0.1649 (0.7502)
19820508                         0.0662 (0.3460)    -0.0896 (0.6987)   0.0156 (0.4465)
19830625                         0.5455 (0.0347)    0.5312 (0.0407)    -0.1091 (0.6909)
19850913                         0.0364 (0.4083)    -0.1169 (0.7744)   0.2169 (0.1700)
19861211                         -0.2494 (0.9288)   -0.2649 (0.9683)   -0.1805 (0.7756)
19891011                         -0.2338 (0.8931)   -0.0805 (0.6568)   -0.0390 (0.5299)
19920102                         0.1273 (0.2581)    0.1051 (0.2767)    0.0156 (0.4469)
Collapsed dissimilarity matrix   0.2455 (0.1351)    0.1481 (0.2437)    -0.0130 (0.5281)

[1] Functional Magnetic Resonance Imaging measures blood flow in the brain, which reflects neural cells' energy consumption, which in turn is generally regarded to relate to neural activity. Compared to other popular neuroimaging techniques (e.g. EEG, MEG), fMRI offers relatively high spatial resolution (data is measured as a 3D volume built from rectangular cuboids known as voxels, of side 1-5 mm, over the entire brain) at relatively low sampling frequency (commonly ≥ 1 Hz).

References
Barca, L., Burani, C., & Arduino, S. (2002). Word naming times and psycholinguistic norms for Italian nouns. Behavior Research Methods, 34(3): 424-434.
Baroni, M., Bernardini, S., Ferraresi, A., & Zanchetta, E. (2008). The WaCky Wide Web: A collection of very large linguistically processed web-crawled corpora. Language Resources and Evaluation, 43: 209-226.
Barsalou, L.W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22: 577-660.
Bentivogli, L., Forner, P., Magnini, B., & Pianta, E. (2004). Revising WordNet Domains hierarchy: Semantics, coverage, and balancing. In Proceedings of the COLING 2004 Workshop on Multilingual Linguistic Resources, Geneva, Switzerland, August 28, 2004, 101-108.
Binder, J.R., Desai, R.H., Graves, W.W., & Conant, L.L. (2009). Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cerebral Cortex, bhp055.
Binder, J.R., Westbury, C.F., McKiernan, K.A., Possing, E.T., & Medler, D.A. (2005). Distinct brain systems for processing concrete and abstract concepts. Journal of Cognitive Neuroscience, 17: 905-917.
Burgess, C. (1998). From simple associations to the building blocks of language: Modeling meaning in memory with the HAL model. Behavior Research Methods, Instruments, & Computers, 30: 188-198.
Chang, K.M., Mitchell, T., & Just, M.A. (2011). Quantitative modeling of the neural representation of objects: How semantic feature norms can account for fMRI activation. NeuroImage, 56: 716-727.
Fellbaum, C. (ed.) (1998). WordNet: An Electronic Lexical Database. Cambridge, MA: MIT Press.
Friederici, A.D., Ruschemeyer, S.-A., Hahne, A., & Fiebach, C.J. (2003). The role of left inferior frontal gyrus and superior temporal cortex in sentence comprehension: localizing syntactic and semantic processes. Cerebral Cortex, 13: 170-177.
Gangemi, A., Guarino, N., Masolo, C., Oltramari, A., & Schneider, L. (2002). Sweetening ontologies with DOLCE. In A. Gómez-Pérez & V.R. Benjamins (eds.), Knowledge Engineering and Knowledge Management. Ontologies and the Semantic Web, 13th International Conference, EKAW 2002, Siguenza, Spain, October 1-4, 2002. Springer Verlag, 166-181.
Gentner, D. (1981). Why nouns are learned before verbs: Linguistic relativity versus natural partitioning. In S.A. Kuczaj (ed.), Language Development, 2: 301-334. Erlbaum, Hillsdale, NJ.
Grossman, M., Koenig, P., DeVita, C., Glosser, G., Alsop, D., & Detre, J. (2002). The neural basis for category specific knowledge: An fMRI study. NeuroImage, 16: 936-948.
Hampton, J. (1981). An investigation of the nature of abstract concepts. Memory & Cognition, 9(2): 149-156.
Haxby, J.V., Gobbini, M.I., Furey, M.L., Ishai, A., Schouten, J.L., & Pietrini, P. (2001). Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293: 2425-2430.
Kriegeskorte, N., Mur, M., & Bandettini, P. (2008). Representational similarity analysis - connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience, 2: 4.
Landauer, T.K., & Dumais, S.T. (1997). A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2): 211-240.
Landauer, T.K., Foltz, P.W., & Laham, D. (1998). An introduction to latent semantic analysis. Discourse Processes, 27: 303-310.
Lenat, D., & Guha, R.V. (1990). Building Large Knowledge-Based Systems: Representation and Inference in the Cyc Project. Addison-Wesley.
Lund, K., & Burgess, C. (1996). Producing high-dimensional semantic spaces from lexical co-occurrence. Behavior Research Methods, Instruments and Computers, 28: 203-208.
McRae, K., Cree, G.S., Seidenberg, M.S., & McNorgan, C. (2005). Semantic feature production norms for a large set of living and nonliving things. Behavior Research Methods, Instruments, and Computers, 37(4): 547-559.
Mitchell, T.M., Shinkareva, S.V., Carlson, A., Chang, K.M., Malave, V.L., Mason, R.A., & Just, M.A. (2008). Predicting human brain activity associated with the meanings of nouns. Science, 320: 1191-1195. DOI: 10.1126/science.1152876.
Murphy, B., Baroni, M., & Poesio, M. (2009). EEG responds to conceptual stimuli and corpus semantics. In Proceedings of ACL/EMNLP 2009.
Murphy, B., Poesio, M., Bovolo, F., Bruzzone, L., Dalponte, M., & Lakany, H. (2011). EEG decoding of semantic category reveals distributed representations for single concepts. Brain and Language, 117: 12-22.
Olivetti, E., Greiner, S., & Avesani, P. (2012). Testing multiclass pattern discrimination. In IEEE International Workshop on Pattern Recognition in NeuroImaging (PRNI), 57-60. DOI: 10.1109/PRNI.2012.14.
Pianta, E., Bentivogli, L., & Girardi, C. (2002). MultiWordNet: developing an aligned multilingual database. In Proceedings of the First International Conference on Global WordNet, Mysore, India, January 21-25, 2002.
Wang, J., Conder, J.A., Blitzer, D.N., & Shinkareva, S.V. (2010). Neural representation of abstract and concrete concepts: A meta-analysis of neuroimaging studies. Human Brain Mapping, 31: 1459-1468.
Wang, J., Baucom, L.B., & Shinkareva, S.V. (2012). Decoding abstract and concrete concept representations based on single-trial fMRI data. Human Brain Mapping. DOI: 10.1002/hbm.21498.
Warrington, E.K., & Shallice, T. (1984). Category specific semantic impairments. Brain, 107(3): 829-853.
Wiemer-Hastings, K., & Xu, X. (2005). Content differences for abstract and concrete concepts. Cognitive Science, 29: 719-736.
51,874,051
[]
Symbiotic Human and Machine Translation: MT seamlessly • adapts to user data • learns from post-editing; the user enjoys • enhanced productivity • a better user experience.

Usable technology for the translation industry: • easy to install and deploy • fast to set up for a new project • effective, also on small projects • scalable with data and users • works with commodity hardware.

Can improve on top of a static and an adaptive engine! Uses incremental learning, adaptation and online learning. Portable (in principle) to the multi-domain setting. Limited gain on top of full-fledged adaptive NMT. Can be an extra component to manage.

The Modern MT way: (1) connect your CAT with a plug-in, (2) drag & drop your private TMs, (3) start translating!

Modern MT in a nutshell: zero training time; adapts to context; learns from user corrections; scales with data and users. Training data is a dynamic collection of Translation Memories. At any time: • new TMs are added • existing TMs are extended. Training time comparable to uploading time!

Source: Farajian et al. (2017), "Multi-Domain Neural MT through Unsupervised Adaptation", Proc. WMT 2017.

Progression in one month on English-Italian: the online-learning contribution is consistent. Does it scale with the number of domains? Incremental learning contributes marginally (probably depends on test-set size); we are not always able to beat specialized models. How to improve adaptation further?

Source: Turchi et al. (2017), "Continuous learning from human post-edits for NMT", EAMT. Neural APE uses two encoders and two attention models, which are merged and used by one decoder.

Context-aware translation: "party" with the sentence context "We are going out." is translated as "fête"; with the context "We approved the law" it is translated as "parti". Requests flow to Machine Translation, suggestions flow back, and post-edits feed Incremental Learning.

Core technology [original plan]: context analyser, phrase-based decoder, adaptive models, incremental structures, parallel processing.

Language support: • 45 languages • fast pre-/post-processing • simple interfaces • tags and XML management • localization of expressions • TM cleaning.

Context Analyzer (e.g. TMs A, B, C scored 50%, 45%, 5%): • analyze input text • retrieve best matching TMs • compute matching scores • dynamic structure.

Adaptive Phrase Table (suffix array with ranked sampling): • suffix array indexed with TMs • phrases sampled on demand • priority sampling over TMs • dynamic structure.

Adaptive Language Model (a weighted sum Σ w·p over the active TM LMs): • large static background model • n-gram stats indexed with TMs • combination of active TM LMs • TM LMs computed on the fly • dynamic structure.

M. Cettolo et al. (2016), The IWSLT 2016 Evaluation Campaign, IWSLT. TED Talks English-French.

Second Prototype (0.14, January 2017). Domains: ECB, Gnome, JRC, KDE, OpenOffice, PHP, Ubuntu, UN-TM. Open benchmark: training speed 12x Moses, 100x NMT; MT quality (BLEU): +1 vs Moses, -0.5 vs NMT Ada.

What happened: research on adaptive neural MT; believed PBMT was competitive on technical translation; finally realised the superiority of NMT quality; completed the PBMT release and switched to NMT; data collection for 14 translation directions.

Simple. Adaptive. Neural.
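The "Adaptive Language Model" slide can be read as a context-weighted mixture: the probability of a word is a static background probability interpolated with the sum, over active TMs, of the context-match weight times that TM's LM probability. Below is a minimal sketch of this idea; the function names, the unigram simplification and the interpolation constant are assumptions for illustration (the real system combines n-gram statistics indexed by TM), with weights taken from the context analyzer scores of the slide above.

def adaptive_lm_prob(word, tm_lms, weights, background, alpha=0.5):
    """p(word) = alpha * background(word)
               + (1 - alpha) * sum_d weights[d] * tm_lms[d](word).
    tm_lms: dict TM-id -> unigram probability function;
    weights: context-match scores from the context analyzer,
    assumed to sum to 1 (e.g. {'A': .5, 'B': .45, 'C': .05})."""
    adapted = sum(w * tm_lms[d](word) for d, w in weights.items())
    return alpha * background(word) + (1 - alpha) * adapted

# toy usage with assumed per-TM unigram tables
lms = {"A": lambda w: {"fête": 0.3}.get(w, 1e-6),
       "B": lambda w: {"parti": 0.2}.get(w, 1e-6),
       "C": lambda w: 1e-6}
bg = lambda w: 1e-4
print(adaptive_lm_prob("fête", lms, {"A": .5, "B": .45, "C": .05}, bg))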
Roadmap from last review meeting: 2015 Q2, minimum viable product (context aware, 1 language pair); 2016 Q2, first alpha release (fast training, context aware, distributed, 1 language pair); 2016 Q4, first beta release (online learning, plug-in, 3 language pairs); 2017 Q4, final release (neural MT, enterprise ready, 14 language pairs) - the technology switch.

Multi-user adaptive NMT: instances are selected by combining context scores and similarity scores - adaptation, too!

Sep: integration of MateCat. Oct: NMT code released. Nov: co-development, release of 14 engines. Dec: performance boost.

(Charts: relative BLEU scores wrt Google Translate; performance of generic MMT on a 1-6 scale, w/o adaptation.)

Multi-user scenario.

All we need is a memory.

Data Cleaning: we added a simple QE module to filter out bad examples. It can improve MT without touching it inside: we can adapt an "external" MT service! Similar to NMT: two inputs (src, mt), one output (ape). Can be trained with less data than NMT. We can deploy instance-based adaptation.
3,061,213
TMop: a Tool for Unsupervised Translation Memory Cleaning
We present TMop, the first open-source tool for automatic Translation Memory (TM) cleaning. The tool implements a fully unsupervised approach to the task, which allows spotting unreliable translation units (sentence pairs in different languages, which are supposed to be translations of each other) without requiring labeled training data. TMop includes a highly configurable and extensible set of filters capturing different aspects of translation quality. It has been evaluated on a test set composed of 1,000 translation units (TUs) randomly extracted from the English-Italian version of MyMemory, a large-scale public TM. Results indicate its effectiveness in automatically removing "bad" TUs, with performance comparable to a state-of-the-art supervised method (76.3 vs. 77.7 balanced accuracy).
[ 15895424, 10823395, 4895939, 6558253, 6299630, 16312537, 16272251, 61821757, 17188191 ]
TMop: a Tool for Unsupervised Translation Memory Cleaning
Masoud Jalili Sabet (School of Electrical and Computer Engineering, University of Tehran, Iran), Matteo Negri, Marco Turchi, José G. C. de Souza, Marcello Federico (Fondazione Bruno Kessler, Trento, Italy)
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics - System Demonstrations, Berlin, Germany, August 7-12, 2016. © Association for Computational Linguistics.

Introduction

Computer-assisted translation (CAT) refers to a framework in which the work of human translators is supported by machines. Its advantages, especially in terms of productivity and translation consistency, have motivated huge investments, both economic (by the translation industry) and intellectual (by the research community). Indeed, the high market potential of solutions geared to speed up the translation process and reduce its costs has attracted increasing interest from both sides. Advanced CAT tools currently integrate the strengths of two complementary technologies: translation memories (TM - a high-precision mechanism for storing and retrieving previously translated segments) and machine translation (MT - a high-recall technology for translating unseen segments). The success of the integration has determined the quick growth of the market share held by CAT, as opposed to fully manual translation, which has become a niche of the global translation market. However, differently from MT, which is constantly improving and reducing its distance from human translation, core TM technology has changed only slightly over the years. This is in contrast with the fact that TMs are still more widely used than MT, especially in domains featuring high text repetitiveness (e.g. software manuals). Translation memories have a long tradition in CAT, with a first proposal dating back to (Arthern, 1979). They consist of databases that store previously translated segments, together with the corresponding source text. Such (source, target) pairs, whose granularity can range from the phrase level to the sentence or even the paragraph level, are called translation units (TUs). When working with a CAT tool, each time a segment of a document to be translated matches the source side of a TU, the corresponding target is proposed as a suggestion to the user.
The user can also store each translated (source, target) pair in the TM for future use, thus increasing the size and the coverage of the TM. Due to such constant growth, through which they evolve over time incorporating users' style and terminology, so-called private TMs represent an invaluable asset for individual translators and translation companies. Collaboratively-created public TMs grow in a less controlled way, but still remain a practical resource for the translators' community at large. The usefulness of TM suggestions mainly depends on two factors: the matching process and the quality of the TU. To increase recall, retrieval is based on computing a "fuzzy match" score. Depending on how the matching is performed, its output can be a mix of perfect and partial matches requiring variable amounts of correction by the user. For this reason, most prior work on TM technology focused on improving this aspect (Gupta et al., 2014; Bloodgood and Strauss, 2014; Vanallemeersch and Vandeghinste, 2015; Chatzitheodoroou, 2015; Gupta et al., 2015). The other relevant factor, TU quality, relates to the reliability of the target translations. Indeed, a perfectly matching source text associated with a wrong translation would make the corresponding suggestion useless or, even worse, an obstacle to productivity. On this aspect, prior research is limited to the work proposed in (Barbu, 2015), which so far represents the only attempt to automatically spot false translations in the bi-segments of a TM. However, casting the problem as a supervised binary classification task, this approach depends heavily on the availability of labelled training data. Our work goes beyond the initial effort of Barbu (2015) in two ways. First, we propose a configurable and extensible open-source framework for TM cleaning. In this way, we address the demand for easy-to-use TM management tools whose development is out of the reach of individual translators and translation companies. Such demand is justified not only by productivity reasons (removing bad suggestions as a cause of slow production), but also by usability reasons. Loading, searching and editing a TM are indeed time-consuming and resource-demanding operations. In the case of very large databases (up to millions of TUs), the accurate removal of useless units can significantly increase usability. Though paid, the few existing tools that incorporate some data cleaning methods (e.g. Apsic X-Bench) only implement very simple syntactic checks (e.g. repetitions, opening/closing tag consistency). These are insufficient to capture the variety of errors that can be encountered in a TM (especially in public ones). Second, our approach to TM cleaning is fully unsupervised. This is to cope with the lack of labelled training data which, due to high acquisition costs, represents a bottleneck rendering supervised solutions unpractical. It is worth remarking that current approaches to tasks closely related to TM cleaning (e.g. MT quality estimation (Mehdad et al., 2012; C. de Souza et al., 2014)) also suffer from the same problem. Besides not being customised for the specificities of the TM cleaning scenario (their usefulness for the task would have to be demonstrated), their dependence on labelled training data is a strong limitation from the TM cleaning application perspective.
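For concreteness, the "fuzzy match" score mentioned above is typically an edit-distance-style similarity between the new segment and the source side of each TU. The sketch below uses Python's difflib ratio as a stand-in for the tuned metrics of real CAT tools; the toy TM and the 0.7 threshold are illustrative assumptions.

from difflib import SequenceMatcher

def fuzzy_score(a, b):
    """Word-level fuzzy match score in [0, 1] (1 = exact match)."""
    return SequenceMatcher(None, a.split(), b.split()).ratio()

def best_suggestion(segment, tm, threshold=0.7):
    """Return the target of the TU whose source best matches the
    input segment, if the score clears the fuzzy-match threshold."""
    scored = [(fuzzy_score(segment, src), tgt) for _, src, tgt in tm]
    score, tgt = max(scored)
    return tgt if score >= threshold else None

tm = [(1, "press the start button", "premere il pulsante di avvio"),
      (2, "close the window", "chiudere la finestra")]
print(best_suggestion("press the stop button", tm))  # partial match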
The TM cleaning task

The identification of "bad" TUs is a multifaceted problem. First, it deals with the recognition of a variety of errors. These include:
• Surface errors, such as opening/closing tag inconsistencies and empty or suspiciously long/short translations;
• Language inconsistencies, for instance due to the inversion between the source and target languages;
• Translation fluency issues, such as typos and grammatical errors (e.g. morpho-syntactic disagreements, wrong word ordering);
• Translation adequacy issues, such as the presence of untranslated terms, wrong lexical choices or more complex phenomena (e.g. negation and quantification errors) for which a syntactically correct target can be a semantically poor translation of the source segment.
The severity of the errors is another aspect to take into account. Deciding whether a given error makes a TU useless is often difficult even for humans. For instance, judging the usefulness of a TU whose target side has missing/extra words would be a highly subjective task. For this reason, identifying "bad" TUs with an automatic approach opens a number of problems related to: i) defining when a given issue becomes a real error (e.g. the ratio of acceptable missing words), ii) combining potentially contradictory evidence (e.g. syntactic and semantic issues), and iii) making these actions easily customisable by different users having different needs, experience and quality standards. What action to take when one or more errors are identified in a TU is also important. Ideally, a TM cleaning tool should allow users either to simply flag problematic TUs (leaving the final decision to a human judgment), or to automatically remove them without further human intervention. Finally, two critical aspects are the external knowledge and resources required by the TM-cleaning process. On one side, collecting evidence for each TU can involve processing steps that access external data and tools. On the other side, decision making can require variable amounts of labelled training data (i.e. positive/negative examples of "good"/"bad" TUs). For both tasks, recourse to external support can be an advantage in terms of performance, due to the possibility of obtaining informed judgments from models trained in a supervised fashion. At the same time, it can be a limitation in terms of usability and portability across languages. When available, external resources and tools (e.g. syntactic/semantic parsers) can indeed be too slow to process huge amounts of data. Most importantly, labelled training data are usually difficult to acquire. A TM cleaning tool should hence minimise the dependence of its performance on the availability of external resources. All these aspects were considered in the design of TMop, whose capabilities to cope with a variety of errors, customise its actions based on their severity, and avoid recourse to external knowledge/resources are described in the next section.

The TMop framework

TMop (Translation Memory open-source purifier) is an open-source TM cleaning tool written in Python. It consists of three parts: core, filters and policy managers. The core, the main part of the software, manages the workflow between filters, policy managers and input/output files. The filters (§3.2) are responsible for detecting "bad" TUs. Each of them can detect a specific type of problem (e.g. formatting, fluency, adequacy) and will emit an accept or reject judgment for each TU. Policy managers (§3.3) collect the individual results from each filter and take a final decision for each TM entry based on different possible strategies.
Filters, policies and basic parameters can be set by means of a configuration file, which was structured by keeping ease of use and flexibility as the main design criteria. TMop implements a fully unsupervised approach to TM cleaning. The accept/reject criteria are learned from the TM itself and no training data are required to inform the process. 3 Nevertheless, the filters' output could also be used to instantiate feature vectors in any supervised learning scenario supported by training data.

3 The tool has recently also been used in the unsupervised approach by Jalili Sabet et al. (2016).

Workflow

The input file of TMop is a TM represented as a text file containing one TU per line in the form (ID, source, target). The output consists of several files, the most important of which are the accept and reject files containing the TUs identified as "good"/"bad", in the same format as the input. As depicted in Figure 1, TMop filters operate in two steps. In the first one, the learning step, each filter i iterates over the TM or a subset of it to gather the basic statistics needed to define its accept/reject criteria. For instance, after computing the mean and standard deviation of a given indicator (e.g. sentence length ratio, proportion of aligned words), quantiles or standard-deviation counts (in the case of normally distributed values) will be used as decision boundaries. Then, in the decision step, each filter uses the gathered information to decide about each TU. At the end of this process, for each TU the policy manager collects all the decisions taken by the filters and applies the policy set by the user in the configuration file to assign an accept or reject judgment. The final labels, the TUs and the filters' outputs are saved in different files.

Filters

Our filters capture different aspects of the similarity between the source and the target of a TU. The full set consists of 23 filters, which are organized in four groups.

Basic filters (8 in total). This group (B) extends the filters proposed by Barbu (2015) and substantially covers those offered by commercial TM cleaning tools. They capture translation quality by looking at surface aspects, such as possible mismatches in the number of dates, numbers, URLs, XML tags, ref and image tags present in the source and target segments. Other filters model the similarity between source and target by computing the direct and inverse ratio between the number of characters and words, as well as the average word length in the two segments. Finally, two filters look for uncommon character or word repetitions.

Language identification filter (1). This filter (LI) exploits the Langid tool (Lui and Baldwin, 2012) to verify the consistency between the source and target languages of a TU and those indicated in the TM. Though simple, it is quite effective since often the two languages are inverted or even completely different from the expected ones.

QE-derived filters (9). This group (QE) contains filters borrowed from the closely-related task of MT quality estimation, in which the complexity of the source, the fluency of the target and the adequacy between source and target are modeled as quality indicators. Focusing on the adequacy aspect, we exploit a subset of the features proposed by C. de Souza et al. (2013). They use word alignment information to link source and target words and capture the quantity of meaning preserved by the translation.
For each segment of a TU, word alignment information is used to calculate: i) the proportion of aligned and unaligned word n-grams (n=1,2), ii) the ratio between the longest aligned/unaligned word sequence and the length of the segment, iii) the average length of the aligned/unaligned word sequences, and iv) the position of the first/last unaligned word, normalized by the length of the segment. Word alignment models can be trained on the whole TM with one of the many existing word aligners. For instance, the results of the QE filters reported in §4 were obtained using MGIZA++ (Gao and Vogel, 2008).

Word embedding filters (5). Cross-lingual word embeddings provide a common vector representation for words in different languages and allow looking at the source and target segments at the same time. In TMop, they are computed using the method proposed in (Søgaard et al., 2015) but, instead of considering bilingual documents as atomic concepts to bridge the two languages, it exploits the TUs contained in the TM itself. Given a TU and a 100-dimensional vector representation of each word in the source and target segments, this group of filters (WE) includes: i) the cosine similarity between the source and target segment vectors obtained by averaging (or using the median of) the source and target word vectors; ii) the average embedding alignment score obtained by computing the cosine similarity between each source word and all the target words and averaging over the largest cosine score of each source word; iii) the average cosine similarity between source/target word alignments; iv) a score that merges features (ii) and (iii) by complementing word alignments (also in this case obtained using MGIZA++) with the alignments obtained from word embeddings and averaging all the alignment weights.

Policies

Decision policies allow TMop to combine the output of the active filters into a final decision for each TU. Simple decision-making strategies can consider the number of accept and reject judgments, but more complex methods can be easily implemented by the user (both filters and policy managers can be easily modified and extended by exploiting well-documented abstract base classes). TMop currently implements three policies: OneNo, 20%No and MajorityVoting. The first one copies a TU into the reject file if at least one filter rejects it. The second and the third policy take this decision only if at least twenty or fifty percent of the filters, respectively, reject the TU. These three policies reflect different TM cleaning strategies. The first one is a very aggressive (recall-oriented) solution that tends to flag more TUs as "bad". The third one is a more conservative (precision-oriented) solution, as it requires at least half of the judgments to be negative to push a TU into the reject file. Depending on the user's needs and the overall quality of the TM, the choice of the policy allows keeping the number of false positives ("bad" TUs accepted) and false negatives ("good" TUs rejected) under control.

Benchmarking

We test TMop on the English-Italian version of MyMemory, 4 one of the world's largest collaborative public TMs. This dump contains about 11M TUs coming from heterogeneous sources: aggregated private TMs, either provided by translators or automatically extracted from the web/corpora, as well as anonymous contributions of (source, target) bi-segments. Its uncontrolled sources call for accurate cleaning methods (e.g. to make it more accurate, smaller and more manageable).
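Before turning to the experimental setup, note that the policy logic described in §3.3 reduces to a threshold on the share of rejecting filters, as in this minimal sketch (a hypothetical function, not the actual TMop policy-manager code):

# Sketch of the three policies as thresholds on the share of rejecting
# filters; the function name and signature are hypothetical.
def apply_policy(decisions, policy):
    """decisions: one 'accept'/'reject' label per active filter."""
    reject_share = decisions.count("reject") / len(decisions)
    if policy == "OneNo":           # reject if at least one filter rejects
        return "reject" if reject_share > 0 else "accept"
    if policy == "20%No":           # reject if at least 20% of filters reject
        return "reject" if reject_share >= 0.2 else "accept"
    if policy == "MajorityVoting":  # reject if at least 50% of filters reject
        return "reject" if reject_share >= 0.5 else "accept"
    raise ValueError("unknown policy: " + policy)

# e.g. apply_policy(["accept", "reject", "accept", "accept", "accept"], "20%No")
# returns "reject" (1 of 5 filters = 20%)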
From the TM we randomly extracted a subset of 1M TUs to compute the statistics of each filter and a collection of 2,500 TUs manually annotated with binary labels. Data annotation was done by two Italian native speakers properly trained with the same guidelines prepared by the TM owner for periodic manual revisions. After agreement computation (Cohen's kappa = 0.78), a reconciliation ended up with about 65% positive and 35% negative examples. This pool was randomly split into two parts. One (1,000 instances) is used as the test set for our evaluation. The other (1,500 instances) is used to replicate the supervised approach of Barbu (2015), which leverages human-labelled data to train an SVM binary classifier. We use it as a term of comparison to assess the performance of the different groups of filters. To handle the imbalanced (65%-35%) data distribution, and to equally reward correct classification on both classes, we evaluate performance in terms of balanced accuracy (BA), computed as the average of the accuracies on the two classes (Brodersen et al., 2010). In Table 1, different combinations of the four groups of filters are shown, with results aggregated with the 20%No policy, which, on this data, turns out to be the best-performing policy among the ones implemented in TMop. Based on the statistics collected in the learning phase of each filter, the accept/reject criterion applied in these experiments considers as "good" all the TUs for which the filter value is below one standard deviation from the mean, and "bad" otherwise. Looking at the results, it is worth noting that the LI, QE and WE groups, both alone and in combination, outperform the basic filters (B), which substantially represent those implemented by commercial tools. Although relying on an external component (the word aligner), the QE filters produce the best performance in isolation, showing that word alignment information is a good indicator of translation quality. The results obtained by combining the different groups confirm their complementarity. In particular, when using all the groups, the performance is close to the results achieved by the supervised method of Barbu (2015), which relies on human-labelled data (76.3 vs. 77.7). The choice of which filter combination to use strongly depends on the application scenario and is often a trade-off. A first important aspect concerns the type of user. When the expertise to train a word aligner is not available, combining B, WE and LI is the best solution, though it comes at the cost of lower accuracy. Another aspect is the processing time that the user can afford. TM cleaning is an operation conceived to be performed once in a while (possibly overnight), once the TM has grown enough to justify a new sanity check. However, although it does not require real-time processing, the size of the TM can motivate the selection of faster filter combinations. An analysis of the efficiency of the four groups, made by counting the number of processed TUs per second, 5 indicates that B and QE are the fastest filters (processing on average ∼2,000 TUs/sec.). The LI filter is slower, processing ∼300 TUs per second, while the large number of cosine similarity computations does not allow the WE filters to process more than 50 TUs per second.
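For reference, with "good" TUs taken as the positive class, the balanced accuracy reported in Table 1 is the mean of the per-class accuracies:

\[
\mathrm{BA} = \frac{1}{2}\left(\frac{TP}{TP+FN} + \frac{TN}{TN+FP}\right)
\]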
Figure 1: TMop workflow. Each filter performs a learning pass (processing every TU, then finalizing its statistics after a full scan) followed by a decision pass on every TU; the policy manager then collects the filters' decisions, applies the chosen policy and writes the results to the output files.

Table 1: Balanced accuracy of different filter combinations on a 1,000 TU, EN-IT test set. B=Basic, LI=language identification, QE=quality estimation, WE=word embedding.

Filters             BA↑
(Barbu, 2015)       77.7
B                   52.8
LI                  69.0
QE                  71.2
WE                  65.0
B + LI              55.4
B + QE              70.1
B + WE              68.7
QE + LI             71.7
QE + WE             67.9
LI + WE             68.1
B + QE + LI         72.9
B + WE + LI         70.3
B + QE + WE         73.3
B + QE + LI + WE    76.3

1 http://www.xbench.net/
2 Likely, the perceived severity of a missing word out of n perfectly translated terms will be inversely proportional to n.
4 http://mymemory.translated.net
5 Experiments were run on a PC with an Intel Core i5 M540 @ 2.53GHz and 6 GB RAM.

Conclusion

We presented TMop, the first open-source tool for automatic Translation Memory (TM) cleaning. We summarised its design criteria, workflow and main components, also reporting some efficiency and performance indicators. TMop is implemented in Python and can be downloaded, together with complete documentation, from https://github.com/hlt-mt/TMOP. Its license is FreeBSD, a very open permissive non-copyleft license, compatible with the GNU GPL and with any use, including commercial.

Acknowledgments

This work has been partially supported by the EC-funded project ModernMT (H2020 grant agreement no. 645487). The work carried out at FBK by Masoud Jalili Sabet was sponsored by the EAMT summer internships 2015 program and supported by Prof. Heshaam Faili (University of Tehran). The authors would also like to thank Translated for providing a dump of MyMemory.

References

Peter Arthern. 1979. Machine Translation and Computerized Terminology Systems: a Translator's Viewpoint. In Translating and the Computer: Proc. of a Seminar, pages 77-108, London, UK.

Eduard Barbu. 2015. Spotting False Translation Segments in Translation Memories. In Proc. of the Workshop Natural Language Processing for Translation Memories, pages 9-16, Hissar, Bulgaria.

Michael Bloodgood and Benjamin Strauss. 2014. Translation Memory Retrieval Methods. In Proc. of the 14th Conference of the EACL, pages 202-210, Gothenburg, Sweden.

Kay Henning Brodersen, Cheng Soon Ong, Klaas Enno Stephan, and Joachim M. Buhmann. 2010. The Balanced Accuracy and Its Posterior Distribution. In Proc. of the 20th International Conference on Pattern Recognition (ICPR '10), pages 3121-3124.
José G. C. de Souza, Christian Buck, Marco Turchi, and Matteo Negri. 2013. FBK-UEdin Participation to the WMT13 Quality Estimation Shared Task. In Proc. of the Eighth Workshop on Statistical Machine Translation, pages 352-358, Sofia, Bulgaria. Association for Computational Linguistics.

José G. C. de Souza, Jesús González-Rubio, Christian Buck, Marco Turchi, and Matteo Negri. 2014. FBK-UPV-UEdin Participation in the WMT14 Quality Estimation Shared-task. In Proc. of the Ninth Workshop on Statistical Machine Translation, pages 322-328, Baltimore, Maryland, USA.

Konstantinos Chatzitheodoroou. 2015. Improving Translation Memory Fuzzy Matching by Paraphrasing. In Proc. of the Workshop Natural Language Processing for Translation Memories, pages 24-30, Hissar, Bulgaria.

Qin Gao and Stephan Vogel. 2008. Parallel Implementations of Word Alignment Tool. In Proc. of the ACL 2008 Software Engineering, Testing, and Quality Assurance Workshop.

Rohit Gupta, Hanna Bechara, and Constantin Orasan. 2014. Intelligent Translation Memory Matching and Retrieval Metric Exploiting Linguistic Technology. In Proc. of Translating and the Computer, Vol. 36, pages 86-89.

Rohit Gupta, Constantin Orasan, Marcos Zampieri, Mihaela Vela, and Josef van Genabith. 2015. Can Translation Memories afford not to use paraphrasing? In Proc. of the 18th Annual Conference of the European Association for Machine Translation, pages 35-42, Antalya, Turkey.

Masoud Jalili Sabet, Matteo Negri, Marco Turchi, and Eduard Barbu. 2016. An Unsupervised Method for Automatic Translation Memory Cleaning. In Proc. of the 54th Annual Meeting of the Association for Computational Linguistics, Berlin, Germany.

Marco Lui and Timothy Baldwin. 2012. langid.py: An Off-the-shelf Language Identification Tool. In Proc. of the ACL 2012 System Demonstrations, pages 25-30. Association for Computational Linguistics.

Yashar Mehdad, Matteo Negri, and Marcello Federico. 2012. Match without a Referee: Evaluating MT Adequacy without Reference Translations. In Proc. of the Machine Translation Workshop (WMT2012), pages 171-180, Montréal, Canada.

Anders Søgaard, Željko Agić, Héctor Martínez Alonso, Barbara Plank, Bernd Bohnet, and Anders Johannsen. 2015. Inverted indexing for cross-lingual NLP. In Proc. of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL 2015).

Tom Vanallemeersch and Vincent Vandeghinste. 2015. Assessing Linguistically Aware Fuzzy Matching in Translation Memories. In Proc. of the 18th Annual Conference of the European Association for Machine Translation, pages 153-160, Antalya, Turkey.
10,477,775
The Development of the Index Thomisticus Treebank Valency Lexicon
We present a valency lexicon for Latin verbs extracted from the Index Thomisticus Treebank, a syntactically annotated corpus of Medieval Latin texts by Thomas Aquinas. In our corpus-based approach, the lexicon reflects the empirical evidence of the source data. Verbal arguments are induced directly from annotated data. The lexicon contains 432 Latin verbs with 270 valency frames. The lexicon is useful for NLP applications and is able to support annotation.
[ 12227983 ]
The Development of the Index Thomisticus Treebank Valency Lexicon

Barbara McGillivray (b.mcgillivray@ling.unipi.it), University of Pisa, Italy
Marco Passarotti (marco.passarotti@unicatt.it), Catholic University of the Sacred Heart, Milan, Italy

Proceedings of the EACL 2009 Workshop on Language Technology and Resources for Cultural Heritage, Social Sciences, Humanities, and Education (LaTeCH - SHELT&R 2009), Athens, Greece, 30 March 2009. Association for Computational Linguistics.

Introduction

Over the last decades, annotated corpora and computational lexicons have gained an increasing role among language resources in computational linguistics: on the one hand, they are used to train Natural Language Processing (NLP) tools such as parsers and PoS taggers; on the other hand, they are developed through automatic procedures of linguistic annotation and lexical acquisition. The relation between annotated corpora and computational lexicons is circular: as a matter of fact, if linguistic annotation of textual data is supported and improved by the use of lexicons, the latter can be induced from annotated data in a corpus-based fashion. In the field of cultural heritage, and in particular that of classical language studies, much effort has been devoted throughout the years to the digitization of texts, but only recently have some projects begun to annotate them above the morphological level. Concerning the lexicology and lexicography of classical languages, a long tradition has produced and established many dictionaries, thesauri and lexicons, providing examples from real texts. Nevertheless, nowadays it is possible and indeed necessary to match lexicons with data from (annotated) corpora, and vice versa. This requires scholars to exploit the vast amount of textual data from classical languages already available in digital format, 1 and particularly those annotated at the highest levels. The evidence provided by the texts themselves can be fully represented in lexicons induced from these data. Subsequently, these lexicons can be used to support the textual annotation itself in a virtuous circle. This paper reports on the creation of a valency lexicon induced from the Index Thomisticus Treebank, a syntactically annotated corpus of Medieval Latin texts by Thomas Aquinas. The paper is organised as follows: section 2 describes the available Latin treebanks and their annotation guidelines, and gives some specific information on the Index Thomisticus Treebank; section 3 deals with the notion of valency, while section 4 describes the state of the art on valency lexicons; section 5 illustrates the procedures of acquisition and representation of our valency lexicon; finally, section 6 draws some conclusions and describes future work.
Latin Treebanks

Latin is a richly inflected language, showing:
- discontinuous constituents ('non-projectivity'): this means that phrasal constituents may not be continuous, but broken up by words of other constituents. An example is the following sentence by Ovid (Metamorphoses, I.1-2): "In nova fert animus mutatas dicere formas corpora" ("My mind leads me to tell of forms changed into new bodies"). In this sentence, both the nominal phrases "nova corpora" and "mutatas formas" are discontinuous;
- moderately free word-order: for instance, the order of the words in a sentence like "audaces fortuna iuvat" ("fortune favours the bold") could be changed into "fortuna audaces iuvat", or "fortuna iuvat audaces", without affecting the meaning of the sentence.
These features of Latin influenced the choice of Dependency Grammars (DG) 2 as the most suitable grammar framework for building Latin annotated corpora like treebanks. While since the 1970s the first treebanks were annotated via Phrase Structure Grammar (PSG)-based schemata (as in the IBM, Lancaster and, later on, Penn treebanks), in the past decade many dependency treebank development projects have started, such as the ALPINO treebank for Dutch (Van der Beek et al., 2002), the Turin University Treebank for Italian (Lesmo et al., 2002), or the Danish Dependency Treebank (Kromann, 2003). On the one hand, this is due to the fact that the first treebanks were mainly English language corpora. PSG were a suitable framework for a poorly inflected language like English, showing a fixed word-order and few discontinuous constituents. Later on, the syntactic annotation of moderately free word-order languages required the adoption of the DG framework, which is more appropriate than PSG for such a task. On the other hand, Carroll et al. (1998) showed that inter-annotator agreement was significantly better for dependency treebanks, indicating that phrase structure annotation required too many irrelevant decisions (see also Lin, 1995). Although much Latin data is nowadays available in digital format, the first two projects for the development of Latin treebanks have only recently started: namely the Latin Dependency Treebank (LDT) at Tufts University in Boston (within the Perseus Digital Library), based on texts of the Classical era (Bamman, 2006), and the Index Thomisticus Treebank (IT-TB) at the Catholic University of the Sacred Heart in Milan, based on the Opera omnia of Thomas Aquinas (Passarotti, 2007). Taking into account the above mentioned features of Latin, both treebanks independently chose the DG framework as the most suitable one for data annotation. The same approach was later on followed by a third Latin treebank now available, which is ongoing at the University of Oslo in the context of the PROIEL project (Pragmatic Resources in Old Indo-European Languages): the aim of PROIEL is the syntactic annotation of the oldest extant versions of the New Testament in Indo-European languages, including Greek, Latin, Gothic, Armenian and Church Slavonic (Haug and Jøhndal, 2008).

2 With Tesnière (1959) as a common background, there are many different current DG flavours. See for instance the following: Dependency Unification Grammar (Hellwig, 1986), Functional Generative Description (Sgall, Hajičová and Panevová, 1986), Meaning Text Theory (Mel'čuk, 1988), Word Grammar (Hudson, 1990).
Annotation Guidelines

Since LDT and IT-TB were the first projects of their kind for Latin, no prior established guidelines were available to rely on for syntactic annotation. Therefore, the so-called 'analytical layer' of annotation of the Prague Dependency Treebank (PDT) for Czech (Hajič et al., 1999) was chosen and adapted to specific or idiosyncratic constructions of Latin. These constructions (such as the ablative absolute or the passive periphrastic) could be syntactically annotated in several different ways and are common to Latin of all eras. Rather than have each treebank project decide upon and record each decision for annotating them, LDT and IT-TB decided to pool their resources and create a single annotation manual that would govern both treebanks (Bamman et al., 2007a; Bamman et al., 2007b; Bamman et al., 2008). As we are dealing with Latin dialects separated by 13 centuries, sharing a single annotation manual is very useful for comparison purposes, such as checking annotation consistency or diachronically studying specific syntactic constructions. In addition, the task of data annotation through these common guidelines allows annotators to base their decisions on a variety of examples from a wider range of texts and to combine the two datasets in order to train probabilistic dependency parsers. Although the PROIEL annotation guidelines are grounded on the same grammar framework as the LDT and IT-TB, they differ in a number of details, some of which are described in Passarotti (forthcoming).

The Index Thomisticus Treebank

The Index Thomisticus (IT) by Roberto Busa SJ (1974-1980) was begun in 1949 and is considered a groundbreaking project in computational linguistics. It is a database containing the Opera omnia of Thomas Aquinas (118 texts) as well as 61 texts by other authors related to Thomas, for a total of around 11 million tokens. The corpus is morphologically tagged and lemmatised. Early in the 1970s Busa started to plan a project aimed at both the morphosyntactic disambiguation of the IT lemmatisation and the syntactic annotation of its sentences. Today, these tasks are performed by the IT-TB project, which is part of the wider 'Lessico Tomistico Biculturale', a project whose target is the development of a lexicon from the IT texts. 3 Presently, the size of the IT-TB is 46,456 tokens, for a total of 2,103 parsed sentences excerpted from the Scriptum super Sententiis Magistri Petri Lombardi.

Valency

As outlined above, the notion of valency is generally defined as the number of complements required by a word: these obligatory complements are usually named 'arguments', while the non-obligatory ones are referred to as 'adjuncts'. Although valency can refer to different parts of speech (usually verbs, nouns and adjectives), scholars have mainly focused their attention on verbs, so that the notion of valency often coincides with verbal valency. Valency is widely used in DG formalisms, but it also figures in PSG-based formalisms like HPSG and LFG. While Karl Bühler can be considered the pioneer of the modern theory of valency, 4 Lucien Tesnière is widely recognised as its real founder. Tesnière views valency as a quantitative quality of verbs, since only verbs constrain both the quantity and the quality (i.e. nouns and adverbs) of their obligatory arguments; through a metaphor borrowed from drama, Tesnière classifies dependents into actants (arguments) and circonstants (adjuncts): "Le noeud verbal […] exprime tout un petit drame.
Comme un drame en effet, il comporte obligatoirement un procès, et le plus souvent des acteurs et des circonstances. Transposés du plan de la réalité dramatique sur celui de la syntaxe structurale, le procès, les acteurs et les circonstances deviennent respectivement le verbe, les actants et les circonstants" (Tesnière, 1959: 102). 5

Arguments can be either obligatory or optional, depending on which sense of the verb is involved. For example, the seem sense of the verb appear requires two obligatory arguments in active clauses, as in the following sentence: "That lawyer appears to love his work". Here the second argument ("to love his work") cannot be left out without changing the meaning of the verb. On the other hand, optional arguments are recorded into the verbal argument structure itself, although they may not appear at the clausal level. For instance, in the following sentence the object required by the verb eat is missing, but the sentence is still acceptable: "He eats (something)". Optionality can also act at the communicative level as well as at the structural one. For instance, adjuncts can be necessary for communicative intelligibility in particular contexts, as in the following sentence: "I met James at the Marquee club", where the locative adverbial ("at the Marquee club") is required to answer a question like "Where did you meet James?". On the other hand, structural optionality depends on the features of the language and applies at the clausal level. For instance, as a poorly inflected language, English requires the subject of a predicate to be expressed in declarative and interrogative main clauses, so that a sentence like the following is ungrammatical if the subject is missing: "[I] slept all morning". Given the so-called "syntax-semantics interface" (Levin, 1993), arguments are generally associated with a predicate sense rather than a predicate form, and are structured in sequences called 'subcategorization frames' (SCFs) or 'complementation patterns'. For example, there is a semantic difference between the bill sense and the attack sense of the verb charge in English, as in the following sentences:
- (a) "The hotel charges 80 euros for a night".
- (b) "The army charged the enemy".
In these sentences, the two predicate senses show two different SCFs:
- (a) [Subj_NP, Pred, Obj_NP, Obj_PP-for]
- (b) [Pred, Obj_NP]
Arguments are also selected by verbs according to lexical-semantic properties, called 'selectional preferences' (SPs) or 'selectional restrictions'. For example, a sentence like "*The train flew to Rome" is ungrammatical, since it violates the SP of the verb fly on its subject and can only be accepted in a metaphorical context.

3 http://itreebank.marginalia.it.
4 In the Sprachtheorie, he writes that "die Wörter einer bestimmten Wortklasse eine oder mehrere Leerstellen um sich eröffnen, die durch Wörter bestimmter anderer Wortklassen ausgefüllt werden müssen" (Bühler, 1934: 173) ("words of a certain word-class open up around themselves one or several empty spaces that have to be filled by words of certain other word-classes"; our translation).
5 "The verbal node expresses a whole little drama. As a drama, it implies a process and, most of the times, actors and circumstances. Transposed from the dramatic reality to structural syntax, the process, the actors and the circumstances respectively become the verb, the actants and the circumstants" (our translation).
Valency Lexicons

Over the past years, several valency lexicons have been built within different theoretical frameworks; these lexicons play an important role in the NLP community thanks to their wide application in NLP components, such as parsing, word sense disambiguation, automatic verb classification and selectional preference acquisition. As shown in Urešová (2004), a valency lexicon can also help the task of linguistic annotation (as in treebank development), providing annotators with essential information about the number and types of arguments realized at the syntactic level for a specific verb, along with semantic information on the verb's lexical preferences. In the phase of lexicon creation, both intuition-based and corpus-based approaches can be pursued, according to the role played by human intuition and by the empirical evidence extracted from annotated corpora such as treebanks. For instance, lexicons like PropBank (Kingsbury and Palmer, 2002), FrameNet (Ruppenhofer et al., 2006) and PDT-Vallex (Hajič et al., 2003) have been created in an intuition-based fashion and then checked and improved with examples from corpora. On the other side, research in lexical acquisition has recently made available a number of valency lexicons automatically acquired from annotated corpora, such as VALEX (Korhonen et al., 2006) and LexShem (Messiant et al., 2008). Unlike the fully intuition-based ones, these lexicons aim at systematically reflecting the evidence provided by data, with very little human intervention. The role of intuition is therefore left to the annotation phase (where the annotator interprets the corpus data), and not extended to the development of the lexicon itself. Corpus-based lexicons show several advantages when compared with traditional human-developed dictionaries. Firstly, they systematically reflect the evidence of the corpus they were extracted from, while acquiring information specific to the domain of the corpus. Secondly, unlike manually built lexicons, they are not prone to human errors that are difficult to detect, such as omissions and inconsistencies. In addition, such lexicons usually display statistical information in their entries, such as the actual frequency of subcategorization frames as attested in the original corpus. Finally, they are less costly than hand-crafted lexical resources in terms of time, money and human resources. While several subcategorization lexicons have been compiled for modern languages, much work in this field still remains to be done on classical languages such as Greek and Latin. Regarding Latin, Happ reports a list of Latin verbs along with their valencies (Happ, 1976: 480-565). Bamman and Crane (2008) describe a "dynamic lexicon" automatically extracted from the Perseus Digital Library, using the LDT as a training set. This lexicon displays qualitative and quantitative information on the subcategorization patterns and selectional preferences of each word as it is used in every Latin author of the corpus. Relying on morphological tagging and statistical syntactic parsing of such a large corpus, their approach finds the most common arguments and the most common lexical fillers of these arguments, thus reducing the noise caused by the automatic pre-processing of the data.

The Index Thomisticus Treebank Valency Lexicon

We propose a corpus-based valency lexicon for Latin verbs automatically induced from IT-TB data.
The automatic procedure allows both the extension of this work to the LDT (thanks to the common annotation guidelines) and the updating of the lexicon as the treebank size increases. First, we automatically extract the arguments of all the occurrences of verbal lemmata in the treebank, along with their morphological features and lexical fillers. In the IT-TB, verbal arguments are annotated using the following tags: Sb (Subject), Obj (Object), OComp (Object Complement) and Pnom (Predicate Nominal); adjuncts are annotated with the tag Adv (Adverbial). The difference between Obj and Adv corresponds to that between direct or indirect arguments (except subjects) and adjuncts. A special kind of Obj is the determining complement of the object, which is tagged with OComp, such as senatorem in the phrase "aliquem senatorem facere" ("to nominate someone senator"). Conversely, the determining complement of the subject is tagged as Pnom, as in "aliquis senator fit" ("someone becomes senator"). 6 In order to retrieve the arguments realised for each verbal occurrence in the treebank, specific database queries have been created to search for the nodes depending on a verbal head through the functional tags listed above. The head-dependent relation can be either direct or indirect, since intermediate nodes may intervene. These nodes are prepositions (tag AuxP), conjunctions (tag AuxC) and coordinating or apposing elements (respectively, tags Coord and Apos). For example, see the following sentences:
- [1] "primo determinat formam baptismi;" 7 ("at first it determines the form of the baptism;")
- [2] "ly aliquid autem, et ly unum non determinant aliquam formam vel naturam;" 8 ("the 'something' and the 'one' do not determine any form or nature")
Figure 1 reports the tree of sentence [1], where the Obj relation between the verbal head determinat and the dependent formam is direct. Figure 2 shows the tree of sentence [2]. In this tree, two coordinated subjects (aliquid and unum) and two coordinated objects (formam and naturam) depend on the common verbal head determinant through two different Coord nodes (et and vel). 9
In the case of an indirect relation, the intermediate nodes need to be detected and extracted, in order to be inserted into the lexicon as subcategorization structures containing the syntactic roles of the verbal arguments. To represent these structures, we distinguished two major types: subcategorization frames (SCFs) and subcategorization classes (SCCs). An SCF contains the sequence of functional labels of verbal arguments as they appear in the sentence order, whereas an SCC reports the subcategorization elements disregarding their linear order in the sentence. SCFs and SCCs play different roles in our lexicon. On the one hand, SCFs are very detailed patterns useful for diachronic and/or comparative studies on linear order. On the other hand, SCCs are more general and make the data in the lexicon comparable with the subcategorization structures as usually defined in the literature and in other valency lexicons.

7 Thomas, Super Sententiis Petri Lombardi, IV, Distinctio 3, Quaestio 1, Prologus, 41-6, 42-2. The edition of the text recorded in the IT is Thomas (1856-1858).
8 Thomas, Super Sententiis Petri Lombardi, III, Distinctio 6, Quaestio 2, Articulus 1, Responsio ad Argumentum 7, 4-5, 6-1.
9 Following PDT-style, the distributed determination aliquam, which modifies both the coordinated objects formam and naturam, depends on the coordinating node vel. For more details, see Hajič et al. (1999), 236-238.
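As an illustration of the retrieval and representation steps just described, the following simplified sketch collects the arguments of a verbal head through intermediate nodes and builds a basic frame and class. The in-memory node structure is hypothetical (the project actually queries an annotation database), and the sketch omits the path prefixes and coordination indices used in the full frames.

import collections

# Hypothetical node representation, for illustration only.
Node = collections.namedtuple("Node", ["form", "tag", "position", "children"])

ARGUMENT_TAGS = ("Sb", "Obj", "OComp", "Pnom")
INTERMEDIATE_TAGS = ("AuxP", "AuxC", "Coord", "Apos")

def collect_arguments(head):
    """Collect the argument nodes of a verbal head, descending through
    intermediate nodes (prepositions, conjunctions, coordinating and
    apposing elements)."""
    found = []
    for child in head.children:
        if child.tag.split("_")[0] in ARGUMENT_TAGS:   # e.g. Obj, Sb_Co
            found.append(child)
        elif child.tag in INTERMEDIATE_TAGS:           # e.g. Coord
            found.extend(collect_arguments(child))
    return found

def scf(verb):
    """A simplified SCF: argument labels in sentence order ('V' = verb)."""
    items = sorted(collect_arguments(verb) + [verb], key=lambda n: n.position)
    return " + ".join("V" if n is verb else n.tag for n in items)

def scc(verb):
    """A simplified SCC: the same labels, linear order disregarded."""
    return "{" + ", ".join(sorted(n.tag for n in collect_arguments(verb))) + "}"

# Sentence [2], simplified: coordinated subjects and objects under Coord nodes.
verb = Node("determinant", "Pred", 5, [
    Node("et", "Coord", 2, [Node("aliquid", "Sb_Co", 1, []),
                            Node("unum", "Sb_Co", 3, [])]),
    Node("vel", "Coord", 7, [Node("formam", "Obj_Co", 6, []),
                             Node("naturam", "Obj_Co", 8, [])]),
])
# scf(verb) -> 'Sb_Co + Sb_Co + V + Obj_Co + Obj_Co'
# scc(verb) -> '{Obj_Co, Obj_Co, Sb_Co, Sb_Co}'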
For each of these structures we then created the following sub-types, ranging from the most specific to the least specific one.
SCF1: a subcategorization frame marking the full path between the verbal head (referred to as 'V') and each of its argument nodes in the tree. SCF1 also assigns the same index to those argument nodes linked by coordinating or apposing elements. For instance, the SCF1 of the verbal head determino 10 in sentence [1] is 'V + Obj', while in sentence [2] it is '(Coord)Sb_Co(1) + (Coord)Sb_Co(1) + V + (Coord)Obj_Co(2) + (Coord)Obj_Co(2)'. In the latter, the intermediate Coord nodes are in brackets and the indices 1 and 2 link the coordinated nodes. These indices have been adopted in order to disambiguate subcategorization structures where multiple Obj_Co tags can refer to different verbal arguments. For instance, in a sentence like "I give X and Y to W and Z", both the transferred objects (X and Y) and the receivers (W and Z) are annotated with Obj_Co. Using indices, the subcategorization structure of the verb give in this sentence appears as follows: 'Sb + V + (Coord)Obj_Co(1) + (Coord)Obj_Co(1) + (Coord)Obj_Co(2) + (Coord)Obj_Co(2)'. The indices cannot be applied a priori to subsequent arguments, since Latin, allowing discontinuous constituents, can show cases where coindexed nodes are separated by other lexical items in the linear order.
SCC1: the subcategorization class associated with SCF1. The SCC1 of the verb determino in [1] is '{Obj}', while in [2] it is '{(Coord)Sb_Co(1), (Coord)Sb_Co(1), (Coord)Obj_Co(2), (Coord)Obj_Co(2)}'.
SCF2: a subcategorization frame containing only the labels and the indices of the arguments, but not the full path. So, the SCF2 of determino in [1] is 'V + Obj', while in [2] it is 'Sb_Co(1) + Sb_Co(1) + V + Obj_Co(2) + Obj_Co(2)'.
SCC2: the subcategorization class associated with SCF2. For determino, this is '{Obj}' in [1] and '{Sb_Co(1), Sb_Co(1), Obj_Co(2), Obj_Co(2)}' in [2].
SCC3: a subcategorization class containing only the argument labels. The SCC3 of determino is '{Obj}' in [1] and '{Sb, Obj}' in [2], showing that in this sentence determino is used as a biargumental verb, regardless of the number of lexical fillers realised for each of its arguments at the surface level.

Conclusion and future work

Presently, the size of the IT-TB valency lexicon is 432 entries (i.e. verbal lemmata, corresponding to 5,966 wordforms), with 270 different SCF1s. In the near future, the lexicon will be enriched with valency information for nouns and adjectives. The corpus-based approach we followed induces verbal arguments directly from annotated data, where the arguments may be present or not, depending on the features of the texts. Therefore, the lexicon reflects the empirical evidence given by the data it was extracted from, encouraging linguistic studies on the particular language domain of our corpus. In addition to the syntactic information reported in the different types of SCFs and SCCs, it is possible at each stage to include both the morphological features and the lexical fillers of verbal arguments, helping define verbal selectional preferences. The lexicon may also be useful for improving the performance of statistical parsers, enriching the information acquired by parsers on verbal entries.

10 Determino is the lemma of both the wordforms determinat (sentence [1]) and determinant (sentence [2]).
On the other hand, moving from parser performance to lexicon development, the lexicon can be induced from automatically parsed texts when an accurate parsing system is available. The syntactic and lexical data recorded in the lexicon are also important in further semantic NLP applications, such as word sense disambiguation, anaphora and ellipsis resolution, and selectional preference acquisition. Following a widespread approach in valency lexicons, a close connection between valency frames and word senses will be maintained in the description of lexicon entries: this means that each headword entry of our lexicon will consist of one or more SCFs and SCCs, one for each sense of the word. We plan to make the lexicon available online through a graphical interface usable also during the annotation procedures, as has already been done for the PDT via the tree editor TrEd. 11 In this way, the consistency of the annotation process can be tested and enforced thanks to the information stored in the lexicon. In order to test the accuracy of our system, it will also be necessary to evaluate the quality of our valency lexicon against the Perseus "dynamic lexicon", Happ's list and other existing resources for Latin, such as traditional dictionaries and thesauri. A comparison with the lexicon by Perseus is also very interesting in a contrastive diachronic perspective, as it may show important linguistic differences between Classical and Medieval Latin.

Figure 1: Tree of sentence [1].
Figure 2: Tree of sentence [2].

1 See, for instance, the Perseus Digital Library (Crane et al., 2001), or data repositories such as LASLA (Denooz, 1996).
6 As in the PDT, all of the syntactic tags can be appended with a suffix in the event that the given node is a member of a coordinated construction (_Co), an apposition (_Ap) or a parenthetical statement (_Pa).

Acknowledgments

We would like to thank Paolo Ruffolo for his help in designing the database architecture.

References

David Bamman. 2006. The Design and Use of Latin Dependency Treebank. In Jan Hajič and Joakim Nivre (eds.), TLT 2006. Proceedings of the Fifth Workshop on Treebanks and Linguistic Theories. December 1-2, 2006, Prague, Czech Republic, Institute of Formal and Applied Linguistics, Prague, Czech Republic, 67-78.

David Bamman and Gregory Crane. 2008. Building a Dynamic Lexicon from a Digital Library. In Proceedings of the 8th ACM/IEEE-CS Joint Conference on Digital Libraries (JCDL 2008), Pittsburgh.

David Bamman, Marco Passarotti, Gregory Crane and Savina Raynaud. 2007a. Guidelines for the Syntactic Annotation of Latin Treebanks, «Tufts University Digital Library». Available at: http://dl.tufts.edu/view_pdf.jsp?urn=tufts:facpubs:dbamma01-2007.00002.
David Bamman, Marco Passarotti, Gregory Crane and Savina Raynaud. 2007b. A Collaborative Model of Treebank Development. In Koenraad De Smedt, Jan Hajič and Sandra Kübler (eds.), Proceedings of the Sixth International Workshop on Treebanks and Linguistic Theories. December 7-8, 2007, Bergen, Norway, Northern European Association for Language Technology (NEALT) Proceedings Series, Vol. 1, 1-6.

David Bamman, Marco Passarotti, Roberto Busa and Gregory Crane. 2008. The annotation guidelines of the Latin Dependency Treebank and Index Thomisticus Treebank. The treatment of some specific syntactic constructions in Latin. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC 2008). May 28-30, 2008, Marrakech, Morocco, European Language Resources Association (ELRA).

Karl Bühler. 1934. Sprachtheorie: die Darstellungsfunktion der Sprache. Jena: Gustav Fischer, Stuttgart.

Roberto Busa. 1974-1980. Index Thomisticus: sancti Thomae Aquinatis operum omnium indices et concordantiae, in quibus verborum omnium et singulorum formae et lemmata cum suis frequentiis et contextibus variis modis referuntur quaeque / consociata plurium opera atque electronico IBM automato usus digessit Robertus Busa SJ. Frommann-Holzboog, Stuttgart-Bad Cannstatt.

Gregory R. Crane, Robert F. Chavez, Anne Mahoney, Thomas L. Milbank, Jeff A. Rydberg-Cox, David A. Smith and Clifford E. Wulfman. 2001. Drudgery and deep thought: Designing a digital library for the humanities. Communications of the ACM, 44(5), 34-40.

John Carroll, Ted Briscoe and Antonio Sanfilippo. 1998. Parser Evaluation: a Survey and a New Proposal. In Proceedings of the First International Conference on Language Resources and Evaluation (LREC 1998). May 28-30, 1998, Granada, Spain, 447-454.

Joseph Denooz. 1996. La banque de données du laboratoire d'analyse statistique des langues anciennes (LASLA). «Le Médiéviste et l'ordinateur», 33, 14-20.

Jan Hajič, Jarmila Panevová, Eva Buráňová, Zdeňka Urešová and Alla Bémová. 1999.

Leonardo Lesmo, Vincenzo Lombardo and Cristina Bosco. 2002. Treebank Development: the TUT Approach. In Proceedings of the International Conference on Natural Language Processing (ICON 2002), Vikas Publ. House, New Delhi, 61-70.

Beth Levin. 1993. English Verb Classes and Alternations: A Preliminary Investigation. University of Chicago Press, Chicago.
252,624,559
Making Sign Language Corpora Comparable: A Study of Palm-Up and Throw-Away in Polish Sign Language, German Sign Language, and Russian Sign Language
This paper is primarily devoted to describing the preparation phase of a large-scale comparative study based on naturalistic linguistic data drawn from multiple sign language corpora. To provide an example, I am using my current project on manual gestural elements in Polish Sign Language, German Sign Language, and Russian Sign Language. The paper starts with a description of the reasons behind undertaking this project. Then, I describe the scope of my study, which is focused on two manual elements present in all three mentioned sign languages: palm-up and throw-away; and the three corpora which are my data sources. This is followed by a presentation of the steps taken in the initial stages of the project in order to make the data comparable. Those steps are: choosing the adequate data samples from all three corpora, gathering all data within the chosen software, and creating an annotation schema that builds on the annotations already present in all three corpora. Even though the project is still underway, and the annotation process is ongoing, preliminary discussions about the nature of the analysed manual activities are presented based on the initial annotations for the sake of evaluating the created annotation schema. I conclude the paper with some remarks about the performance of the employed methodology.
[ 252624755 ]
Making Sign Language Corpora Comparable: A Study of Palm-Up and Throw-Away in Polish Sign Language, German Sign Language, and Russian Sign Language

Anna Kuder (anna.kuder@uw.edu.pl), Sign Language Interpreting Section, Department of Special Education and Rehabilitation, Faculty of Human Sciences, University of Cologne; Section for Sign Linguistics, Faculty of Polish Studies, University of Warsaw, Krakowskie Przedmieście 26/28, 00-927 Warsaw, Poland

Proceedings of the 10th Workshop on the Representation and Processing of Sign Languages (sign-lang@LREC 2022), Marseille, June 2022. Licensed under CC BY-NC 4.0.

Keywords: gesture, sign, sign language corpus, corpus linguistics, annotation, Polish Sign Language (PJM), German Sign Language (DGS), Russian Sign Language (RSL), comparative studies

Introduction

For many years the standard of sign language (SL) research was based only on small samples of language material and/or the researcher's (and/or his/her informant's) own linguistic intuitions. This approach based on elicited data and linguistic judgements was used both in research regarding single SLs (e.g., Zeshan, 2006) and in comparative studies of multiple SLs (e.g., Pfau and Quer, 2004). In more recent years, since the creation of the Australian Sign Language (Auslan) Corpus (Johnston, 2009) and similar projects that have followed, studies based on corpus material have become more common for the analysis of individual SLs. For comparative studies of multiple SLs, however, the approach utilizing elicited data and linguistic judgements is still more common. But with the growing number of available resources, more and more cross-linguistic studies are being performed with the use of data coming from two or more separate corpora.
Some examples include: the comparison of negation markers in Polish Sign Language (PJM) and Auslan (Kuder et al., 2018); the comparison of information structure in Russian Sign Language (RSL) and Sign Language of the Netherlands (NGT) (Kimmelman, 2019); the comparison of body-anchored verbs and argument omission in DGS and RSL (Oomen and Kimmelman, 2019); and the comparison of discourse markers in French Belgian Sign Language (LSFB) and Catalan Sign Language (LSC) (Gabarró-López, 2020). In line with this more recent trend, I set out to perform a cross-linguistic corpus-based study of two manual elements present in three European SLs: Polish Sign Language, German Sign Language, and Russian Sign Language. As the corpora from which I draw my data were primarily created using different standards for annotation, and in different software (the PJM and DGS corpora in iLex, and the RSL corpus in ELAN), this paper details the choices faced and decisions made in the preparation phase of the large-scale cross-linguistic corpus-based study.

Theoretical Background & Motivation

The topic of gesture and gesticulation has been tackled more often by spoken language (SpL) than SL linguists. Gestures, defined in SpLs as "visible actions of the hand, body, and face that are intentionally used to communicate" (Kendon, 1986, 2004, following Özyürek, 2012), are usually seen as integrated into the communication system, being another part of language, alongside speech (Özyürek, 2012). This view of gestures is supported by the fact that, in SpLs, gestures are most often produced in a different modality than speech (e.g., Goldin-Meadow, 2003; Kendon, 2004; McNeill, 2005). They are easily distinguishable from fully syntactic elements just by being "shown" and not "said" (note the common notion of gestures as being "nonverbal"). Elements that are "shown" while a spoken word/clause is being uttered are called co-speech gestures. However, this is not the case for gestures accompanying SLs, in which there is no modality difference between lexical and gestural elements. The fact that both signs and gestures in SLs are "shown" has led researchers to try to establish a more prominent relationship between them than has ever been argued for SpLs. Namely, it has been claimed that some of the elements that in SpL linguistics are referred to as gestures, when present in SLs, take on a grammatical function in a process known as grammaticalization, and instead are referred to as grammatical markers. This has been stated with respect to both non-manual elements, e.g., headshaking, and manual elements, e.g., palm-up (van Loon et al., 2014). This approach to dealing with gestural elements in SLs stems from the fact that SL researchers "naturally adopted the theoretical and analytic tools that were established in spoken language linguistics" (Lepic, 2019, p. 3). Using these tools on SL data has led them to establish strict claims about the lexicalization and grammaticalization of certain elements in some SLs (i.e., multiword expressions and morphologically complex signs (Lepic, 2019)). However, some recent large-scale corpus-based studies provide evidence that contradicts these previous claims. It has been shown that elements serving as co-speech gestures in SpLs, when studied on the basis of SL corpus data, turn out to function in SLs in a similar way to how they do in SpLs (e.g., Johnston, 2018; Kuder, 2021 for headshaking), suggesting that they should not have been described as grammaticalized as previously stated.
If claims must be made about the nature of these elements in SLs, then adopting a usage-based framework "alleviates the burden for sign language linguists to determine whether or not linguistic constructions have become 'lexicalized'" (Lepic, 2019, p. 1) or, in this case, grammaticalized. Instead, by focusing only on the degree of analysability (Lepic, 2019) of an element, we can compare to what extent each element has been conventionalized (e.g., Schmid, 2020). My current project follows the corpus-based approach and applies it to manual gestural elements present in SLs, to help gain a new perspective on the analysability of gestural elements in SLs and add to the discussion about the nature and role of gestural elements in SL discourse. The project is motivated by the need to conduct comparative studies of gestures across different sign languages, which has been directly expressed by other authors (here with respect to palm-up): "there have already been several insightful corpus-based treatments of the palm-up in sign, but especially valuable would be further studies that compare use of the form in different sign languages using the same analytic criteria and theoretical framework. Such an approach would be critical in distinguishing crosslinguistic patterns from language-specific particulars" (Cooperrider et al., 2018, p. 12).

Scope of the Study and Data Sources

My current study focuses on two manual activities present both in SLs and SpLs:
• the palm-up: a multifunctional manual activity taking the form of rotating one's forearms so that the palms of the hands face upward (e.g., Cooperrider et al., 2018, among others; see Figure 1);
• the throw-away: the action of an open hand going downward, having a common meaning of "never mind" or "not important" (Bressem and Müller, 2014; see Figure 2).

Throw-away has so far only been studied for co-speech gesture (Bressem and Müller, 2014, 2017; Francis et al., 2022). Palm-up, on the other hand, is a manual form that has received a lot of scientific attention. It has been thoroughly studied in a number of sign languages: New Zealand Sign Language (McKee and Wallingford, 2011), Sign Language of the Netherlands (van Loon et al., 2014), Danish Sign Language (Engberg-Pedersen, 2002) and American Sign Language (Conlin et al., 2003). Small-scale studies of palm-up are also present for German Sign Language (Volk, 2016) and Russian Sign Language (Bauer, 2019). Preliminary comparative corpus-based studies of palm-up were also undertaken for French Belgian and Catalan Sign Languages (Gabarró-López, 2020). There are also analyses exploring the origin and relations of the element in signed and spoken communication (Cooperrider et al., 2018). No large-scale and entirely corpus-based study has been conducted across multiple sign languages to compare the use of these two elements, which my study will provide.

My current project is based on naturalistic corpus data extracted from the PJM, DGS and RSL corpora, all of which have open-access repositories. A substantial part of the PJM corpus is made publicly accessible as the "Open Repository of the Polish Sign Language Corpus" (Wójcicka et al., 2020; Kuder et al., this volume; https://www.korpuspjm.uw.edu.pl/en). The DGS corpus project is accessible as the "Public DGS Corpus" (with three different levels of access; Konrad et al., 2020; https://www.sign-lang.uni-hamburg.de/meinedgs/ling/start_en.html) and the RSL corpus as the "Online Russian Sign Language Corpus" (Burkova, 2015; http://rsl.nstu.ru).
Making Datasets Comparable

As all three corpora were created separately and published in different ways, the process of making my language material comparable involved three main questions: (I) Which software should be used for annotation? (II) How to choose comparable data samples? (III) How best to create an annotation schema that builds on the annotations already present in all three corpora?

Software

All three corpus projects were created and are published in different ways. Both the PJM and DGS corpora were primarily created with the use of iLex (Hanke and Storz, 2008), while the RSL corpus was made using ELAN (Crasborn and Sloetjes, 2008). Using two different tools throughout the project would make comparison difficult, if not impossible. However, all files in the repositories of the PJM and DGS corpora are available to download both in iLex and ELAN formats. Therefore, I decided to work with only the ELAN files throughout my whole project. Importing the RSL annotation files into iLex would have been possible but was deemed unnecessary for a project conducted by an individual. If the study had been conducted by a project team that needed to work on the annotation files at the same time, then using iLex would have been recommended instead.

Data Samples

To obtain comparable results, the data samples had to be chosen carefully, as each of the corpora features a different number of recorded informants and different lengths of recorded texts. A sample of 16 informants from each corpus was picked to be annotated. Each sample is balanced with respect to gender (8 males & 8 females) and age (4 informants, 2 males & 2 females, from each of the age groups: 18-30; 31-45; 46-60; 60+). As the geographical division of the data in the Polish Sign Language Corpus mirrors the division of Poland into 16 voivodeships, my sample includes one informant from each part of the country. The DGS corpus is also balanced geographically, following the division of the country into 13 regions which correspond to the location of current and former Deaf schools. I thus decided to include one informant from each of the regular regions and two from the three biggest ones: Berlin, Leipzig and Nürnberg. The data from the Russian Sign Language corpus was collected in two places: Moscow and Novosibirsk. Therefore, I decided to include 8 informants from each of the two regions in the RSL sample.

The corpora differ also when it comes to the publication format of the publicly available files: approx. half of the files from the PJM Open Repository present signers talking in pairs and half of them present single signers. In the DGS files signers are almost always presented in pairs. Most of the RSL files only show one informant at a time. Due to the different formats of the three corpora, only the material coming from a single signer will be used in the study. For the dialogical tasks which show people signing in pairs, only data coming from one informant will be annotated per task. The next decision was to choose suitable texts produced by the informants so that the final samples would be as similar as possible with respect to text types and length. This was the most challenging part of the preparation phase, as here more than elsewhere I was limited to the material present in the open-access corpora repositories.
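Once the sample is assembled, working with the downloaded ELAN files can be scripted rather than done by hand. Below is a minimal sketch, assuming the pympi-ling package; the file name and the tier label are hypothetical stand-ins, since the actual tier names differ across the three corpora.

```python
# Minimal sketch: counting occurrences of one gloss in a single ELAN file.
# Assumes the pympi-ling package; the file name and tier label are
# hypothetical and differ across the three corpora.
from pympi import Eaf

def count_gloss(eaf_path, tier_name, target_gloss):
    """Count annotations on the given tier whose value equals target_gloss."""
    eaf = Eaf(eaf_path)
    # Each annotation is a (start_ms, end_ms, value, ...) tuple.
    return sum(1 for ann in eaf.get_annotation_data_for_tier(tier_name)
               if ann[2] == target_gloss)

if __name__ == "__main__":
    n = count_gloss("pjm_informant_01.eaf", "GlossDH", "PALM-UP")
    print(f"palm-up tokens on the dominant-hand gloss tier: {n}")
```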
My final choices are presented in Table 1.

        dialogue    narrative/monologue    retelling
PJM     14 texts    24 texts               37 texts
DGS     5 texts     38 texts               3 texts
RSL     1 text      42 texts               27 texts

Table 1: The distribution of text types in the data samples from the three corpora.

Existing Annotations

The biggest obstacle faced in the data preparation is the fact that the annotation schemas used in the original files from all three repositories are not identical, albeit similar. As none of the present schemas was detailed enough to provide a good template for the study of gestural elements, a new schema had to be created. It had to be developed in such a way that it would make use of the existing annotations and at the same time grasp all features of the articulatory elements important from the point of view of my study. This new schema needed to be developed in such a way that it could be applied to the files coming from all three corpora. Only the tiers appearing consistently in all three datasets could be used consistently in the study. These were limited to: tiers for glosses for the dominant and non-dominant hand, and free translation. A comparison of all tiers existing in the files prior to starting the study is presented in Table 2. (As the files from the DGS corpus show two informants at the same time, the relevant tiers are doubled to present the annotations for both signers separately.)

The tiers for glosses and translations were used in the study in their present form. No alterations were made to the glossing and translating conventions. Even though they were not identical in all three datasets, they are similar enough from the point of view of the study, which is not targeted to research purely lexical elements. Some tiers present in single datasets were important from the point of view of the current study (e.g., tiers for coding mouthing and non-manual elements). In such cases, existing data could already be used as is, but needed to be annotated from scratch for the remaining datasets.

The New Annotation Schema

The new schema was built based on the reports present in the literature concerning the elements important when studying manual gestural elements in SLs (e.g., Cooperrider et al., 2018) and my own experience in building and using SL corpora (e.g., Kuder et al., 2018). The annotation process consists of four steps: (1) identifying all occurrences of palm-up and throw-away and defining their manual form; (2) defining the non-manual features associated with a given occurrence; (3) defining the function of the occurrence; (4) delineating the clauses that the occurrences are contained in. As all three corpora feature pre-existing glosses for the two targeted manual elements (even though they are glossed differently in each of the corpora), the basis of step (1) was already prepared in all three datasets. After identifying each occurrence, I coded for (each letter corresponds to a single tier in the annotation schema; see Figure 3): a) manual type (is it a palm-up or throw-away and is it one- or two-handed), b) manual subtype (following Kendon, 2004 and Cooperrider et al., 2018, four subtypes of palm-up were distinguished: lateral, presentational, addressed and pointing). In step (2) I marked: c) placing in the signing space, d) handshape assimilation (if present), e) nonmanual elements on the body, f) nonmanual elements on the head, g) nonmanual elements on the face, h) gaze of the signer (if distinguishable by the naked eye), i) mouthing/mouth gesture. If needed, any additional information was added on the tier called j) "comment". Step (3) consisted of tagging for: k) function of the palm-up, l) function of the throw-away, m) lexical meaning of palm-up (if present), n) lexical meaning of throw-away (if present). Even though the files are equipped with pre-existing glosses for both palm-up and throw-away and with written translations, during the annotation process the whole video files are inspected sign by sign.
This is needed to properly grasp the context of signing, which is crucial for establishing the function of the given manual element. Ambiguous cases are discussed with signers of each language. The functional analysis was conducted based on pre-existing corpus annotations, my knowledge of the languages, the observed context of signing, and consultations with users of the three target languages. The initial set of function tags was based on the literature and then later augmented while studying the data, as not all of the functions I observed were previously reported on in the literature. I ended up with approx. 50 detailed function tags, which were later grouped into four broader categories (see section 5.3 for details). Coding each occurrence with respect to the 14 listed tiers constitutes the first round of annotation for any given file. Annotations from these tiers are being used for cross-linguistic frequency counts and analyses of the correlation of form and functions of the manual elements in question (see sections 5.2 & 5.3 for preliminary results).

Step (4) of annotation (the sentential annotation) serves the purpose of distinguishing "basic articulatory chunks of propositional meaning" (Johnston, 2019). It follows the protocol for clause-like unit (CLU) tagging proposed by Johnston (2019) and adapted during the creation of the Polish Sign Language Corpus. This part of annotation consists of defining the boundaries of CLUs and then distinguishing their predicates, main arguments, and peripheral elements. The predicates and arguments are tagged for the macro roles and semantic roles they exhibit in the clauses. They are also marked with tags for parts of speech; in this process, I take into consideration all issues connected with distinguishing parts of speech (PoS) in sign languages (Schwager and Zeshan, 2008) and employ a usage-based notion of PoS (Linde-Usiekniewicz and Rutkowski, 2016) which focuses on the usage of a given sign in a given context. The types of the CLUs and the dependencies between the clauses are then marked, before the English translation is added. This subsection of my annotation schema therefore contains the following tiers (see also Figure 3): o) CLU (used for marking the scope of the clause), p) arguments in the CLU (used for marking the predicate, its arguments, and peripherals), q) macro roles in the CLU, r) semantic roles in the CLU, s) part of speech, t) sentence type, u) type of CLU, v) CLU within CLU (used for marking dependencies between the clauses), w) English translation (on the basis of the written translations already present in the corpora). Data collected in this round of annotation will be used in the future stages of the project for establishing the position of the manual elements in question within sign language clauses and whether there is a correlation between the position in the clause and a specific function or meaning of palm-up and throw-away.

Current State of the Project

Annotated Data Sample

As the project is still ongoing, so far the material coming from 9 informants from each of the corpora has been annotated with the first round of annotation. An overview of the annotated sample is presented in Tables 3 and 4.

Preliminary Findings: Quantitative Analysis

As previously mentioned, the frequency analysis was based mostly on the pre-existing glosses present in all three corpora.
However, aside from just targeting the existing glosses, I also examined the videos sign by sign, so as not to miss any instances of the manual forms (which may have been tagged with different labels than the anticipated ones). This was also needed for the functional analyses described below. Fully understanding what is being signed was crucial for properly determining the functions of the manual elements, as they are heavily context-based. The frequency of the occurrence of palm-up and throw-away in all three data samples is summarized in Table 5.

The findings are consistent with the literature reports about the frequency of palm-up in other SLs of the world. For example, in a study of lexical frequency in British Sign Language (BSL), Fenlon and colleagues (2014) found that the percentage of palm-up occurrence stays at 5.5%, making palm-up the second most frequent type of manual activity in the BSL data. They compared it to the Australian Sign Language (Auslan) data, in which the occurrence rate stays at 3.6% (Fenlon et al., 2014). In New Zealand Sign Language (NZSL), palm-up comprises 5% of all manual signs in the corpus and is the second most frequent sign type in the studied sample (McKee and Wallingford, 2011). In the next phases of the project, I will investigate the slightly higher occurrence rate of palm-up in DGS than in the other two languages. When it comes to throw-away I have fewer possibilities for cross-linguistic comparison, but the percentages seem to be similar across the studied languages. What is more, these figures are consistent when checked against the whole of the PJM corpus, which currently comprises approx. 706,233 glosses, of which palm-up is the second most common manual activity with approx. 30,558 occurrences (4.33%). Following this is throw-away with 7,134 occurrences, which puts its frequency percentage at 1.01%. The fact that the used method yields results comparable with the literature reports about similar elements in other SLs shows that the chosen apparatus is working as planned.

Preliminary Findings: Qualitative Analysis

If the data prepared with the use of the newly formed annotation schema is adequate, then it will allow for a cross-modal comparison with what has been reported about palm-up and throw-away in co-speech gesture. This can be done on the basis of step (3) in the annotation process: the analysis of the elements' functions. As mentioned previously, all the detailed functions of the studied manual elements were grouped into four categories based on the type of function. The first three categories (van Loon et al., 2014; Bauer, 2019), which are also used to describe the functions of palm-up in co-speech gesture (cf. Ferré, 2012), are:
• Expressing modal meanings:
  o positive (e.g., agreement; revelation; surprise);
  o negative (e.g., lack of knowledge, lack of understanding, lack of interest, lack of ability; negation; surprise; annoyance; disappointment);
  o neutral (e.g., hesitation; hypotheticality; reinforcement of the stance);
• Discourse regulation: e.g., turn/topic opening or ending; response to the interlocutor's question/stance; connecting sentences;
• Conveying coherence: e.g., meta-comment; rhetorical question; self-correction.
My data suggest that all the functions performed by throw-away in all three SLs also fit into this categorisation. The last category, labelled as "conveying lexical meaning", features all occurrences of both manual activities that were coded with lexical glosses by the original annotators. This tag was inserted in the "function" tier and the lexical meaning was specified on another annotation level (see the tiers labelled "lexical meaning of palm-up" and "lexical meaning of throw-away" in Figure 3).
The consistency of co-occurrence of palm-up and throw-away with particular lexical glosses raises an important question about the conventionalization level of the elements in question and the reports of palm-up functioning as a grammatical marker (van Loon et al., 2014). Some of the meanings consistently co-occurring with palm-up and throw-away in the three SLs also possess different, fully lexicalized, manual forms in their lexicons (e.g., NOT-HAVE, NOT-BE, NOT-KNOW in PJM, which I found to be associated with palm-up; or BAD, TO-LET, DROP in DGS, which I found to be associated with throw-away). But the signers occasionally chose to substitute them with palm-up or throw-away and were understood both by the interlocutor and later by the annotators, who chose to gloss the occurrence with a lexical gloss rather than a gestural marker. Future efforts within the study will be targeted towards explaining this issue within the usage-based framework (Lepic, 2019) and towards explaining the similarity of the functions of palm-up and throw-away observed in both the signed and spoken modality.

Conclusion

The aim of this paper was primarily to show the preparation phase of a comparative corpus-based project when dealing with multiple SL corpora. The chosen methodology and annotation schema appear to be working well enough to provide adequate data to already allow preliminary conclusions about the nature of the analysed manual activities to be drawn. The three issues connected to the topic of data comparability raised in Section 4 can be assessed as follows. (I) Performing the annotations in ELAN was a good decision due to the very powerful search engine that is built into the software. Searching throughout annotated files is a key element of calculating the results. Searching in ELAN is more straightforward for a researcher without a programming background than searching within iLex, which requires knowledge of SQL queries. The central database functionality of iLex was not needed for this project but would make iLex the preferred tool in any multi-annotator setting. (II) The chosen data sample seems to be representative of the language usage, as the obtained quantitative results are consistent with existing literature reports about palm-up in other SLs. (III) The developed annotation schema, when applied to the chosen data sample, is providing adequate information about the frequency, form, and function of the two studied manual elements in all three SLs and allows for both cross-language and cross-modal comparison with the previous literature reports about the same topic in both signed and spoken languages. If anything, the schema might be too detailed. When it comes to coding for eye-gaze, for example, it is unclear at this point if the corpus material is providing adequate data. It is hard to delineate the features that affect the signer's eye-gaze in the conversational data. Probably eye-gaze studies should mainly be based on data obtained with the use of an eye tracker. As mentioned previously, the current project is still ongoing.
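The batch searches mentioned in point (I) can also be reproduced outside ELAN's interface, over the exported files. A minimal sketch follows, again assuming pympi-ling; the folder layout and the tier label are hypothetical, and the actual tier names in the files may differ.

```python
# Sketch: tallying function tags over a folder of annotated ELAN files.
# Assumes pympi-ling; folder layout and tier label are hypothetical.
from collections import Counter
from pathlib import Path
from pympi import Eaf

def tally_functions(folder, tier_name="function of palm-up"):
    counts = Counter()
    for eaf_path in Path(folder).glob("*.eaf"):
        eaf = Eaf(str(eaf_path))
        if tier_name not in eaf.get_tier_names():
            continue  # file not yet annotated with this tier
        for ann in eaf.get_annotation_data_for_tier(tier_name):
            counts[ann[2]] += 1  # ann = (start_ms, end_ms, value, ...)
    return counts

if __name__ == "__main__":
    for tag, n in tally_functions("pjm_sample/").most_common(10):
        print(f"{tag}\t{n}")
```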
In order to gain a better understanding of the actual usage of the manual elements in question and to better understand the level of their conventionalisation, the next stages of my project will be devoted to conducting:
• analysis of the co-occurrence of both gestures' types and subtypes with specific nonmanual markers;
• analysis of the correlation between the gestures' types and subtypes and their function;
• sociolinguistic analyses of the usage of the gestures across genders and age groups;
• CLU (sentential) coding and analysis;
• a more detailed comparison of the gestures' usage between SLs and co-speech gesture.
The annotation schema has been prepared in a way that should make it possible to tackle all of these topics. However, assessing the choices and decisions made along the way will have to be done again, upon completion of the project (in the next 12 months). With the results of this further analysis, I hope to be able to add more direct claims to the discussion about the conventionalisation of palm-up and throw-away in the three studied SLs, as previously discussed in the theoretical background. If the assessment yields positive results, in the future this project might serve as a basis for creating a blueprint for other comparative corpus-based studies.

Figure 1: A palm-up (photo from the PJM corpus).
Figure 2: A throw-away (photo from the PJM corpus).
Figure 3: Annotation schema (photo from the PJM corpus).
Table 2: Overview of the annotation schemas used in the open repositories of all three corpora prior to starting the current study.
Table 3: Overview of the annotated dataset.

        18-30   31-45   46-60   60+
PJM  F    1       1       -      2
     M    2       1       1      1
DGS  F    1       1       -      2
     M    1       2       1      1
RSL  F    1       1       2      -
     M    2       1       1      1

Table 4: Age and gender of informants.
Table 5: The frequency of both manual elements in the datasets.

Acknowledgements

My current research is financially supported by the Polish National Agency for Academic Exchange within the Bekker Programme (edition ...).

References

Bauer, A. (2019). Nonmanual components with palm-up in Russian SL. Paper presented at the 93rd LSA Annual Meeting, New York, NY, USA.
Bressem, J. and Müller, C. (2014). The family of Away gestures: Negation, refusal, and negative assessment. In C. Müller et al. (Eds.), Body - Language - Communication. Berlin: De Gruyter Mouton, pp. 1592-1604.
Bressem, J. and Müller, C. (2017). The "Negative-Assessment-Construction" - A multimodal pattern based on a recurrent gesture? Linguistic Vanguard, 3:1-9.
Conlin, F., Hagstrom, P. and Neidle, C. (2003). A particle of indefiniteness in American Sign Language. Linguistic Discovery, 2(1):1-21.
Cooperrider, K., Abner, N. and Goldin-Meadow, S. (2018). The Palm-Up Puzzle: Meanings and Origins of a Widespread Form in Gesture and Sign. Frontiers in Communication, 3(23). https://doi.org/10.3389/fcomm.2018.00023.
Crasborn, O. and Sloetjes, H. (2008). Enhanced ELAN functionality for sign language corpora. In Proceedings of the 3rd Workshop on the Representation and Processing of Sign Languages: Construction and Exploitation of Sign Language Corpora, 6th International Conference on Language Resources and Evaluation (LREC 2008), pages 39-43, Paris, France, May. European Language Resources Association (ELRA).
Engberg-Pedersen, E. (2002). Gestures in signing: The presentation gesture in Danish Sign Language. In R. Schulmeister & H. Reinitzer (Eds.), Progress in Sign Language Research: In Honor of Siegmund Prillwitz. Hamburg: Signum, pp. 143-162.
Fenlon, J., Schembri, A., Rentelis, R., Vinson, D. and Cormier, K. (2014). Using conversational data to determine lexical frequency in British Sign Language: The influence of text type. Lingua, 143:187-202.
Ferré, G. (2012). Functions of three open-palm hand gestures. Multimodal Communication, 1(1):5-20.
Francis, N., Grosz, P.G. and Patel-Grosz, P. (2022). Analyzing the throwing away gesture as a discourse management device. Paper presented at the DGfS-Workshop: Visual Communication. New Theoretical and Empirical Developments, Tübingen, Germany.
Gabarró-López, S. (2020). Are discourse markers related to age and educational background? A comparative account between two sign languages. Journal of Pragmatics, 156:68-82.
Goldin-Meadow, S. (2003). Hearing Gesture: How Our Hands Help Us Think. Cambridge, MA: Harvard University Press.
Hanke, T. and Storz, J. (2008). iLex - A database tool for integrating sign language corpus linguistics and sign language lexicography. In Proceedings of the 3rd Workshop on the Representation and Processing of Signed Languages: Construction and Exploitation of Sign Language Corpora, International Conference on Language Resources and Evaluation (LREC 2008), pages 64-67, Paris, France, May. European Language Resources Association (ELRA).
Johnston, T. (2009). Creating a corpus of Auslan within an Australian national corpus. In M. Haugh, K. Burridge, J. Mulder and P. Peters (Eds.), Selected Proceedings of the 2008 HCSNet Workshop on Designing the Australian National Corpus: Mustering Languages. Somerville: Cascadilla Proceedings Project, pp. 87-95.
Johnston, T. (2018). A corpus-based study of the role of headshaking in negation in Auslan (Australian Sign Language): Implications for signed language typology. Linguistic Typology, 22(2):185-231.
Johnston, T. (2019). Auslan Corpus Annotation Guidelines. August 2019 version, manuscript, Macquarie University, Sydney, Australia.
Kendon, A. (1986). Some reasons for studying gesture. Semiotica, 62(1-2):3-28.
Kendon, A. (2004). Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Kimmelman, V. (2019). Information Structure in Sign Languages: Evidence from Russian Sign Language and Sign Language of the Netherlands. Berlin, Boston: De Gruyter Mouton.
Kuder, A., Filipczak, J., Mostowski, P., Rutkowski, P. and Johnston, T. (2018). What corpus-based research on negation in Auslan and PJM tells us about building and using sign language corpora. In Proceedings of the 8th Workshop on the Representation and Processing of Sign Languages: Involving the Language Community, Language Resources and Evaluation Conference (LREC 2018), pages 101-106, Miyazaki, Japan, May. European Language Resources Association (ELRA).
Kuder, A. (2021). Negation markers in Polish Sign Language (PJM). Sign Language and Linguistics, 24(1):118-131.
Kuder, A., Wójcicka, J., Mostowski, P. and Rutkowski, P. (this volume). Open Repository of the Polish Sign Language Corpus: Publication project of the Polish Sign Language Corpus. In Proceedings of the 10th Workshop on the Representation and Processing of Sign Languages: Multilingual Sign Language Resources, Language Resources and Evaluation Conference (LREC 2022), Marseille, France, June. European Language Resources Association (ELRA).
Lepic, R. (2019). A usage-based alternative to "lexicalization" in sign language linguistics. Glossa: A Journal of General Linguistics, 4(1):23, 1-30. https://doi.org/10.5334/gjgl.840.
Linde-Usiekniewicz, J. and Rutkowski, P. (2016). The division into parts of speech in the Corpus-based Dictionary of Polish Sign Language. In T. Margalitadze & G. Meladze (Eds.), Proceedings of the XVII EURALEX International Congress: Lexicography and Linguistic Diversity. Tbilisi: Ivane Javakhishvili Tbilisi University Press, pp. 375-388.
McKee, R. and Wallingford, S. (2011). "So, well, whatever": Discourse functions of palm-up in New Zealand SL. Sign Language & Linguistics, 14:213-247.
McNeill, D. (2005). Gesture and Thought. Chicago: University of Chicago Press.
Müller, C. (2004). Forms and uses of the Palm Up Open Hand: A case of a gesture family? In C. Müller & R. Posner (Eds.), The Semantics and Pragmatics of Everyday Gestures. Berlin: Weidler, pp. 233-256.
Oomen, M. and Kimmelman, V. (2019). Body-anchored verbs and argument omission in two sign languages. Glossa: A Journal of General Linguistics, 4(1):42.
Özyürek, A. (2012). Gesture. In R. Pfau, M. Steinbach & B. Woll (Eds.), Sign Language: An International Handbook. Berlin/Boston: De Gruyter Mouton, pp. 626-646.
Pfau, R. and Quer, J. (2004). On the syntax of negation and modals in LSC and DGS. Paper presented at the 26. Jahrestagung der Deutschen Gesellschaft für Sprachwissenschaft, Mainz, Germany.
Schmid, H.-J. (2020). The Dynamics of the Linguistic System: Usage, Conventionalization, and Entrenchment. Oxford Scholarship Online.
Schwager, W. and Zeshan, U. (2008). Word classes in sign languages: Criteria and classifications. Studies in Language, 32(3):509-545.
van Loon, E., Pfau, R. and Steinbach, M. (2014). The grammaticalization of gestures in sign languages. In C. Müller et al. (Eds.), Body - Language - Communication. Berlin: De Gruyter Mouton, pp. 720-730.
Volk, E. (2016). Discourse functions of palm-up in German SL (DGS). Paper presented at the FEAST conference, Venice, Italy.
Zeshan, U. (Ed.) (2006). Interrogative and Negative Constructions in Sign Languages. Nijmegen: Ishara Press.

Language Resource References

Burkova, S. (2015). Russian Sign Language Corpus. Novosibirsk. http://rsl.nstu.ru/site/signlang.
Konrad, R., Hanke, T., Langer, G., Blanck, D., Bleicken, J., Hofmann, I., Jeziorski, O., König, L., König, S., Nishio, R., Regen, A., Salden, U., Wagner, S., Worseck, S., Böse, O., Jahn, E. and Schulder, M. (2020). MY DGS - annotated. Public Corpus of German Sign Language, 3rd release [Dataset]. Universität Hamburg. https://doi.org/10.25592/dgs.corpus-3.0.
Wójcicka, J., Kuder, A., Mostowski, P. and Rutkowski, P. (2020). Open Repository of the Polish Sign Language Corpus. Warsaw: Faculty of Polish Studies, University of Warsaw. https://www.korpuspjm.uw.edu.pl/en.
402,181
Using Semantic Roles to Improve Question Answering
Shallow semantic parsing, the automatic identification and labeling of sentential constituents, has recently received much attention. Our work examines whether semantic role information is beneficial to question answering. We introduce a general framework for answer extraction which exploits semantic role annotations in the FrameNet paradigm. We view semantic role assignment as an optimization problem in a bipartite graph and answer extraction as an instance of graph matching. Experimental results on the TREC datasets demonstrate improvements over state-of-the-art models.
[ 2486369, 10661378, 34491971, 62182406, 6106375, 6541034, 2337034, 1143628, 5541486, 15290012 ]
Using Semantic Roles to Improve Question Answering

Dan Shen, Spoken Language Systems, Saarland University, Saarbruecken, Germany
Mirella Lapata, School of Informatics, University of Edinburgh, Edinburgh, UK

Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Prague, June 2007. Association for Computational Linguistics.

Shallow semantic parsing, the automatic identification and labeling of sentential constituents, has recently received much attention. Our work examines whether semantic role information is beneficial to question answering. We introduce a general framework for answer extraction which exploits semantic role annotations in the FrameNet paradigm. We view semantic role assignment as an optimization problem in a bipartite graph and answer extraction as an instance of graph matching. Experimental results on the TREC datasets demonstrate improvements over state-of-the-art models.

Introduction

Recent years have witnessed significant progress in developing methods for the automatic identification and labeling of semantic roles conveyed by sentential constituents. (The approaches are too numerous to list; we refer the interested reader to Carreras and Màrquez (2005) for an overview.) The success of these methods, often referred to collectively as shallow semantic parsing (Gildea and Jurafsky, 2002), is largely due to the availability of resources like FrameNet (Fillmore et al., 2003) and PropBank (Palmer et al., 2005), which document the surface realization of semantic roles in real world corpora. More concretely, in the FrameNet paradigm, the meaning of predicates (usually verbs, nouns, or adjectives) is conveyed by frames, schematic representations of situations. Semantic roles (or frame elements) are defined for each frame and correspond to salient entities present in the evoked situation. Predicates with similar semantics instantiate the same frame and are attested with the same roles. The FrameNet database lists the surface syntactic realizations of semantic roles, and provides annotated example sentences from the British National Corpus. For example, the frame Commerce Sell has three core semantic roles, namely Buyer, Goods, and Seller, each expressed by an indirect object, a direct object, and a subject (see sentences (1a)-(1c)). It can also be attested with non-core (peripheral) roles (e.g., Means, Manner, see (1d) and (1e)) that are more generic and can be instantiated in several frames, besides Commerce Sell. The verbs sell, vend, and retail can evoke this frame, but also the nouns sale and vendor.

By abstracting over surface syntactic configurations, semantic roles offer an important first step towards deeper text understanding and hold promise for a range of applications requiring broad coverage semantic processing. Question answering (QA) is often cited as an obvious beneficiary of semantic role labeling (Gildea and Jurafsky, 2002; Palmer et al., 2005; Narayanan and Harabagiu, 2004). Faced with the question Q: What year did the U.S. buy Alaska? and the retrieved sentence S:
...before Russia sold Alaska to the United States in 1867, a hypothetical QA system must identify that United States is the Buyer despite the fact that it is attested in one instance as a subject and in another as an object. Once this information is known, isolating the correct answer (i.e., 1867) can be relatively straightforward. Although conventional wisdom has it that semantic role labeling ought to improve answer extraction, surprisingly little work has been done to this effect (see Section 2 for details) and initial results have been mostly inconclusive or negative (Kaisser, 2006). There are at least two good reasons for these findings. First, shallow semantic parsers trained on declarative sentences will typically have poor performance on questions and generally on out-of-domain data. Second, existing resources do not have exhaustive coverage and recall will be compromised, especially if the question answering system is expected to retrieve answers from unrestricted text. Since FrameNet is still under development, its coverage tends to be more of a problem in comparison to other semantic role resources such as PropBank.

In this paper we propose an answer extraction model which effectively incorporates FrameNet-style semantic role information. We present an automatic method for semantic role assignment which is conceptually simple and does not require extensive feature engineering. A key feature of our approach is the comparison of dependency relation paths attested in the FrameNet annotations and raw text. We formalize the search for an optimal role assignment as an optimization problem in a bipartite graph. This formalization allows us to find an exact, globally optimal solution. The graph-theoretic framework goes some way towards addressing coverage problems related with FrameNet and allows us to formulate answer extraction as a graph matching problem. As a byproduct of our main investigation we also examine the issue of FrameNet coverage and show how much it impacts performance in a TREC-style question answering setting. In the following section we provide an overview of existing work on question answering systems that exploit semantic role-based lexical resources. Then we define our learning task and introduce our approach to semantic role assignment and answer extraction in the context of QA. Next, we present our experimental framework and data. We conclude the paper by presenting and discussing our results.

Related Work

Question answering systems have traditionally depended on a variety of lexical resources to bridge surface differences between questions and potential answers. WordNet (Fellbaum, 1998) is perhaps the most popular resource and has been employed in a variety of QA-related tasks ranging from query expansion, to axiom-based reasoning (Moldovan et al., 2003), passage scoring (Paranjpe et al., 2003), and answer filtering (Leidner et al., 2004). Besides WordNet, recent QA systems increasingly rely on syntactic information as a means of abstracting over word order differences and structural alternations (e.g., passive vs. active voice). Most syntax-based QA systems (Wu et al., 2005) incorporate some means of comparison between the tree representing the question and the subtree surrounding the answer candidate. The assumption here is that appropriate answers are more likely to have syntactic relations in common with their corresponding question. Syntactic structure matching has been applied to passage retrieval and answer extraction (Shen and Klakow, 2006).
Narayanan and Harabagiu (2004) were the first to stress the importance of semantic roles in answering complex questions. Their system identifies predicate argument structures by merging semantic role information from PropBank and FrameNet. Expected answers are extracted by performing probabilistic inference over the predicate argument structures in conjunction with a domain specific topic model. Sun et al. (2005) incorporate semantic analysis in their TREC05 QA system. They use ASSERT (Pradhan et al., 2004), a publicly available shallow semantic parser trained on PropBank, to generate predicate-argument structures which subsequently form the basis of comparison between question and answer sentences. They find that semantic analysis does not boost performance due to the low recall of the semantic parser. Kaisser (2006) proposes a question paraphrasing method based on FrameNet. Questions are assigned semantic roles by matching their dependency relations with those attested in the FrameNet annotations. The assignments are used to create question reformulations which are submitted to Google for answer extraction. The semantic role assignment module is not probabilistic, it relies on strict matching, and runs into severe coverage problems.

In line with previous work, our method exploits syntactic information in the form of dependency relation paths together with FrameNet-like semantic roles to smooth lexical and syntactic divergences between question and answer sentences. Our approach is less domain dependent and resource intensive than Narayanan and Harabagiu (2004); it solely employs a dependency parser and the FrameNet database. In contrast to Kaisser (2006), we model the semantic role assignment and answer extraction tasks numerically, thereby alleviating the coverage problems encountered previously.

Problem Formulation

We briefly summarize the architecture of the QA system we are working with before formalizing the mechanics of our FrameNet-based answer extraction module. In common with previous work, our overall approach consists of three stages: (a) determining the expected answer type of the question, (b) retrieving passages likely to contain answers to the question, and (c) performing a match between the question words and retrieved passages in order to extract the answer. In this paper we focus on the last stage: question and answer sentences are normalized to a FrameNet-style representation and answers are retrieved by selecting the candidate whose semantic structure is most similar to the question.

The architecture of our answer extraction module is shown in Figure 1.

Figure 1: Architecture of answer extraction.

Semantic structures for questions and sentences are automatically derived using the model described in Section 4 (Model I). A semantic structure SemStruc = ⟨p, Set(SRA)⟩ consists of a predicate p and a set of semantic role assignments Set(SRA). p is a word or phrase evoking a frame F of FrameNet. A semantic role assignment SRA is a ternary structure ⟨w, SR, s⟩, consisting of a frame element w, its semantic role SR, and a score s indicating to what degree SR qualifies as a label for w. For a question q, we generate a semantic structure SemStruc_q. Question words, such as what, who, when, etc., are considered expected answer phrases (EAPs). We require that EAPs are frame elements of SemStruc_q. Likely answer candidates are extracted from answer sentences following some preprocessing steps detailed in Section 6.
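The two structures just defined translate naturally into code. The following sketch is illustrative only; the class and field names are not from the paper.

```python
# Sketch of SemStruc = <p, Set(SRA)> and SRA = <w, SR, s>; names illustrative.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class SRA:
    w: str        # frame element (word or phrase)
    sr: str       # semantic role label, e.g. "Cognizer"
    score: float  # degree to which sr qualifies as a label for w

@dataclass
class SemStruc:
    predicate: str                 # word or phrase evoking a FrameNet frame
    frame: str                     # the evoked frame F
    assignments: set[SRA] = field(default_factory=set)

# For a question, the expected answer phrase (EAP) is required to be one of
# the frame elements; for an answer sentence, the candidate ac plays that part.
```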
For each candidate ac, we derive its semantic structure SemStruc_ac and assume that ac is a frame element of SemStruc_ac. Question and answer semantic structures are compared using a model based on graph matching detailed in Section 5 (Model II). We calculate the similarity of all derived pairs ⟨SemStruc_q, SemStruc_ac⟩ and select the candidate with the highest value as an answer for the question.

Semantic Structure Generation

Our method crucially exploits the annotated sentences in the FrameNet database together with the output of a dependency parser. Our guiding assumption is that sentences that share dependency relations will also share semantic roles as long as they evoke the same or related frames. This is motivated by much research in lexical semantics (e.g., Levin (1993)) hypothesizing that the behavior of words, particularly with respect to the expression and interpretation of their arguments, is to a large extent determined by their meaning. We first describe how predicates are identified and then introduce our model for semantic role labeling.

Predicate Identification

Predicate candidates are identified using a simple look-up procedure which compares POS-tagged tokens against FrameNet entries. For efficiency reasons, we make the simplifying assumption that questions have only one predicate, which we select heuristically: (1) verbs are preferred to other parts of speech, (2) if there is more than one verb in the question, preference is given to the verb with the highest level of embedding in the dependency tree, (3) if no verbs are present, a noun is chosen. For example, in Q: Who beat Floyd Patterson to take the title away?, beat, take away, and title are identified as predicate candidates and beat is selected as the main predicate of the question. For answer sentences, we require that the predicate is either identical or semantically related to the question predicate (see Section 5). In the example given above, the predicate beat evokes a single frame (i.e., Cause harm). However, predicates often have multiple meanings, thus evoking more than one frame. Knowing which is the appropriate frame for a given predicate impacts the semantic role assignment task; selecting the wrong frame will unavoidably result in erroneous semantic roles. Rather than disambiguating polysemous predicates prior to semantic role assignment, we perform the assignment for each frame evoked by the predicate.

Semantic Role Assignment

Before describing our approach to semantic role labeling we define dependency relation paths. A relation path R is a relation sequence ⟨r_1, r_2, ..., r_L⟩, in which each r_l (l = 1, 2, ..., L) is one of the predefined dependency relations, with a suffix indicating the traversal direction. An example of a relation path is R = ⟨subj_U, obj_D⟩, where the subscripts U and D indicate upward and downward movement in trees, respectively. Given an unannotated sentence whose roles we wish to label, we assume that words or phrases w with a dependency path connecting them to p are frame elements. Each frame element is represented by an unlabeled dependency path R_w which we extract by traversing the dependency tree from w to p. Analogously, we extract from the FrameNet annotations all dependency paths R_SR that are labeled with semantic role information and correspond to p.
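Path extraction of this kind amounts to walking from w up to the lowest common ancestor and down to p. A minimal sketch follows, assuming a simple parent-pointer tree representation that is not from the paper.

```python
# Sketch: extracting the relation path R_w by walking from w up to the lowest
# common ancestor and down to p. Assumes a parent-pointer tree where each
# node stores (parent, relation_to_parent); representation is illustrative.
def relation_path(tree, w, p):
    """tree: {node: (parent, rel)}; returns a tuple like ("subj_U", "obj_D")."""
    def ancestors(n):
        chain = [n]
        while tree[n][0] is not None:
            n = tree[n][0]
            chain.append(n)
        return chain
    up, down = ancestors(w), ancestors(p)
    common = next(n for n in up if n in down)
    path = [tree[n][1] + "_U" for n in up[:up.index(common)]]
    path += [tree[n][1] + "_D" for n in reversed(down[:down.index(common)])]
    return tuple(path)

# Toy tree: "discovered" is the root; "Who" its subject, "prions" its object.
tree = {"discovered": (None, ""), "Who": ("discovered", "subj"),
        "prions": ("discovered", "obj")}
print(relation_path(tree, "Who", "prions"))  # ("subj_U", "obj_D")
```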
We next measure the compatibility of labeled and unlabeled paths as follows:

$s(w, SR) = \max_{R_{SR} \in M} \left[ sim(R_w, R_{SR}) \cdot P(R_{SR}) \right]$   (2)

where M is the set of dependency relation paths for SR in FrameNet, and $sim(R_w, R_{SR})$ is the similarity between paths $R_w$ and $R_{SR}$, weighted by the relative frequency of $R_{SR}$ in FrameNet ($P(R_{SR})$). We consider both core and non-core semantic roles instantiated by frames with at least one annotation in FrameNet. Core roles tend to have more annotations in FrameNet and consequently are considered more probable.

Figure 2: Sample original bipartite graph (a) and its subgraph with edge covers (b). In each graph, the left partition represents frame elements and the right partition semantic roles.

We measure $sim(R_w, R_{SR})$ by adapting a string kernel to our task. Our hypothesis is that the more common substrings two dependency paths have, the more similar they are. The string kernel we use is similar to Leslie (2002) and defined as the sum of weighted common dependency relation subsequences between $R_w$ and $R_{SR}$. For efficiency, we consider only unigram and bigram subsequences. Subsequences are weighted by a metric akin to tf·idf which measures the degree of association between a candidate SR and the dependency relation r present in the subsequence:

$weight_{SR}(r) = f_r \cdot \log\left(1 + \frac{N}{n_r}\right)$   (3)

where $f_r$ is the frequency of r occurring in SR, N is the total number of SRs evoked by a given frame, and $n_r$ is the number of SRs containing r.

For each frame element we thus generate a set of semantic role assignments Set(SRA). This initial assignment can be usefully represented as a complete bipartite graph in which each frame element (word or phrase) is connected to the semantic roles licensed by the predicate and vice versa (see Figure 2a). Edges are weighted and represent how compatible the frame elements and semantic roles are (see equation (2)). Now, for each frame element w we could simply select the semantic role with the highest score. However, this decision procedure is local, i.e., it yields a semantic role assignment for each frame element independently of all other elements. We therefore may end up with the same role being assigned to two frame elements or with frame elements having no role at all. We remedy this shortcoming by treating the semantic role assignment as a global optimization problem. Specifically, we model the interaction between all pairwise labeling decisions as a minimum weight bipartite edge cover problem (Eiter and Mannila, 1997; Cormen et al., 1990). An edge cover is a subgraph of a bipartite graph such that each node is linked to at least one node of the other partition. This yields a semantic role assignment for all frame elements (see Figure 2b, where frame elements and roles are adjacent to an edge). Edge covers have been successfully applied in several natural language processing tasks, including machine translation (Taskar et al., 2005) and annotation projection (Padó and Lapata, 2006). Formally, optimal edge cover assignments are solutions of the following optimization problem:

$\max_{E \text{ is an edge cover}} \prod_{(nd_w, nd_{SR}) \in E} s(nd_w, nd_{SR})$   (4)

where $s(nd_w, nd_{SR})$ is the compatibility score between the frame element node $nd_w$ and semantic role node $nd_{SR}$. Edge covers can be computed efficiently in cubic time using algorithms for the equivalent linear assignment problem. Our experiments used Jonker and Volgenant's (1987) solver.
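To make equations (2)-(4) concrete, here is a minimal sketch. It scores a path pair with the unigram/bigram subsequence kernel weighted as in equation (3), and approximates the minimum weight edge cover with SciPy's linear assignment solver plus a greedy completion step; the paper computes the exact solution via the equivalent assignment problem, so this is an illustration of the idea rather than the authors' implementation. All data structures are assumptions.

```python
# Sketch of equations (2)-(4); all data structures are illustrative.
import math
import numpy as np
from scipy.optimize import linear_sum_assignment

def weight(f_r, n_total, n_with_r):
    """Equation (3): association of a relation r with a candidate role SR."""
    return f_r * math.log(1 + n_total / n_with_r)

def kernel_sim(path_w, path_sr, w_fn):
    """Sum of weighted unigram/bigram subsequences shared by the two paths."""
    shared = 0.0
    for n in (1, 2):
        grams_w = {tuple(path_w[i:i + n]) for i in range(len(path_w) - n + 1)}
        grams_s = {tuple(path_sr[i:i + n]) for i in range(len(path_sr) - n + 1)}
        for seq in grams_w & grams_s:
            shared += sum(w_fn(r) for r in seq)  # w_fn: relation -> weight
    return shared

def role_score(path_w, labeled_paths, w_fn):
    """Equation (2): best weighted match over FrameNet paths for one role."""
    return max(kernel_sim(path_w, p, w_fn) * rel_freq
               for p, rel_freq in labeled_paths)

def edge_cover(scores):
    """Equation (4), approximately: optimal assignment, then greedily attach
    every still-uncovered node to its best-scoring neighbour."""
    cost = -np.log(np.clip(scores, 1e-12, None))  # max product -> min sum
    rows, cols = linear_sum_assignment(cost)
    edges = set(zip(rows.tolist(), cols.tolist()))
    for i in set(range(scores.shape[0])) - {i for i, _ in edges}:
        edges.add((i, int(scores[i].argmax())))
    for j in set(range(scores.shape[1])) - {j for _, j in edges}:
        edges.add((int(scores[:, j].argmax()), j))
    return edges  # one (frame element, role) pair per covered edge
```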
Figure 3 shows the semantic role assignments generated by our model for the question Q: Who discovered prions? and the candidate answer sentence S: 1997: Stanley B. Prusiner, United States, discovery of prions... Here we identify two predicates, namely discover and discovery. The expected answer phrase (EAP) who and the answer candidate Stanley B. Prusiner are assigned the COGNIZER role. Note that frame elements can bear multiple semantic roles. By inducing a soft labeling we hope to render the matching of questions and answers more robust, thereby addressing to some extent the coverage problems associated with FrameNet.

Semantic Structure Matching

We measure the similarity between a question and its candidate answer by matching their predicates and semantic role assignments. Since SRs are frame-specific, we prioritize frame matching over SR matching. Two predicates match if they evoke the same frame or one of its hypernyms (or hyponyms). The latter are expressed by the Inherits From and Is Inherited By relations in the frame definitions. If the predicates match, we examine whether the assigned semantic roles match. Since we represent SR assignments as graphs with edge covers, we can also formalize SR matching as a graph matching problem. The similarity between two graphs is measured as the sum of similarities between their subgraphs. We first decompose a graph into subgraphs consisting of one frame element node w and the set of SR nodes connected to it. The similarity between two subgraphs $SubG_1$ and $SubG_2$ is then formalized as:

$Sim(SubG_1, SubG_2) = \sum_{\substack{nd_{SR_1} \in SubG_1,\ nd_{SR_2} \in SubG_2 \\ nd_{SR_1} = nd_{SR_2}}} \frac{1}{|s(nd_w, nd_{SR_1}) - s(nd_w, nd_{SR_2})| + 1}$   (5)

where $nd_{SR_1}$ and $nd_{SR_2}$ are semantic role nodes connected to a frame element node $nd_w$ in $SubG_1$ and $SubG_2$, respectively, and $s(nd_w, nd_{SR_1})$ and $s(nd_w, nd_{SR_2})$ are the edge weights between two nodes in the corresponding subgraphs (see (2)). Our intuition here is that the more semantic roles two subgraphs share for a given frame element, the more similar they are and the closer their corresponding edge weights should be. Edge weights are normalized by dividing by the sum of all edges in a subgraph.

Experimental Setup

Data

All our experiments were performed on the TREC02-05 factoid questions. We excluded NIL questions since TREC doesn't supply an answer for them. We used the FrameNet V1.3 lexical database. It contains 10,195 predicates grouped into 795 semantic frames and 141,238 annotated sentences. Figure 4 shows the number of annotated sentences available for different predicates. As can be seen, there are 3,380 predicates with no annotated sentences and 1,175 predicates with less than 5 annotated sentences. All FrameNet sentences, questions, and answer sentences were parsed using MiniPar (Lin, 1994), a robust dependency parser. As mentioned in Section 4, we extract dependency relation paths by traversing the dependency tree from the frame element node to the predicate node. We used all dependency relations provided by MiniPar (42 in total). In order to increase coverage, we combine all relation paths for predicates that evoke the same frame and are labeled with the same POS tag. For example, found and establish are both instances of the frame Intentionally create, but the database does not have any annotated sentences for found.v. Rather than assigning no role labels for found.v, our model employs the relation paths for the semantically related establish.v.
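The path-pooling strategy just described amounts to indexing the annotated paths by frame and POS tag rather than by predicate. A small sketch follows, with illustrative data structures; the underscore spelling of the frame name is an assumption.

```python
# Sketch of the coverage strategy above: pool FrameNet-annotated relation
# paths across predicates that evoke the same frame and share a POS tag, so
# that a predicate without annotations (found.v) can reuse paths from a
# related one (establish.v). Data structures are illustrative.
from collections import defaultdict

def pool_paths(annotated):
    """annotated: iterable of (predicate, pos, frame, labeled_path) tuples."""
    pooled = defaultdict(list)
    for _predicate, pos, frame, labeled_path in annotated:
        pooled[(frame, pos)].append(labeled_path)
    return pooled

annotated = [
    ("establish", "v", "Intentionally_create", ("subj_U",)),
    ("establish", "v", "Intentionally_create", ("obj_D",)),
]
pooled = pool_paths(annotated)
# found.v has no annotations of its own, but shares the frame and POS tag:
print(pooled[("Intentionally_create", "v")])  # paths reused for found.v
```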
Preprocessing

Here we summarize the steps of our QA system preceding the assignment of semantic structure and answer extraction. For each question, we recognize its expected answer type (e.g., in Q: Which record company is Fred Durst with? we would expect the answer to be an ORGANIZATION). Answer types are determined using classification rules similar to Li and Roth (2002). We also reformulate questions into declarative sentences following the strategy proposed in Brill et al. (2002). The reformulated sentences are submitted as queries to an IR engine for retrieving sentences with relevant answers. Specifically, we use the Lemur Toolkit [3], a state-of-the-art language-model-driven search engine. We work only with the 50 top-ranked sentences, as this setting performed best in previous experiments of our QA system. We also add to Lemur's output gold standard sentences, which contain and support an answer for each question. Specifically, documents relevant for each question are retrieved from the AQUAINT Corpus [4] according to TREC-supplied judgments. Next, sentences which match both the TREC-provided answer pattern and at least one question key word are extracted, and their suitability is manually judged by humans. The set of relevant sentences thus includes at least one sentence with an appropriate answer as well as sentences that do not contain any answer-specific information. This setup is somewhat idealized; however, it allows us to evaluate our answer extraction module in more detail (since when an answer is not found, we know it is the fault of our system). Relevant sentences are annotated with their named entities using Lingpipe [5], a MUC-based named entity recognizer. When we successfully classify a question with an expected answer type (e.g., ORGANIZATION in the example above), we assume that all NPs attested in the set of relevant sentences with the same answer type are candidate answers; in cases where no answer type is found (e.g., as in Q: What are prions made of?), all NPs in the relevant answers set are considered candidate answers.

Baseline

We compared our answer extraction method to a QA system that exploits solely syntactic information without making use of FrameNet or any other type of role semantic annotations. For each question, the baseline identifies key phrases deemed important for answer identification. These are verbs, noun phrases, and expected answer phrases (EAPs, see Section 3). All dependency relation paths connecting a key phrase and an EAP are compared to those connecting the same key phrases and an answer candidate. The similarity of question and answer paths is computed using a simplified version of the similarity measure [6] proposed in Shen and Klakow (2006).

Our second baseline employs Shalmaneser (Erk and Padó, 2006), a publicly available shallow semantic parser [7], for the role labeling task instead of the graph-based model presented in Section 4. The software is trained on the FrameNet annotated sentences using a standard feature set (see Carreras and Màrquez (2005) for details). We use Shalmaneser to parse questions and answer sentences. The parser makes hard decisions about the presence or absence of a semantic role. Unfortunately, this prevents us from using our method for semantic structure matching (see Section 5), which assumes a soft labeling. We therefore came up with a simple matching strategy suitable for the parser's output.
For question and answer sentences matching in their frame assignment, phrases bearing the same semantic role as the EAP are considered answer candidates. The latter are ranked according to word overlap (i.e., identical phrases are ranked higher than phrases with no overlap at all).

[6] Shen and Klakow (2006) use a dynamic time warping algorithm to calculate the degree to which dependency relation paths are correlated. Correlations for individual relations are estimated from training data whereas we assume a binary value (1 for identical relations and 0 otherwise). The modification was necessary to render the baseline system comparable to our answer extraction model, which is unsupervised.
[7] The software is available from http://www.coli.uni-saarland.de/projects/salsa/shal/.

Results

Our evaluation was motivated by the following questions: (1) How does the incompleteness of FrameNet impact QA performance on the TREC data sets? In particular, we wanted to examine whether there are questions for which in principle no answer can be found due to missing frame entries or missing annotated sentences. (2) Are all questions and their corresponding answers amenable to a FrameNet-style analysis? In other words, we wanted to assess whether questions and answers often evoke the same or related frames (with similar roles). This is a prerequisite for semantic structure matching and ultimately answer extraction. (3) Do the graph-based models introduced in this paper bring any performance gains over state-of-the-art shallow semantic parsers or more conventional syntax-based QA systems? Recall that our graph-based models were designed especially for the QA answer extraction task.

Our results are summarized in Tables 1-3. Table 1 records the number of questions to be answered for the TREC02-05 datasets (Total). We also give information regarding the number of questions which are in principle unanswerable with a FrameNet-style semantic role analysis. Column NoFrame shows the number of questions which don't have an appropriate frame or predicate in the database. For example, there is currently no predicate entry for sponsor or sink (e.g., Q: Who is the sponsor of the International Criminal Court? and Q: What date did the Lusitania sink?). Column NoAnnot refers to questions for which no semantic role labeling is possible because annotated sentences for the relevant predicates are missing. For instance, there are no annotations for win (e.g., Q: What division did Floyd Patterson win?) or for hit (e.g., Q: What was the Beatles' first number one hit?). This problem is not specific to our method, which admittedly relies on FrameNet annotations for performing the semantic role assignment (see Section 4). Shallow semantic parsers trained on FrameNet would also have trouble assigning roles to predicates for which no data is available. Finally, column NoMatch reports the number of questions which cannot be answered due to frame mismatches. Consider Q: What does AARP stand for? whose answer is found in S: The American Association of Retired Persons (AARP) qualify for discounts... The answer and the question evoke different frames; in fact, here a semantic role analysis is not relevant for locating the right answer. As can be seen, NoMatch cases are by far the most frequent. The number of questions remaining after excluding NoFrame, NoAnnot, and NoMatch are shown under the Rest heading in Table 1.

These results indicate that FrameNet-based semantic role analysis applies to approximately 35% of the TREC data. This means that an extraction module relying solely on FrameNet will have poor performance, since it will be unable to find answers for more than half of the questions being asked. We nevertheless examine whether our model brings any performance improvements on this limited dataset, which is admittedly favorable towards a FrameNet-style analysis. Table 2 shows the results of our answer extraction module (SemMatch) together with two baseline systems.
The first baseline uses only dependency relation path information (SynMatch), whereas the second baseline (SemParse) uses Shalmaneser, a state-of-the-art shallow semantic parser, for the role labeling task. We consider an answer correct if it is returned with rank 1. As can be seen, SemMatch is significantly better than both SynMatch and SemParse, whereas the latter is significantly worse than SynMatch.

Although promising, the results in Table 2 are not very informative, since they show performance gains on partial data. Instead of using our answer extraction model on its own, we next combined it with the syntax-based system mentioned above (SynMatch, see also Section 6 for details). If FrameNet is indeed helpful for QA, we would expect an ensemble system to yield better performance over a purely syntactic answer extraction module. The two systems were combined as follows. Given a question, we first pass it to our FrameNet model; if an answer is found, our job is done; if no answer is returned, the question is passed on to SynMatch. Our results are given in Table 3. +SemMatch and +SemParse are ensemble systems using SynMatch together with the QA-specific role labeling method proposed in this paper and Shalmaneser, respectively. We also compare these systems against SynMatch on its own.

We can now attempt to answer our third question concerning our model's performance on the TREC data. Our experiments show that a FrameNet-enhanced answer extraction module significantly outperforms a similar module that uses only syntactic information (compare SynMatch and +SemMatch in Table 3). Another interesting finding is that the shallow semantic parser performs considerably worse in comparison to our graph-based models and the syntax-based system. Inspection of the parser's output highlights two explanations for this. First, the shallow semantic parser has difficulty assigning accurate semantic roles to questions (even when they are reformulated as declarative sentences). And secondly, it tends to favor precision over recall, thus reducing the number of questions for which answers can be found. A similar finding has been reported for a PropBank-trained parser.

Conclusion

In this paper we assess the contribution of semantic role labeling to open-domain factoid question answering. We present a graph-based answer extraction model which effectively incorporates FrameNet-style role semantic information and show that it achieves promising results. Our experiments show that the proposed model can be effectively combined with a syntax-based system to obtain performance superior to the latter when used on its own. Furthermore, we demonstrate performance gains over a shallow semantic parser trained on the FrameNet annotated corpus. We argue that performance gains are due to the adopted graph-theoretic framework, which is robust to coverage and recall problems.

We also provide a detailed analysis of the appropriateness of FrameNet for QA. We show that performance can be compromised due to incomplete coverage (i.e., missing frame or predicate entries as well as annotated sentences) but also because of mismatching question-answer representations. The question and the answer may evoke different frames or the answer simply falls outside the scope of a given frame (i.e., in a non predicate-argument structure).
Our study shows that mismatches are relatively frequent and motivates the use of semantically informed methods in conjunction with syntax-based methods. Important future directions lie in evaluating the contribution of alternative semantic role frameworks (e.g., PropBank) to the answer extraction task and developing models that learn semantic roles directly from unannotated text without the support of FrameNet annotations (Grenager and Manning, 2006). Beyond question answering, we also plan to investigate the potential of our model for shallow semantic parsing, since our experience so far has shown that it achieves good recall.

Figure 3: Semantic structures induced by our model for a question and answer sentence

Figure 4: Distribution of numbers of predicates and annotated sentences; each sub-pie lists the number of predicates (above) with their corresponding range of annotated sentences (below)

Example FrameNet annotations for the predicate sell: a. [Lee]Seller sold a textbook [to Abby]Buyer. b. [Kim]Seller sold [the sweater]Goods. c. [My company]Seller has sold [more than three million copies]Goods. d. [Abby]Seller sold [the car]Goods [for cash]Means. e. [He]Seller [reluctantly]Manner sold [his rock]Goods.

Data    Total  NoFrame     NoAnnot   NoMatch      Rest
TREC02  444    87 (19.6)   29 (6.5)  176 (39.6)   152 (34.2)
TREC03  380    55 (14.5)   30 (7.9)  183 (48.2)   112 (29.5)
TREC04  203    47 (23.1)   14 (6.9)  67 (33.0)    75 (36.9)
TREC05  352    70 (19.9)   23 (6.5)  145 (41.2)   114 (32.4)
Table 1: Number of questions which cannot be answered using a FrameNet-style semantic analysis; numbers in parentheses are percentages of Total (NoFrame: frames or predicates are missing; NoAnnot: annotated sentences are missing; NoMatch: questions and candidate answers evoke different frames).

Model     TREC02   TREC03   TREC04   TREC05
SemParse  13.16    8.92     17.33    13.16
SynMatch  35.53*   33.04*   40.00*   36.84*
SemMatch  53.29*†  49.11*†  54.67*†  59.65*†
Table 2: System performance on a subset of the TREC datasets (see Rest column in Table 1); *: significantly better than SemParse; †: significantly better than SynMatch (p < 0.01, using a χ² test).

Model      TREC02   TREC03   TREC04   TREC05
SynMatch   32.88*   30.70*   35.95*   34.38*
+SemParse  25.23    23.68    28.57    26.70
+SemMatch  38.96*†  35.53*†  42.36*†  41.76*†
Table 3: System performance on the TREC datasets (see Total column in Table 1); *: significantly better than +SemParse; †: significantly better than SynMatch (p < 0.01, using a χ² test).

[2] The software is available from http://www.magiclogic.com/assignment.html.
[3] See http://www.lemurproject.org/ for details.
[4] This corpus consists of English newswire texts and is used as the main document collection in official TREC evaluations.
[5] The software is available from www.alias-i.com/lingpipe/.

Acknowledgements

We are grateful to Sebastian Padó for running Shalmaneser on our data. Thanks to Frank Keller and Amit Dubey for insightful comments and suggestions. The authors acknowledge the support of DFG (Shen; PhD studentship within the International Postgraduate College "Language Technology and Cognitive Systems") and EPSRC (Lapata; grant EP/C538447/1).
References

E. Brill, S. Dumais, M. Banko. 2002. An analysis of the AskMSR question-answering system. In Proceedings of the EMNLP, 257-264, Philadelphia, PA.
X. Carreras, L. Màrquez, eds. 2005. Proceedings of the CoNLL shared task: Semantic role labelling.
T. Cormen, C. Leiserson, R. Rivest. 1990. Introduction to Algorithms. MIT Press.
H. Cui, R. X. Sun, K. Y. Li, M. Y. Kan, T. S. Chua. 2005. Question answering passage retrieval using dependency relations. In Proceedings of the ACM SIGIR, 400-407. ACM Press.
T. Eiter, H. Mannila. 1997. Distance measures for point sets and their computation. Acta Informatica, 34(2):109-133.
K. Erk, S. Padó. 2006. Shalmaneser - a flexible toolbox for semantic role assignment. In Proceedings of the LREC, 527-532, Genoa, Italy.
C. Fellbaum, ed. 1998. WordNet: An Electronic Lexical Database. MIT Press, Cambridge, MA.
C. J. Fillmore, C. R. Johnson, M. R. Petruck. 2003. Background to FrameNet. International Journal of Lexicography, 16:235-250.
D. Gildea, D. Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245-288.
T. Grenager, C. D. Manning. 2006. Unsupervised discovery of a statistical verb lexicon. In Proceedings of the EMNLP, 1-8, Sydney, Australia.
R. Jonker, A. Volgenant. 1987. A shortest augmenting path algorithm for dense and sparse linear assignment problems. Computing, 38:325-340.
M. Kaisser. 2006. Web question answering by exploiting wide-coverage lexical resources. In Proceedings of the 11th ESSLLI Student Session, 203-213.
J. Leidner, J. Bos, T. Dalmas, J. Curran, S. Clark, C. Bannard, B. Webber, M. Steedman. 2004. The QED open-domain answer retrieval system for TREC 2003. In Proceedings of the TREC, 595-599.
C. Leslie, E. Eskin, W. S. Noble. 2002. The spectrum kernel: a string kernel for SVM protein classification. In Proceedings of the Pacific Biocomputing Symposium, 564-575.
B. Levin. 1993. English Verb Classes and Alternations: A Preliminary Investigation. University of Chicago Press, Chicago.
X. Li, D. Roth. 2002. Learning question classifiers. In Proceedings of the 19th COLING, 556-562, Taipei, Taiwan.
D. Lin. 1994. PRINCIPAR - an efficient, broad-coverage, principle-based parser. In Proceedings of the 15th COLING, 482-488.
D. Moldovan, C. Clark, S. Harabagiu, S. Maiorano. 2003. COGEX: A logic prover for question answering. In Proceedings of the HLT/NAACL, 87-93, Edmonton, Canada.
S. Narayanan, S. Harabagiu. 2004. Question answering based on semantic structures. In Proceedings of the 19th COLING, 184-191.
S. Padó, M. Lapata. 2006. Optimal constituent alignment with edge covers for semantic projection. In Proceedings of the COLING/ACL, 1161-1168.
M. Palmer, D. Gildea, P. Kingsbury. 2005. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71-106.
D. Paranjpe, G. Ramakrishnan, S. Srinivasa. 2003. Passage scoring for question answering via Bayesian inference on lexical relations. In Proceedings of the TREC, 305-310.
S. Pradhan, W. Ward, K. Hacioglu, J. Martin, D. Jurafsky. 2004. Shallow semantic parsing using support vector machines. In Proceedings of the HLT/NAACL, 141-144, Boston, MA.
D. Shen, D. Klakow. 2006. Exploring correlation of dependency relation paths for answer extraction. In Proceedings of the COLING/ACL, 889-896.
R. X. Sun, J. J. Jiang, Y. F. Tan, H. Cui, T. S. Chua, M. Y. Kan. 2005. Using syntactic and semantic relation analysis in question answering. In Proceedings of the TREC.
B. Taskar, S. Lacoste-Julien, D. Klein. 2005. A discriminative matching approach to word alignment. In Proceedings of the HLT/EMNLP, 73-80, Vancouver, BC.
M. Wu, M. Y. Duan, S. Shaikh, S. Small, T. Strzalkowski. 2005. University at Albany's ILQUA in TREC 2005. In Proceedings of the TREC, 77-83.
14,937,262
Towards an environment for the production and the validation of lexical semantic resources
We present the components of a processing chain for the creation, visualization, and validation of lexical resources (formed of terms and relations between terms). The core of the chain is a component for building lexical networks relying on Harris' distributional hypothesis applied on the syntactic dependencies produced by the French parser FRMG on large corpora. Another important aspect concerns the use of an online interface for the visualization and collaborative validation of the resulting resources.
[ 160744178 ]
Towards an environment for the production and the validation of lexical semantic resources

Mikaël Morardo (mickael.morardo@inria.fr), INRIA-Rocquencourt, Domaine de Voluceau, Rocquencourt B.P. 105, 78153 Le Chesnay, France
Éric Villemonte de la Clergerie, INRIA-Rocquencourt, Domaine de Voluceau, Rocquencourt B.P. 105, 78153 Le Chesnay, France

Keywords: terminology extraction, word clustering, visualization interface, collaborative interface, knowledge acquisition

We present the components of a processing chain for the creation, visualization, and validation of lexical resources (formed of terms and relations between terms). The core of the chain is a component for building lexical networks relying on Harris' distributional hypothesis applied on the syntactic dependencies produced by the French parser FRMG on large corpora. Another important aspect concerns the use of an online interface for the visualization and collaborative validation of the resulting resources.

Introduction

Each specialized domain tends to have its own set of concepts, instantiated by specialized terms represented by simple or multi-word expressions. Discovering these terms and their relationships is an important issue for providing useful lexical semantic resources (or lexicalized ontologies) for many NLP-based tasks (such as query expansion for search engines, semantic annotation of documents, question answering, translation, ...). However, hand-crafting such resources remains a fastidious task, which has to be replicated for many domains, and the resources have to be regularly updated to follow the evolution of a domain (in particular with the emergence of new terms). On the other hand, (unsupervised) acquisition tools are now able to extract automatically many interesting pieces of information from linguistically processed corpora. Unfortunately, these tools still make many errors and often miss important relations (suffering from weak recall). Our opinion is that human validation remains a necessary complement of automatic acquisition, but should be applied on rich data through well-conceived interfaces. Moreover, given the amount of data that often has to be validated, we advocate for collaborative interfaces. These motivations led us to develop a process flow that includes:

1. the deep linguistic processing of corpora (ranging from medium to large sized ones, specialized or not);
2. the extraction of (multi-word) terms and the discovery of semantic proximity between these terms (and simple words), expressed as semantic relations;
3. the visualization and validation of the resulting terms and relations through a collaborative online interface.

The paper is organized as follows: Section 2 introduces some of the corpora we used for our experiments. Section 3 provides some background information about the way the corpora are linguistically processed, in particular to get syntactic data following the PASSAGE annotation scheme. These data are then used for extracting multi-word terms (Section 4) and for identifying semantically close terms (Section 5). Finally, the main aspects of the visualization and validation interface are sketched in Section 6.

The corpora

As illustrated by the non-exhaustive list of Table 1, we have run our experiments on a large set of French corpora, covering various styles and domains, and with sizes ranging from around one million words to several hundred million words. The top corpora were prepared in view of the PASSAGE evaluation campaign and constitute the CPL (Corpus Passage Long) corpus. These corpora have been completed with AFP news to form the ALL collection. The ALL collection covers various styles (journalistic, encyclopedic, ...) but is not domain specific. The idea is to observe what can be extracted from large non-thematic corpora. On the other hand, the 4 bottom corpora are homogeneous in terms of style and fall in the law domain, covering several more specific subfields (fiscal law, social law, business law, and civil law). These law corpora have been provided by a commercial publisher that wishes to complete and maintain accurate terminology for indexing and querying its collections.

Linguistic processing

All corpora have been processed by the Alpage processing chain [1], with SXPIPE (Sagot and Boullier, 2008) used for segmentation and named entity recognition (NER), and FRMG used for parsing.
The parser is based on a wide-coverage French Tree Adjoining Grammar (Villemonte de la Clergerie, 2010). The native dependency output of FRMG is converted to the EASy/PASSAGE annotation schema (Vilnat et al., 2010), designed during the two first parsing French evaluation campaigns (EASy and PASSAGE). The PASSAGE scheme is based on a set of 6 kinds of non-recursive chunks and a set of 14 kinds of relations, as described by Table 2. The relations can connect either chunks or forms, and all of them are binary, except for the COORD relations. Figure 1 shows an example of an English sentence annotated following the PASSAGE scheme. Figure 2 and Figure 3 provide some information about the performances of FRMG on chunks and relations. They have been calculated in 2011 (around the date of our first experiments on the ALL corpus) and, more recently, at the end of 2013, on the EasyDev corpus, a small development set of around 4k sentences covering various styles (journalistic, literacy, medical, mail, speech, ...). The improvements between 2011 and 2013 come from a better coverage of the FRMG grammar and from the use of training techniques on a treebank for better disambiguation (Villemonte De La Clergerie, 2013).

From the syntactic results, we collect and count recurring elements of information using a MapReduce algorithm (Dean and Ghemawat, 2004). These elements are then used by the knowledge acquisition scripts presented in the following two sections.

Terminology extraction

The first acquisition task concerns the extraction of terms. Terminology extraction still raises some problems but the main ideas are nowadays relatively well identified (Pazienza et al., 2005), in particular for terms corresponding to multi-word expressions. In our experiments, we have focused our work on the extraction of nominal multi-word terms that are essentially instances of the pattern (GN)(GR*GA|GP|PV|NV)+ over PASSAGE chunks. This pattern captures nominal chunks [GN] modified by adjectival chunks [GA], prepositional chunks [GP] possibly introducing verbs [PV] or participial verbs [NV], and possibly with some adverbs [GR]. The chunks composing a candidate term must also be syntactically connected (essentially through noun-modifier MOD-N relations). Table 3 shows some instances of the pattern for a few terms found in the ALL corpus.

The candidate terms are then ranked along several criteria, including standard ones such as frequency and internal cohesion (computed via a variation of point-wise mutual information), and more original ones such as autonomy and diversity of contexts. Autonomy exploits the syntactic dependencies to check that a significant amount of the occurrences of the candidate corresponds to "active" syntactic roles (such as subject or object, for instance), and that not all the occurrences are modified (for instance by prepositional chunks). The motivation for the autonomy criterion is to avoid the selection of candidates which are essentially fragments of larger expressions or which play, for instance, the role of adverbial locutions or complex prepositions. Favoring diversity, we penalize candidates that tend to occur in very similar sentences (or sentence fragments) and are more representative of collocations. [2] Variants are then grouped as a function of their underlying lemmas, and some candidates are rejected if their variability is too high, for instance when they include a NUMBER, DATE, or LOCATION lemma that gets instantiated by many different named entities [3].

With minimal filtering (to favor recall), we get around 100K terms on the ALL corpus and around 50K terms on the fiscal part of the law corpus (145M words). The terms are enriched with a set of randomly chosen illustrative sentences and statistical information. Figure 6 lists some of the terms extracted from the business law corpus, with a focus on président du conseil / Chairman of the Board. It may be noted that, for président du conseil, several variants of this term have been identified in the corpora, corresponding to several plurals (on chairman and board) and genders (chairman, chairwoman).
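To make the extraction pattern above concrete, the sketch below matches the chunk-tag sequence of a candidate against (GN)(GR*GA|GP|PV|NV)+ using an ordinary regular expression over one-letter-per-chunk symbols. The encoding is our own illustration; the real extractor also checks the syntactic connectedness of the chunks, which is not shown here.

```python
import re

# One symbol per PASSAGE chunk tag (our own encoding, for illustration only).
TAG_TO_SYM = {"GN": "N", "GR": "R", "GA": "A", "GP": "P", "PV": "V", "NV": "W"}
# (GN)(GR*GA|GP|PV|NV)+ rewritten over the one-letter symbols above.
PATTERN = re.compile(r"^N(?:R*A|P|V|W)+$")

def is_candidate_term(chunk_tags):
    """True if a sequence of PASSAGE chunk tags instantiates the nominal pattern."""
    try:
        symbols = "".join(TAG_TO_SYM[t] for t in chunk_tags)
    except KeyError:  # a chunk type outside the pattern's alphabet
        return False
    return bool(PATTERN.match(symbols))

# e.g. 'hockey sur glace' -> [GN, GP]; 'procréation médicalement assistée' -> [GN, GR, GA]
print(is_candidate_term(["GN", "GP"]))        # True
print(is_candidate_term(["GN", "GR", "GA"]))  # True
print(is_candidate_term(["GR", "GN"]))        # False
```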
We are fully aware that terms do not necessarily correspond to multi-word expressions, but we expect the other simple-word terms to be captured when looking for semantic similarity (Section 5). However, we still need to set up a filtering of the terms to favor domain-specific ones, possibly by contrasting their frequencies with frequencies computed on a reference corpus.

Discovering semantic similarities

Most works on semantic clustering (Cimiano et al., 2004; Pantel, 2003) have been inspired by Harris' distributional hypothesis (Harris, 1968), which states that words close semantically tend to occur in similar contexts. Several kinds of contexts have been considered, including bags of words, sliding windows, or, in our case, syntactic contexts derived from syntactic dependencies. For instance, for a CPL-V (complement-verb) dependency triple like (to sit, on, chair), one may associate the syntactic context (to sit on •) with the word chair and, in a dual way, the context (• on chair) with the word to sit. A weighted vector of such contexts may be attached to each word, with the weights reflecting the frequency and importance of the context (measured via mutual information). Table 4 lists the number of occurrences for a few dependency triples involving chaise (chair). We observe a few actions related to the use of a chair (assoir sur chaise and se assoir sur chaise [to sit on a chair], tomber sur chaise [to fall on a chair], or prendre une chaise [to take a chair]), but also many entries corresponding to multi-word terms built upon chair (chaise musical [musical chair] or chaise électrique [electric chair]). Obviously, not all high-frequency dependencies are pertinent for capturing the meaning of a chair. We can also observe the high frequency of the coordination between chair and table. For dependencies involving a preposition, we keep triples with the preposition used as a relation label. Moreover, we refine the relation label with the suffix = when the preposition introduces a noun with no determiner (like chaise à porteur).

To counter-balance attachment ambiguity for prepositional groups, we decided to add extra dependencies for potential attachments that were discarded but could have been chosen: for instance, in an expression like tremblement de terre de magnitude 5 (earthquake of magnitude 5), maybe the attachment of magnitude was done on terre, giving the triple (terre, de, magnitude), but we also add the potential attachment (tremblement, de, magnitude). A similar treatment is done to attach potential dependency triples for the occurrences of candidate (multi-word) terms that may be retrieved in the corpus. In order to reflect deeper semantic relationships, some of the PASSAGE dependencies are rewritten, for instance for passive verbs, with the surface subjects transformed into deep objects, or for relating a verb attribute to the subject (rather than to the verb). The relations involving a coordination conjunction are distributed along the coordinated elements.

Given context vectors, a wide spectrum of unsupervised learning techniques have been proposed to regroup words, generally into hard clusters (each word belonging to at most one cluster). We favor the search of relations between words rather than hard clustering, believing that the richness of words (polysemy and sense shift) makes it difficult to capture meaning through strictly delimited clusters. Our learning algorithm is derived from Markov clustering (van Dongen, 2000), based on the search of nodes that are connected through a dense set of short paths. Our main contribution is to switch to a bipartite graph connecting (simple or multi-word) terms to contexts, as shown in Figure 4, with wc_{i,a} (resp. cw_{a,i}) denoting the weight of context c_a for word w_i (resp. of word w_i for context c_a).
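Before the weighting of equation (1) below can be applied, the raw counts u_{ai} have to be gathered from the dependency triples. A minimal sketch of that bookkeeping, including the dual word-for-context observations, might look as follows; the data layout and function names are our own illustration.

```python
from collections import defaultdict

def collect_contexts(triples):
    """Build raw co-occurrence counts u[context][word] from dependency triples.

    triples: iterable of (governor, relation, governee, count), e.g.
    ("asseoir_v", "sur", "chaise_nc", 227).  Each triple yields two dual
    observations: a context for the governee and a context for the governor.
    """
    u = defaultdict(lambda: defaultdict(int))
    for gov, rel, dep, n in triples:
        u[(gov, rel, "*")][dep] += n   # context 'to sit on .' for 'chair'
        u[("*", rel, dep)][gov] += n   # context '. on chair' for 'to sit'
    return u

u = collect_contexts([("asseoir_v", "sur", "chaise_nc", 227),
                      ("tomber_v", "sur", "chaise_nc", 103)])
print(u[("asseoir_v", "sur", "*")]["chaise_nc"])   # 227
```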
The weight wc_{i,a} of context c_a occurring u_{ai} times with word w_i is based on frequency and mutual information, and is given by the following equation, with a similar formulation for the weight cw_{a,i} of w_i relative to c_a:

$$wc_{i,a} = \frac{\ln(u_{ai}) \cdot \eta_a}{\sum_b \ln(u_{bi}) \cdot \eta_b} \quad \text{with} \quad \eta_a = \ln \frac{\#\,\text{distinct words}}{|\{w_j \mid u_{aj} > 0\}|} \qquad (1)$$

The motivation for a bipartite graph is that terms and syntactic contexts play dual roles: terms sharing similar contexts are semantically close and, conversely, contexts sharing similar terms are also semantically close. Following (van Dongen, 2000), the search for dense sets of short paths in the graph may be captured by the following set of mutually recursive equations, involving an inflation coefficient α > 1 that reinforces strong paths over weak ones:

$$ww_{i,j} = \frac{1}{Z_i} \Big( \sum_{a,b} wc_{i,a}\, cc_{a,b}\, wc_{j,b} \Big)^{\alpha}, \qquad cc_{a,b} = \frac{1}{Z_a} \Big( \sum_{i,j} cw_{a,i}\, ww_{i,j}\, cw_{b,j} \Big)^{\alpha} \qquad (2)$$

where Z_i and Z_a denote normalization factors given by

$$Z_i = \sum_j \Big( \sum_{a,b} wc_{i,a}\, cc_{a,b}\, wc_{j,b} \Big)^{\alpha}, \qquad Z_a = \sum_b \Big( \sum_{i,j} cw_{a,i}\, ww_{i,j}\, cw_{b,j} \Big)^{\alpha} \qquad (3)$$

These equations may be reformulated with matrices, using an inflation operator Γ_α (with normalization), as follows, with the similarity matrices W = (ww_{i,j})_{i,j} and C = (cc_{a,b})_{a,b}, and the weight matrices F = (wc_{i,a})_{i,a} and G = (cw_{a,i})_{a,i}:

$$W = \Gamma_\alpha(F^t C F), \qquad C = \Gamma_\alpha(G^t W G) \qquad (4)$$

The formulation involves mutually recursive equations which require the search of a fixpoint, whose solution is approached through an iterative algorithm, starting from initial similarity matrices W^{(0)} and C^{(0)}. The base algorithm is extended by exploiting transfer matrices used to transfer the similarities found between words to the level of contexts, and conversely. Indeed, the contexts are built upon words (for instance, to sit on • is built upon to sit by combining it with the relation on), and one may expect contexts built upon similar words (and the same relation r) to be themselves similar. We therefore introduce a transfer coefficient β (set to 0.2 by default) and transfer matrices T_r = (τ_{ia})_{i,a} for each relation r (such as object), with τ_{ia} = 1 if c_a = r.w_i and 0 otherwise. Equations (4) are then modified as follows:

$$W = \Gamma_\alpha\Big(F^t C F + \sum_r \beta\, T_r^t\, C\, T_r\Big), \qquad C = \Gamma_\alpha\Big(G^t W G + \sum_r \beta\, T_r\, W\, T_r^t\Big) \qquad (5)$$
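A compact NumPy rendering of the fixpoint computation of equations (4)-(5) is sketched below. We fix the dimension conventions ourselves (F and the T_r as word-by-context matrices, G as context-by-word), since the paper leaves them implicit, and we stop after a fixed number of iterations instead of testing convergence; the bonus/malus refinement of equation (6), described next, is omitted.

```python
import numpy as np

def inflate(m, alpha):
    """Gamma_alpha: raise entries to the power alpha, then row-normalize."""
    m = np.power(np.clip(m, 0.0, None), alpha)
    norm = m.sum(axis=1, keepdims=True)
    norm[norm == 0.0] = 1.0
    return m / norm

def similarity_fixpoint(F, G, transfers, alpha=1.5, beta=0.2, n_iter=10):
    """F: (n_words, n_contexts) weights wc; G: (n_contexts, n_words) weights cw;
    transfers: list of (n_words, n_contexts) 0/1 matrices T_r, one per relation."""
    n_words, n_contexts = F.shape
    W = np.eye(n_words)       # initial word-word similarities W(0)
    C = np.eye(n_contexts)    # initial context-context similarities C(0)
    for _ in range(n_iter):
        W = inflate(F @ C @ F.T + beta * sum(T @ C @ T.T for T in transfers),
                    alpha)
        C = inflate(G @ W @ G.T + beta * sum(T.T @ W @ T for T in transfers),
                    alpha)
    return W, C
```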
It may be noted that a term may be related to several other terms through (completely or partially) distinct sets of pertinent contexts, illustrating its polysemy or sense shifts. For instance, from the ALL corpus, we found that the words char (in the sense of carriage) was close of charrette (cart) and chariot (trolley) because of contexts like atteler (to harness) or promener en X (X ride) while char (in the sense of tank) was close of tank because of contexts like • de combat ( • of combat ) and régiment de • ( regiment of • ). These contexts are also useful for an human to assess the validity of the semantic relations. On the ALL corpus (without injecting the extracted terms), the algorithm returned a set of 51,980 pairs (w i , w j ), involving 19,960 words w i (including a large number of named entities). By symmetrizing these non-necessarily symmetric pairs, we obtain a large non-oriented network with 47,065 edges. For the busyness corpus (with extracted terms included), we get a non-oriented network with 10,223 nodes and 13,584 edges. Figure 5 shows a tiny part of the ALL network, centered on jambe (leg) and displayed with Tulip software 4 (Auber, 2003). We clearly observe a bush-like structure, with a set of bodypart terms strongly interconnected that form a good cluster, more precisely related to bony and muscular parts. Many other such bush structures were actually identified, which led us to design a small algorithm to extract hard clusters from them, with some of the around 4000 extracted clusters listed below: 79: (a cluster of various kinds of dogs) sulky malinois foxterrier setter cocker colley chiot fox labrador ratier griffon caniche teckelépagneul 80: (a cluster of various kinds of soldiers and military groups) arrière-garde canonnier cavalerie carabinier tirailleur hussard panzer voltigeur blindé grenadier cuirassier avant-garde zouave lancier 83: (a cluster of various kinds of diseases) pneumonie paludisme diphtérie pneumopathie variole dysenterie malaria botulisme poliomyélite septicémie varicelle polio rougeole méningite Visualization and collaborative validation Tulip already offers a nice way to view and navigate in the semantic network. However, it is not always adequate for exploring dense areas and is not designed to validate or invalidate relations. Furthermore, it is also not possible to access the explanations motivating a relation, even if they are provided by the acquisition algorithm. A first step was to complete the subjective intuition provided by Tulip by more objective global evaluations using wordnet-like resources for French as reference resources, for instance by answering automatically and randomly built TOEFL tests (Turney, 2002). Such a test is given by a list of questions, each question specifying a candidate term and a list of 4 potential answer terms, with only one being really close semantically from the candidate term. The success rate when answering randomly is therefore of 25%. We build the tests using two French wordnets, namely French EuroWordnet (Jacquin et al., 2007), and Wolf (Sagot and Fišer, 2008). For each question, the right answer term is selected (randomly) in the same synset than the candidate term. while the other terms are selected (randomly) in other synsets. The results are presented in Table 5. 
These evaluations essentially provide global information about the recall and precision of the extracted network, and, although the precision may be good (especially for nouns, with 94% of good answers, but less for adverbs, with only 49%), we mostly observe a weak recall (a low 35% for nouns), as shown in Table 6. We also observed that many relations present in the network but not present in the reference resources may be considered pertinent by a human, and it may be noted that comparing two wordnets together (such as WOLF with French EuroWordNet) shows that even these reference resources do not provide the same information (with a success rate of 64.5%).

Therefore, we finally opted for the development of an online interface [5] for viewing, navigating, and editing the semantic networks and the candidate terms extracted by our acquisition algorithms. Because of the large size of the extracted resources, we also believe that a collaborative approach is needed, hence motivating the choice of an online interface. The implementation was done under the LIBELLEX platform, in the context of a collaboration with Lingua & Machina, the company developing this platform, primarily for the maintenance of multilingual resources for translation.

Figure 6 shows some elements of visualization provided by the interface via several tiles. One of the tiles is used to list, query, edit, and validate the terms. For a given term, another tile provides access to illustrative sentences and to statistical explanations. However, the most useful tile (in our opinion) displays a small local graph centered on some selected term (president of the board in Figure 6), with the display of the semantic relations but also of structural relations derived from the internal structure of the multi-word terms (such as term expansion or term embedding). Only neighbors up to distance 2 are displayed, for clarity, using a force-directed algorithm implemented with the JavaScript library d3.js. The algorithm tends to nicely separate the clusters (with attractive forces inside the clusters and repulsive ones outside the clusters). In Figure 6, the terms close to president of the board include terms related to function or statute, like vice-président (vice-president), directeur (director), administrateur (administrator), or related to membership, like membres du conseil or membre du directoire (board members). However, even if the relations for this example are interesting, it seems necessary to slightly re-organize them and to add a few missing ones, which can be done through the interface. A single glimpse is often enough to quickly detect anomalies, and browsing may be done by simply clicking on a node to select it and recenter the graph on it. However, when one needs to understand more precisely why several terms are close, it is possible to get more precise information by selecting the associated nodes and opening a new tile that displays a synthetic matrix listing the most pertinent contexts (and their strength) behind the relations for these nodes, as illustrated by Figure 7. These matrices are generally very useful for understanding why terms have been grouped together and are completed by illustrative sentences for the terms and contexts. It is worthwhile to mention that this functionality has proven its usefulness on several occasions where the first intuition of a human was to wrongly discard a relation.
Interestingly, for the terms listed in Figure 7, corresponding to a few body parts (ankle, toe, wrist), most of the relating contexts correspond to damages (fracture, sprain, ...) and pain. Looking at the illustrative sentences, we see that the contexts were actually extracted from journalistic AFP news about sport, which shows how the proximity between terms is not necessarily intrinsic but also related to some point of view.

Figure 6: Visualization with LIBELLEX (fragment of the business law subcorpus), centered on president of the board

Conclusion

We propose a complete set of components for the creation, visualization, and collaborative editing of lexical semantic resources. The linguistic processing chain and the acquisition modules could be easily replaced by similar modules, and the most crucial component is maybe finally the online interface. In particular, in addition to the extracted terms, the law publisher has also inserted (through merging) a list of potential terms that they have accumulated over the years and that they also wanted to validate (totalling 107K terms for the fiscal part, for instance, to be contrasted with the 50K extracted terms). They routinely use the interface for validating the terms, with around 45K terms accepted for the fiscal part (out of the extracted and added terms). They now plan to explore the validation of the relations in a second stage. Their feedback was helpful to improve the design and the functionalities of the interface, and we also expect to exploit the validated data to improve our acquisition algorithms, in particular through the training of a reranker for the terms.

It is also interesting to mention the strong potential of the interface for many similar kinds of lexical semantic resources. In particular, we have loaded WOLF (Sagot and Fišer, 2008), a freely available version of a French Wordnet, with several kinds of lexical relations between synsets. We have also noted, several times and for various audiences (including children), the impact of the graph view for presenting and navigating in rich lexical networks. Our ambition is now to largely open the service for experiments and feedback with various kinds of lexical semantic resources. Our linguistic processing chain and the acquisition tools are freely available (on the INRIA GForge), but we also plan to offer an online processing service for small corpora (up to 1 million words), coupled with the use of the interface.

Figure 1: An example of an English sentence annotated following the PASSAGE scheme
Figure 2: F-measures for PASSAGE chunks (on EasyDev)
Figure 3: F-measures for PASSAGE relations (on EasyDev)
Figure 4: Term-context bipartite graph
Figure 7: A term-context matrix with illustrative sentences

Corpus           #Msent.  #Mwords  Description
Wikipedia (fr)    18.0     178.9   encyclopedic pages
Wikisource (fr)    4.4      64.0   literacy texts
EstRepublicain    10.5     144.9   journalistic
JRC                3.5      66.5   European directives
EP                 1.6      41.5   parliamentary debates
Total CPL         38.0     495.8   all above
AFP               14.0     248.3   news
Total ALL         52.0     744.2   CPL+AFP
fiscal             7.2     145.2   law
social             6.8     127.5   law
civil              2.6      40.9   law
business           7.2     133.8   law
Table 1: Some of the corpora used for the experiments

Table 2: PASSAGE annotation scheme

procréation médicalement assistée   medically assisted procreation   [procréation/nc]GN [médicalement/adv]GR [assisté/adj]GA
implant chirurgical non actif       non-active surgical implant      [implant/nc]GN [chirurgical/adj]GA [non/adv]GR [actif/adj]GA
dioxyde de carbone                  carbon dioxide                   [dioxyde/nc]GN [de/prep carbone/nc]GP
hockey sur glace                    ice hockey                       [hockey/nc]GN [sur/prep glace/nc]GP
téléphone portable                  mobile phone                     [téléphone/nc]GN [portable/adj]GA
lait écrémé                         skimmed milk                     [lait/nc]GN [écrémer/v]NV
permis de conduire                  driving license                  [permis/nc]GN [de/prep conduire/v]PV
Table 3: Examples of terms with their chunk structure

governor      relation  governee        freq.
chaise nc     et        table nc        235
asseoir v     sur       chaise nc       227
chaise nc     modifier  long adj        168
chaise nc     de=       poste nc        115
tomber v      sur       chaise nc       103
chaise nc     modifier  musical adj     102
se asseoir v  sur       chaise nc       93
prendre v     object    chaise nc       87
chaise nc     modifier  électrique adj  82
chaise nc     modifier  vide adj        80
chaise nc     à=        porteur nc      80
dossier nc    de        chaise nc       78
avoir v       object    chaise nc       71
table nc      et        chaise nc       62
Table 4: A few syntactic dependencies involving chaise (chair)

Table 5: TOEFL evaluation

pos  #tests  %ok   %bad  %missing  %b/(b+f)
v    3,876   35.5  30.9  33.6      53.4
nc   1,078   33.5  2.1   64.4      94.0
adj  2,085   22.3  11.3  66.4      66.3
adv  1,533   36.9  41.9  21.7      46.8
Table 6: TOEFL tests by syntactic categories (on CPL)

[1] Freely available at https://www.rocq.inria.fr/alpage-wiki/tiki-index.php?page=alpc&bl=y.
[2] Favoring diversity is also a way to correct some problems related to duplicated or close sentences, a relatively frequent phenomenon in AFP news but also in the other corpora.
[3] But please note that we accept terms built on named entities.
[4] Tulip may be found at http://tulip.labri.fr/TulipDrupal/ and other examples of visualization of the ALL network with Tulip may be found online at http://alpage.inria.fr/~clerger/wnet/wnet.html.
[5] Accessible at http://alpage.inria.fr/Lbx with login guest and password guest, selecting for instance allsemnet under demo.
References

Auber, D. (2003). Tulip: A huge graph visualisation framework. In Mutzel, P. and Jünger, M., editors, Graph Drawing Softwares, Mathematics and Visualization, pages 105-126. Springer-Verlag.
Cimiano, P., Staab, S., and Hotho, A. (2004). Clustering ontologies from text. In Proceedings of LREC'04, pages 1721-1724.
Dean, J. and Ghemawat, S. (2004). MapReduce: Simplified data processing on large clusters. In OSDI'04: Sixth Symposium on Operating System Design and Implementation, San Francisco, CA, December.
Fellbaum, C., editor. (1998). WordNet: An Electronic Lexical Database. The MIT Press, Cambridge, MA; London, May.
Harris, Z. (1968). Mathematical Structures of Language. John Wiley & Sons, New York.
Jacquin, C., Desmontils, E., and Monceaux, L. (2007). French EuroWordNet lexical database improvements. In Proc. of CICLing'07, number 4394 in LNCS, Mexico City, Mexico.
Pantel, P. (2003). Clustering by Committee. Ph.D. dissertation, Department of Computing Science, University of Alberta, Canada.
Pazienza, M. T., Pennacchiotti, M., and Zanzotto, F. M. (2005). Terminology extraction: an analysis of linguistic and statistical approaches. In S. S., editor, Knowledge Mining, volume 185 of Studies in Fuzziness and Soft Computing. Springer Verlag.
Sagot, B. and Boullier, P. (2008). SxPipe 2 : architecture pour le traitement présyntaxique de corpus bruts. Traitement Automatique des Langues (T.A.L.), 49(2):155-188.
Sagot, B. and Fišer, D. (2008). Construction d'un wordnet libre du français à partir de ressources multilingues. In TALN 2008, Avignon, France.
Turney, P. D. (2002). Mining the web for synonyms: PMI-IR versus LSA on TOEFL. CoRR, cs.LG/0212033.
van Dongen, S. (2000). Graph Clustering by Flow Simulation. PhD thesis, University of Utrecht, May.
Villemonte de la Clergerie, É. (2010). Building factorized TAGs with meta-grammars. In TAG+10: The 10th International Conference on Tree Adjoining Grammars and Related Formalisms, pages 111-118, New Haven, CO.
Villemonte de la Clergerie, É. (2013). Improving a symbolic parser through partially supervised learning. In The 13th International Conference on Parsing Technologies (IWPT), Nara, Japan.
Vilnat, A., Paroubek, P., Villemonte de la Clergerie, É., Francopoulo, G., and Guénot, M.-L. (2010). PASSAGE syntactic representation: a minimal common ground for evaluation. In LREC, La Valletta.
14,979,669
An evaluation of different symbolic shallow parsing techniques
This paper presents an evaluation of four shallow parsers. The interest of each of these parsers led us to imagine a parameterized multiplexer for syntactic information based on the principle of merging the common boundaries of the outputs given by each of these programs. The question of evaluating the parsers as well as the multiplexer came to the foreground with the problem of not having reference corpora. We attempt here to demonstrate the interest of observing the 'common boundaries' produced by different parsers as good indices for the evaluation of these algorithms. Such an evaluation is proposed and tested with a set of two experiments.
[]
An evaluation of different symbolic shallow parsing techniques

Tristan Vanrullen (tristan.vanrullen@lpl.univ-aix.fr), Laboratoire Parole et Langage, UMR 6057 CNRS, Université de Provence, 29 Av. Robert Schuman, 13621 Aix-en-Provence, France
Philippe Blache (blache@lpl.univ-aix.fr), Laboratoire Parole et Langage, UMR 6057 CNRS, Université de Provence, 29 Av. Robert Schuman, 13621 Aix-en-Provence, France

This paper presents an evaluation of four shallow parsers. The interest of each of these parsers led us to imagine a parameterized multiplexer for syntactic information based on the principle of merging the common boundaries of the outputs given by each of these programs. The question of evaluating the parsers as well as the multiplexer came to the foreground with the problem of not having reference corpora. We attempt here to demonstrate the interest of observing the 'common boundaries' produced by different parsers as good indices for the evaluation of these algorithms. Such an evaluation is proposed and tested with a set of two experiments.

Introduction

Why use different parsers

Shallow parsing usually relies on statistical techniques. In the case of symbolic shallow parsers, the method consists in using a reduced set of pre-compiled syntactic information. This information is generally at a very low level and specified in terms of filtering (e.g. constraint grammars). In such techniques, the linguistic information is heavily dependent on the parsing process. One consequence is that such systems are neither modular nor reusable. There is another important question to be answered: what is the goal of shallow parsing? The classical answer is: an efficient and robust bracketing technique. Robustness is the most important aspect, and shallow parsers must address this point, as well as efficiency: large and unrestricted corpora have to be treated. But the answer is not so obvious for the last point: bracketing. We think that this constitutes only one aspect of the kind of information that can be built by shallow parsers: other kinds of information, such as dependency, can also, under certain conditions, be built. Even more generally, we could imagine an integrated shallow parser generating syntactic (bracketing), semantic (dependency) and prosodic (intonative contours) information. Such a goal absolutely requires the parser to rely on high-level linguistic resources. The question is then: is it possible to develop an efficient and robust parsing strategy capable of integrating (if necessary) these different aspects?

We propose in this perspective a strategy relying on a constraint-based representation. In such an approach, all linguistic information is represented by means of constraints. All constraints being at the same level, it is then possible to verify only a subset of constraints. The idea consists in choosing the granularity of the parser by modifying the subset of constraints to be verified: there is a proportionality relation between the dimension of the set of constraints and the level of the parse. We can choose a very superficial granularity by verifying only one kind of constraint (for example the ones describing linearity) or refine the parse a little by introducing other constraints. The main interest is that (1) the linguistic resource is the same in all cases (a set of constraints) and (2) the same system can be used for different granularities (i.e. different applications).
Such a goal does not mean that efficient and robust parsing no longer requires specific techniques. But we can make some proposals in this direction, for example implementing a deterministic strategy (ambiguity being, in the end, the main problem for parsing).

Improving parsers improves prosodic information for text-to-speech applications

Several domains in language technology can be improved by means of syntactic information. This is in particular the case for text-to-speech systems, in which intonation generation can be driven by boundary indications coming from shallow parsers (cf. [Allen], [Abney91], [Liberman92] or [DiCristo98]). However, while such systems have a larger scope than deep analysis techniques (they are in particular able to treat unrestricted texts, as opposed to sublanguages), they also provide only poor linguistic information. The techniques generally used allow a simple chunking, useful for some levels of speech synthesis, but too poor to give an actual account of more complex prosodic phenomena.

Several algorithms with a same goal

Some recent works (cf. [Hirshberg01]) showed that a finer analysis can significantly improve prosodic quality. We propose in this paper a technique relying on the use of several symbolic shallow parsers (or, more precisely, deterministic parsers). Its particularity lies in the fact that it makes use of a linguistic formalism instead of traditional stochastic information. Our goal is to improve the quantity and quality of information likely to support intonation generation by means of surface analyzers. In this perspective, while preserving robustness and efficiency of the processing, we based our work on a linguistic formalism called Property Grammars (cf. [Blache01b]), whose main interest comes from the fact that any kind of input, even ill-formed, can be characterized with syntactic properties. Three shallow parsers based on this formalism are presented and compared in this work. A fourth one, relying on a simple chunking approach, is used as a reference.

Evaluation as a necessary crossroads

This paper addresses in particular the question of the interest of cascading several parsers in order to improve the result. Moreover, the evaluation problem itself is part of the work: due to the lack of a bracketed reference corpus for French, we present a 'subjective' evaluation (though automated) of these tools. Two experiments are described in order to test the behavior of these parsers.

An overview of Property Grammars

We propose to use a constraint-based formalism allowing the representation of all kinds of syntactic information by means of constraints. This formalism, called Property Grammars (cf. [Blache01b]), makes use of different types of constraints. The idea exposed above consists then in varying the granularity level by choosing the type of constraints to be verified. Let us briefly present this formalism. The representation of syntactic knowledge requires various types of constraints, or properties, each one corresponding to a specific kind of information. There is one main difference from the usual presentation of Property Grammars, in which constituency information is not directly represented: in the following, and for efficiency reasons, we add this new type of property even if redundant. The following list presents these properties:

• Constituency (noted Const): specifies the maximal set of categories that can appear in a category. Example: Const(NP) = {Det, AP, N, PP, Sup, Pro}
• Obligation (noted Oblig): specifies the possible heads. One of these categories (and only one) has to be realized. Example: Head(NP) = {N, Pro}
• Uniqueness (noted Uniq): the set of categories that cannot be repeated in a phrase. Example: Uniq(NP) = {Det, N, AP, PP, Sup, Pro}
• Requirement (noted ⇒): cooccurrence between sets of categories. Example: N[com] ⇒ Det
• Exclusion (noted ⊗): cooccurrence restriction between sets of categories. Example: AP ⊗ Sup (in an NP, a superlative cannot cooccur with an AP)
• Linearity (noted <): linear precedence constraints.
• Dependency (noted →): dependency relations between categories.

One of the original aspects of this approach is that a linguistic description is not presented in terms of grammaticality: parsing an input comes down to verifying the set of constraints.
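To make the granularity idea concrete, here is a minimal sketch (not the authors' implementation) of checking a selectable subset of property types against the category sequence of a candidate NP; the grammar fragment and category names are illustrative only.

```python
# Minimal sketch: checking a selectable subset of Property Grammar
# constraints over the category sequence of a candidate NP. The grammar
# fragment below is illustrative, not the authors' actual grammar.
NP_PROPERTIES = {
    "uniqueness": {"Det", "N"},                  # categories allowed at most once
    "linearity": [("Det", "N"), ("Det", "AP")],  # (a, b): a must precede b
    "obligation": {"N", "Pro"},                  # at least one head category
}

def check_np(categories, active=("uniqueness", "linearity", "obligation")):
    """Verify only the active property types; return the violated constraints."""
    violations = []
    if "uniqueness" in active:
        for cat in sorted(NP_PROPERTIES["uniqueness"]):
            if categories.count(cat) > 1:
                violations.append(("uniqueness", cat))
    if "linearity" in active:
        for a, b in NP_PROPERTIES["linearity"]:
            if a in categories and b in categories and categories.index(a) > categories.index(b):
                violations.append(("linearity", (a, b)))
    if "obligation" in active:
        if not NP_PROPERTIES["obligation"] & set(categories):
            violations.append(("obligation", "missing head"))
    return violations

# Shallow granularity: check linearity only.
print(check_np(["N", "Det"], active=("linearity",)))  # [('linearity', ('Det', 'N'))]
# Deeper granularity: check all properties.
print(check_np(["Det", "Det", "AP"]))  # uniqueness (Det) and obligation violations
```

Varying the `active` subset is exactly what lets the same grammar serve both a superficial and a fine-grained parse.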
It is then possible to characterize each component of this input with the set of constraints that are satisfied (plus, possibly, the set of constraints that are violated). The core mechanism being constraint satisfaction, it is possible to verify only a subpart of the entire constraint system (in other words, the grammar).

Shallow, deep and granular parsers

A low-level shallow parser

The first technique described in the paper is inspired by Liberman & Church's chink/chunk (1991) and by Di Cristo's chink/chunk chunker (1998). Let us call this algorithm A1: the result is a segmentation of the text into chunks, according to a finite-state automaton based on the concept of function words, which play the role of boundaries between blocks. An improvement of the concept of chunk is proposed, using conjunctions as neutralizers of chunks under construction. For M sentences, each sentence consisting of Nm words, its complexity is of order M*Nm*k (k < 10), that is to say, linear. Figure 1 shows the output of this parser for a sentence taken from the French newspaper 'Le Monde'. A minimal sketch of this chunking strategy is given at the end of this section.

The three other techniques described in the remainder of the paper are based on a compiled subset of Property Grammars, briefly exposed below (see [Blache01a] for implementation aspects). All three build grammatically annotated blocks by traversing a sentence deterministically. During the process, blocks are opened (sometimes recursively) in a stack. The tagsets used by each of these algorithms are rather different (depending on the granularity of the parser), which implies many differences between their results. These algorithms use different heuristics too. For the first two, opening and closing chunks depends on the precompiled grammar; for the last, the entire set of properties of the Property Grammars is checked for each word.

A compiled subset of properties

In the second algorithm, A2, a grammar based on left and right potential corners, and potential constituents of chunks, is generated with a tool compiling the constituency, linear precedence, requirement and exclusion properties. In the worst case, for M sentences, each sentence consisting of Nw words, and for a set of C precompiled categories, its complexity is M*C*(Nw² + Nw)*constant, that is to say, polynomial. Figures 2, 3 and 4 (not reproduced here) give the outputs of algorithms A2, A3 and A4 for the same sentence as in Figure 1.

The whole set of properties

In A3, the parsing strategy relies on left corners, but verifies all the properties for each chunk. Finally, the last parser, A4, proposes a deterministic approach relying on the entire set of constraints proposed in a Property Grammar. Their complexity is still polynomial, as discussed in a paper not yet published.
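As a rough illustration of the chink/chunk idea behind A1 (not the actual parser), the following sketch segments a POS-tagged sentence by treating function words as chunk openers; the tagset and the example sentence are assumed for the illustration.

```python
# Illustrative chink/chunk segmentation in the spirit of A1: a function
# word opens a new chunk whenever the previous token was a content word.
FUNCTION_TAGS = {"DET", "PREP", "CONJ", "PRO"}  # assumed tagset

def chunk(tagged_words):
    chunks, current, prev_was_content = [], [], False
    for word, tag in tagged_words:
        is_function = tag in FUNCTION_TAGS
        if is_function and prev_was_content and current:
            chunks.append(current)  # close the chunk at the boundary
            current = []
        current.append(word)
        prev_was_content = not is_function
    if current:
        chunks.append(current)
    return chunks

sentence = [("le", "DET"), ("gouvernement", "N"), ("de", "PREP"),
            ("la", "DET"), ("France", "N"), ("annonce", "V"),
            ("une", "DET"), ("réforme", "N")]
print(chunk(sentence))
# [['le', 'gouvernement'], ['de', 'la', 'France', 'annonce'], ['une', 'réforme']]
```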
How to evaluate parsers empirically without reference corpora

A brief overview of the problem

The question of evaluating parsers (even shallow ones) is a problem in itself. Unlike POS tagging, many aspects can vary from one system to another, including the output itself. Before presenting our systems more precisely, we would like to give some general remarks about evaluating parsers. Generally speaking, evaluating a system consists in comparing, for a given input, its output with a standardized reference output. In the case of parsing, the reference is a treebank, and the comparison comes down to comparing the respective bracketings. This means, first, the availability of a treebank (such a resource only exists for a few languages). This also means that the parser has to build the same kind of information as in the reference corpus, which can also be problematic. First, bracketing is not totally theory-free. The second problem is that such a resource usually indicates only one solution. Finally, as explained above, bracketing is not the only kind of information that we would like to evaluate.

Moreover, it seems to us interesting not to limit an evaluation to the comparison of different outputs. It is also necessary, in order to interpret such a comparison, to give some indications on the resources and the techniques involved in the system. For example, it is important to have indications on:

• the lexical coverage: the number of entries; the representations (lexical features)
• the syntactic coverage: the number of categories; the different syntactic phenomena
• the parsing strategy: robustness; efficiency

Our contribution in this paper lies in the possibility of extracting some evaluation information from a comparison technique. In other words, we show that comparing different parsers, provided that the method is systematic, allows in some cases to give some elements of evaluation.

Evaluating and/or multiplexing

Figure 5: To evaluate and/or multiplex parsers' outputs.

A multiplexer for bracketed texts

The idea of retrieving the same boundaries within texts bracketed by different parsers led us to design a program able to merge the outputs of these parsers in a parameterized way. The goal is to keep the best information given by all of them and to let the worst be lost (see Figure 5). This program has to deal with sets of borders, which is why its parameters are of two kinds:

• set operators: union, intersection, complement
• weights: to balance the respective results of each parser for each syntactic category, and to avoid the errors common to the parsers

With such a program, we can exclude the worst and least significant borders and keep the best ones.

An evaluator for the multiplexer as well as for each parser

The parameters needed by this program cannot be found without a good evaluation of the output of each parser. These two needs are so closely related that we cannot distinguish them, except as an empirical step going from the parameter setting to the evaluation, and then, retroactively, from the evaluation back to the parameter setting. Of course, even if all the preceding steps are automatic, the last one is an expert's work: counting on the effects of the evaluator, all that remains is to check the relevance of its parameters. In other words:

• the multiplexer program does the main part of the evaluation work by distinguishing the common borders from the less significant or more idiosyncratic ones: it informs us about the importance of each parser relative to the others;
• human feedback is still needed to improve each parser's outputs and the parameters of the multiplexer.

A sketch of such a boundary multiplexer is given below.
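The sketch below shows one way such a multiplexer could combine boundary sets under per-parser weights and a threshold that emulates the set operators; the weights, thresholds and boundary positions are illustrative, not the values used in the experiments.

```python
# Illustrative boundary multiplexer: each parser contributes a set of
# boundary positions; a boundary is kept when its weighted support
# reaches a threshold. Threshold = sum of weights behaves like
# intersection; threshold = smallest weight behaves like union.
def multiplex(boundary_sets, weights, threshold):
    support = {}
    for borders, w in zip(boundary_sets, weights):
        for b in borders:
            support[b] = support.get(b, 0.0) + w
    return {b for b, s in support.items() if s >= threshold}

a1 = {2, 5, 9}     # boundary word indices proposed by parser A1
a2 = {2, 5, 7, 9}  # ... by parser A2
a4 = {2, 7, 9}     # ... by parser A4

# Equal weights; keep only boundaries proposed by all parsers (intersection):
print(multiplex([a1, a2, a4], [1.0, 1.0, 1.0], threshold=3.0))  # {2, 9}
# Keep boundaries supported by at least two parsers (weighted majority):
print(multiplex([a1, a2, a4], [1.0, 1.0, 1.0], threshold=2.0))  # {2, 5, 7, 9}
```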
Experiments

The evaluation presented in this paper relies on two experiments. For each one, a tagged corpus of 13,236 French sentences (from the CLIF project, see http://www.talana.linguist.jussieu.fr) was used as input. Two kinds of tagging of the lexical categories were used for these sentences: a manual tagging and an automatic one (realized with the French version of WinBrill). The main objective of this experiment is to evaluate the robustness and efficiency of the algorithms on arbitrary, automatically tagged sentences. To see better what can be found by such a program, we only used as parameters the intersection set operator and the same weight for each parser's output. Further studies should refine them.

First experiment

The first experiment aims at comparing block boundaries, according to the algorithm and the tagging. To do this, we carry out a massive simplification of the blocks generated by programs A2, A3 and A4, in order to preserve only boundaries. Then we determine common borders, which constitutes a simple way of testing them without a reference corpus for these French sentences.

Figure 6: Is text tagging disturbing parsing? Two taggings times four parsers gives eight outputs, hence 64 pairwise evaluations.

With a text tagged two times, we get eight outputs with the four parsers. The data to evaluate (64 comparison files) were very large. Because this experiment only aims at proving the good performance of our parsers on both human-tagged and automatically tagged texts, we only expose here the results of comparing A1 with A2. Figure 6 shows the experiment procedure; Figures 7 and 8 show its results.

Results

The results show:
• the difference in behaviour of the same algorithm with the two ways of tagging;
• the difference between two algorithms whose outputs are very different.

It comes out from this experiment that boundary differences obtained by the same parser for the two taggers are from 2 to 3%, which indicates that automatic POS tagging remains relevant for the notion of border compared to an expert tagging. This result is highlighted by another statistic given by the evaluator: the number of words per chunk (see Figure 7). A second conclusion is that the algorithms are sensitive to the tagging quality (i.e. they react to its variability): these results indicate that A1 loses up to 10% of its borders when the tagging is not human, and A2 loses up to 20% of its borders. A last conclusion is that the algorithms A1 and A2 really have from 47 to 82% common borders (according to what has already been said, these differences highlight the possibility of using these common borders in order to harmonize and guarantee the quality of the diverse outputs). This point is discussed in the second experiment.

Second experiment

The second experiment aims to compare the three approaches based on Property Grammars. Common boundaries are compared, category by category. This evaluation reveals several points of interest for each approach. Figures 9 to 14 show the different data resulting from the evaluation.

Figure 9: Statistics for the second experiment
Algorithm        A2     A3     A4
Chunks/sentence  15.03  19.04  18.97
Words/chunk      1.90   1.50   1.50

Figure 10: NP common borders
NP    A2     A3     A4
A2    100%   54%    45%
A3           100%   100%
A4                  100%

Figure 11: VP common borders
VP    A2     A3     A4
A2    100%   29%    27%
A3           100%   75%
A4                  100%

Figure 12: AP common borders
AP    A2     A3     A4
A2    100%   50%    43%
A3           100%   86%
A4                  100%

Figure 13: PP common borders
PP    A2     A3     A4
A2    100%   57%    49%
A3           100%   85%
A4                  100%

Figure 14: COORD common borders
COORD  A2    A3     A4
A2     -     0%     -
A3           100%   0%
A4                  -

Other results from the evaluation are as significant as those shown in Figures 10 to 13. The approaches A2 and A3 are rather different (48% common categories on average). That partly comes from differences between the tagsets (A3 uses categories that A2 does not know). More precisely, NP, AP, PP and VP have respectively up to 55%, 50%, 57% and 30% common borders. A3 is closer to A4, which seeks to satisfy all constraints (90% on average): NP, AP, PP and VP have respectively up to 100%, 85%, 86% and 71% common borders. These results imply two conclusions:

• Common borders inform us about the originality or the conformity of a parser in comparison to another.
• A simple knowledge of what each parser does will allow us to parameterize the set operations and the weights associated with each one.

For example, a guide to reading these tables resides in the fact that algorithm A4 gave the best results in comparison with an expert evaluation of 10 sentences. It follows that most of the common boundaries A4 shares with A2 and A3 carry great weight and should be merged with an 'intersection' set operator. Another piece of information resides in the fact that A3 knows categories that neither A2 nor A4 knows (see Figure 14). This implies that the COORD category has to be included in a multiplexing perspective with a weight of 100% and a 'union' set operator.
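For illustration, here is a minimal sketch (toy borders, not the project's evaluator) of the per-category common-border percentages reported in Figures 10-14.

```python
# Illustrative per-category common-border computation, in the spirit of
# Figures 10-14. Each parser output maps a category to the set of
# (start, end) borders it produced; the data below is toy data.
def common_borders(parser_a, parser_b, category):
    a = parser_a.get(category, set())
    b = parser_b.get(category, set())
    if not a:
        return None  # parser_a does not know this category
    return 100.0 * len(a & b) / len(a)

a2 = {"NP": {(0, 2), (4, 6), (8, 9)}, "VP": {(2, 4)}}
a3 = {"NP": {(0, 2), (4, 6)}, "VP": {(2, 4)}, "COORD": {(6, 8)}}

print(common_borders(a2, a3, "NP"))     # ~66.7: 2 of A2's 3 NP borders shared
print(common_borders(a2, a3, "COORD"))  # None: A2 does not know COORD
```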
Conclusions

Several conclusions can be extracted from these experiments. In particular, it is possible to compute, efficiently and in a deterministic way, the syntactic categories constituting a sentence. Moreover, it is possible to reduce errors by combining several parsers. An interesting result for further studies lies in the fact that keeping the common boundaries obtained by two algorithms eliminates ill-formed and least remarkable boundaries. At the same time, it increases the size of the blocks while keeping the available linguistic information. Finally, the perspective of combining different approaches makes it possible to propose a parameterized granularity, balancing the relative importance of the different competing approaches. Other experiments have to be done in order to learn more about multiplexing parser outputs: cascaded multiplexing will reduce the quantity of chunks per sentence and cause a loss of data that has to be constrained and controlled.

Abney, S. (1991). Parsing by chunks. In Berwick, R., Abney, S., Tenny, C. (Eds.), Principle-based parsing (pp. 257-278).
Abney, S. (1997). Part-of-speech tagging and partial parsing. In Young, S., Bloothooft, G. (Eds.), Corpus-Based Methods in Language and Speech Processing (pp. 118-136). Kluwer Academic Publishers, Dordrecht.
Allen, J., Hunnicutt, S., Carlson, R., Granström, B. (1979). MITalk-79: The 1979 MIT text-to-speech system. In Wolf and Klatt (Eds.), Speech Communications (pp. 507-510). Papers presented at the 97th Meeting of the ASA.
Allen, J., Hunnicutt, S., Klatt, D. (1987). From text to speech: The MITalk system. Cambridge University Press.
Blache, P. & Balfourier, J.-M. (2001a). Property Grammars: a Flexible Constraint-Based Approach to Parsing. In Proceedings of IWPT-2001.
Blache, P. (2001b). Les Grammaires de Propriétés : Des contraintes pour le traitement automatique des langues naturelles. Hermès.
Di Cristo, A., Di Cristo, P., Campione, E., Veronis, J. (2000). A prosodic model for text-to-speech synthesis in French.
Di Cristo, P. (1998). Génération automatique de la prosodie pour la synthèse à partir du texte. Thèse de Doctorat.
Hirschberg, J., Rambow, O. (2001). Learning Prosodic Features using a Tree Representation. AT&T Labs Research. Eurospeech 2001, Scandinavia.
Liberman, M., Church, K. (1992). Text analysis and word pronunciation in text-to-speech synthesis. In Furui, S., Sondhi, M.M. (Eds.), Advances in Speech Signal Processing (pp. 791-831). New York: Dekker.
Martin, L.E. (1990). Knowledge Extraction. In Proceedings of the Twelfth Annual Conference of the Cognitive Science Society (pp. 252-262). Hillsdale, NJ: Lawrence Erlbaum Associates.
7,116,703
A Comparison of Various Methods for Concept Tagging for Spoken Language Understanding
The extraction of flat concepts out of a given word sequence is usually one of the first steps in building a spoken language understanding (SLU) or dialogue system. This paper explores five different modelling approaches for this task and presents results on a French state-of-the-art corpus, MEDIA. Additionally, two log-linear modelling approaches could be further improved by adding morphologic knowledge. This paper goes beyond what has been reported in the literature, e.g. in (Raymond & Riccardi 07). We applied the models on the same training and testing data and used the NIST scoring toolkit to evaluate the experimental results, to ensure identical conditions for each of the experiments and the comparability of the results. Using a model based on conditional random fields, we achieve a concept error rate of 11.8% on the MEDIA evaluation corpus.
[ 3446853, 7418935, 16035623 ]
A Comparison of Various Methods for Concept Tagging for Spoken Language Understanding

Stefan Hahn hahn@cs.rwth-aachen.de, Patrick Lehnen lehnen@cs.rwth-aachen.de, Hermann Ney ney@cs.rwth-aachen.de — Lehrstuhl für Informatik 6, Computer Science Department, RWTH Aachen University, D-52056 Aachen, Germany
Christian Raymond christian.raymond@univ-avignon.fr — LIA/CNRS, University of Avignon, BP1228, 84911 Avignon cedex 09, France

The extraction of flat concepts out of a given word sequence is usually one of the first steps in building a spoken language understanding (SLU) or dialogue system. This paper explores five different modelling approaches for this task and presents results on a French state-of-the-art corpus, MEDIA. Additionally, two log-linear modelling approaches could be further improved by adding morphologic knowledge. This paper goes beyond what has been reported in the literature, e.g. in (Raymond & Riccardi 07). We applied the models on the same training and testing data and used the NIST scoring toolkit to evaluate the experimental results, to ensure identical conditions for each of the experiments and the comparability of the results. Using a model based on conditional random fields, we achieve a concept error rate of 11.8% on the MEDIA evaluation corpus.

Introduction

The task of concept tagging is usually defined as the extraction of a sequence of concepts out of a given word sequence. A concept represents the smallest unit of meaning that is relevant for a specific task. A concept may contain various information, like the attribute name or the corresponding value. An example from the MEDIA corpus can be represented as:

...au sept avril temps-date[07/04] dans cet hotel... objetBB[hotel]

where the attribute values are written in square brackets behind the attribute name. Within this paper we distinguish between two tasks: the extraction of just the attribute name, and the extraction of the attribute name together with the corresponding attribute value.

In the following section, the various methods explored in this paper are described briefly. Section 3 introduces the morphologic features which led to improved performance for the log-linear models. After the presentation of the training and testing data in Section 4, the experimental results are presented in Section 5. A summary is given in Section 6. The paper concludes with an outlook in Section 7.

Methods

Log-Linear Models

We use two log-linear models, which differ only in the normalization term. The first one is normalized on the position level (abbreviated log-pos) and the second on the sentence level (conditional random fields, abbreviated CRF). The general form of these models is given in equation (1) as a conditional probability of a concept sequence $c_1^N = c_1, \ldots, c_N$ given a word sequence $w_1^N = w_1, \ldots, w_N$:

$p(c_1^N \mid w_1^N) = \frac{1}{Z} \prod_{n=1}^{N} \exp\left( \sum_{m=1}^{M} \lambda_m \cdot h_m(c_{n-1}, c_n, w_{n-2}^{n+2}) \right)$  (1)
The log-linear models are based on feature functions $h_m(c_{n-1}, c_n, w_{n-2}^{n+2})$ representing the information extracted from the given utterance, the corresponding parameters $\lambda_m$, which are estimated in a training process, and a normalization term $Z$, discussed for each model in Sections 2.1.2 and 2.1.3 respectively.

Feature Functions

In our experiments we use binary feature functions $h_m(c_{n-1}, c_n, w_{n-2}^{n+2})$, i.e. they return either the value "0" or "1". If a pre-defined combination of the values $c_{n-1}, c_n, w_{n-2}, \ldots, w_{n+2}$ is found within the data, the value "1" is returned, otherwise "0". For example, a feature function may fire if and only if the predecessor word $w_{n-1}$ is "the" and the concept $c_n$ is "name". Another example is a feature that fires if and only if the predecessor concept $c_{n-1}$ is "number" and the concept $c_n$ is "currency". We call the feature functions based on the predecessor, current, and successor words lexical features, and the features based on the predecessor concept bigram features. For clarity, we abbreviate the term in the numerator of equation (1) by

$H(c_{n-1}, c_n, w_{n-2}^{n+2}) = \exp\left( \sum_{m=1}^{M} \lambda_m \cdot h_m(c_{n-1}, c_n, w_{n-2}^{n+2}) \right)$

resulting in

$p(c_1^N \mid w_1^N) = \frac{1}{Z} \prod_{n=1}^{N} H(c_{n-1}, c_n, w_{n-2}^{n+2})$.  (2)

Log-linear model on the position level

One possible normalization of equation (2) is on the position level:

$p(c_1^N \mid w_1^N) = \prod_{n=1}^{N} \frac{H(c_{n-1}, c_n, w_{n-2}^{n+2})}{\sum_{\tilde{c}} H(c_{n-1}, \tilde{c}, w_{n-2}^{n+2})}$

This results in the following normalization term:

$Z = \prod_{n=1}^{N} \sum_{\tilde{c}} H(c_{n-1}, \tilde{c}, w_{n-2}^{n+2})$.  (3)

Using equation (2) with normalization (3) and a given training dataset $\{\{c_1^N\}_t, \{w_1^N\}_t\}_{t=1}^{T}$, the criteria for training and decision making are given by

$\hat{\lambda}_1^M = \operatorname{argmax}_{\lambda_1^M} \sum_{t=1}^{T} \log p(\{c_1^N\}_t \mid \{w_1^N\}_t)$  (4)

and

$\hat{c}_1^N(w_1^N) = \operatorname{argmax}_{c_1^N} p(c_1^N \mid w_1^N)$  (5)

respectively. This modelling approach is usually referred to as the Maximum Entropy approach in the literature, e.g. in (Bender & Macherey+ 03).

Linear Chain Conditional Random Fields (CRFs)

Linear chain conditional random fields (CRFs), as defined in (Lafferty & McCallum+ 01), can be represented with equation (2) and a normalization $Z$ on the sentence level:

$Z = \sum_{\tilde{c}_1^N} \prod_{n=1}^{N} H(\tilde{c}_{n-1}, \tilde{c}_n, w_{n-2}^{n+2})$  (6)

resulting in the probability

$p(c_1^N \mid w_1^N) = \frac{\prod_{n=1}^{N} H(c_{n-1}, c_n, w_{n-2}^{n+2})}{\sum_{\tilde{c}_1^N} \prod_{n=1}^{N} H(\tilde{c}_{n-1}, \tilde{c}_n, w_{n-2}^{n+2})}$.  (7)

For both log-linear modelling approaches, the same training and decision criteria are applied. For our experiments, we apply the CRF++ toolkit (Kudo 05) used in (Kudo & Yamamoto+ 04).

Machine Translation (MT)

We use a standard phrase-based machine translation method, which combines several models: phrase-based models in source-to-target and target-to-source direction, IBM-1-like scores at the phrase level, again in source-to-target and target-to-source direction, a target language model, and additional word and phrase penalties. These models are log-linearly combined, and the respective model weights $\lambda_m$ are optimized using minimum error training. A detailed description of the single models can be found in (Mauser & Zens+ 06).

Support Vector Machines (SVMs)

SVMs realize a standard classifier-based approach to concept tagging. Binary classifiers are trained for each pair of competing classes. For the final classification, the weighted voting of the single classifiers is considered. We apply the open-source toolkit YAMCHA (Kudo & Matsumoto 01).
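To make the binary feature functions and the position-level normalization of equation (3) concrete, here is a minimal sketch with toy weights and assumed feature templates; it is not the CRF++ or YAMCHA implementation used in the paper.

```python
import math

# Minimal position-level log-linear scorer in the spirit of eq. (3):
# binary features over (previous concept, candidate concept, word window),
# with toy weights. Feature templates and weights are assumptions.
CONCEPTS = ["null", "nombre-chambre", "chambre-type"]

def features(prev_c, c, window):
    """Binary feature functions, encoded as hashable keys that 'fire'."""
    w = window[2]  # current word; window = (w-2, w-1, w, w+1, w+2)
    return [("word+concept", w, c), ("bigram", prev_c, c)]

def p_concept(prev_c, window, weights):
    """exp(sum of fired weights), normalized over all candidate concepts."""
    scores = {c: math.exp(sum(weights.get(f, 0.0)
                              for f in features(prev_c, c, window)))
              for c in CONCEPTS}
    z = sum(scores.values())  # positional normalization term
    return {c: s / z for c, s in scores.items()}

weights = {("word+concept", "chambre", "chambre-type"): 2.0,
           ("bigram", "nombre-chambre", "chambre-type"): 1.0}
window = ("veux", "une", "chambre", "double", "pour")
probs = p_concept("nombre-chambre", window, weights)
print(max(probs, key=probs.get))  # chambre-type
```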
Stochastic Finite State Transducers (SFSTs)

In the SFST approach, the translation process from word sequences $w_1^N$ to concept sequences $c_1^N$ is implemented with finite state machines. The transducer representing the translation process is a composition of:

• a transducer $\lambda_{w2c}$, which groups transducers translating words to concepts,
• a transducer $\lambda_{SLM}$, representing the stochastic conceptual language model $P(w_1^N, c_1^N) = \prod_{n=1}^{N} P(w_n c_n \mid h_n)$ with $h_n = \{w_{n-1} c_{n-1}, w_{n-2} c_{n-2}\}$ (3-gram),
• a transducer $\lambda_{w_1^N}$, which is the FSM representation of the sentence $w_1^N$.

The best translation is the best path in $\lambda_{SLU}$:

$\lambda_{SLU} = \lambda_{w_1^N} \circ \lambda_{w2c} \circ \lambda_{SLM}$  (8)

All operations are done using the AT&T FSM/GRM Library (Mohri & Pereira+ 02).

Morphologic Features

In addition to the lexical and concept bigram features described in Section 2.1.1, we also tested a set of morphological features; e.g., a capitalized word is a hint for the concept "name". We integrated the following features within both log-linear models:

• capitalization: the capitalization feature is true if a word is capitalized, longer than three letters (to omit abbreviations), and does not follow a full stop (to omit words at the beginning of a sentence);
• prefixes of a given length n: the prefix feature is true if the first n letters of a word are equal to a predefined sequence of letters, e.g. for length 2: "in-formal";
• suffixes of a given length: similar to the prefix feature, but operating on the last letters of a word, e.g. for length 2: "current-ly".

Before the model parameters $\lambda_m$ are estimated, a list containing all features that have been seen at least once within the training corpus is generated. A sketch of these feature extractors is given below.
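The following is a minimal sketch of the three morphological feature extractors described above; the length ranges and the sentence-initial test are simplifying assumptions rather than the paper's exact conditions.

```python
def morph_features(word, prev_word=None,
                   prefix_lens=(1, 2, 3, 4), suffix_lens=(1, 2, 3, 4, 5)):
    """Capitalization, prefix and suffix features for one word."""
    feats = []
    # Capitalization: capitalized, longer than three letters (to omit
    # abbreviations), and not sentence-initial (approximated here as:
    # a previous token exists and is not a full stop).
    if (word[:1].isupper() and len(word) > 3
            and prev_word is not None and prev_word != "."):
        feats.append("CAP")
    for n in prefix_lens:
        if len(word) > n:
            feats.append(("prefix", word[:n]))
    for n in suffix_lens:
        if len(word) > n:
            feats.append(("suffix", word[-n:]))
    return feats

print(morph_features("currently", prev_word="is"))
# includes ('prefix', 'cu') and ('suffix', 'ly') among the fired features
```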
Corpus Description

For the comparison of the various concept tagging methods and modelling approaches described in the previous sections, we have chosen a state-of-the-art corpus from a spoken language understanding task, namely the MEDIA corpus (Devillers & Maynard+ 04). This corpus was collected within the scope of the French Media/Evalda project and covers the domain of hotel room reservation and tourist information. It is divided into three parts: a training set (approx. 13k sentences), a development set (approx. 1.3k sentences) and an evaluation set (approx. 3.5k sentences). Since the corpus was collected for the evaluation of dialogue systems, complete dialogues are annotated, i.e. the utterances of both the user and the operator. For this paper, we only consider the dialogue turns uttered by the human users of the system. There are 74 different concept tags, ranging from simple date and time expressions (annotated as date resp. temps) to more complex ones like coreferences (annotated as lienRef-coRef). The attribute names are also written in French, since they were developed within the scope of a French project. One example sentence from the MEDIA training corpus is:

je veux une chambre double pour deux personnes

It translates to "I would like one double room for two persons". The same sentence, annotated on the concept level:

null{je veux} nombre-chambre{une} chambre-type{chambre double} sejour-nbPersonne{pour deux personnes}

So the annotation on the concept level is basically a segmentation of the input sentence into chunks. Since the null tag mainly marks words without semantic meaning, hesitations, etc., the corresponding attribute name and value pairs which have to be extracted by the various algorithms would be:

nombre-chambre[1] chambre-type[double] sejour-nbPersonne[2]

The statistics of the corpora are presented in Table 1. This corpus uses a much richer annotation than explored within this paper: here, we just evaluate the concept tagging performance of the various approaches and drop some specifiers and modal information. I.e., the resulting corpus does not stick completely to the MEDIA evaluation guidelines, but fits well for a comparison of the systems. Thus, only the statistics w.r.t. the word and concept level are presented in the aforementioned table.

Experiments and Results

For all experiments in this paper, we use exactly the same evaluation corpus and the same scoring script, based on the NIST evaluation toolkit (NIST). Thus we ensure that the results of the different modelling approaches are comparable. As evaluation criteria, we use the well-established concept error rate (CER) and sentence error rate (SER). The CER is defined as the ratio of the sum of deleted, inserted and confused concepts w.r.t. a Levenshtein alignment for a given reference concept string, and the total number of concepts in that reference string. The SER is defined as the ratio of the number of wrong tag sequences to the total number of tag sequences w.r.t. the concept level.

In a first experiment, we compare the various models as described in Section 2 w.r.t. tagging performance on the MEDIA evaluation set (cf. Table 2). The CRF approach outperforms all other models on both tasks, attribute name extraction and additional attribute value extraction. We obtain a CER of 11.8% on the evaluation corpus considering attribute names only, and 16.2% also considering attribute values. The log-linear approach on the position level is second best. Thus, exponential models seem to have better tagging performance than the other three approaches. For all of the five systems, the attribute value extraction is done in the same way using a rule-based approach.

In a second experiment, we explore the effect of morphologic features within the log-linear models. Here, we only report results on attribute name extraction. We tried various feature sets and optimized the parameter settings on the development set of the MEDIA corpus. For the CRF model, we get a CER of 12.8% when taking into account only features on the word and concept level. Adding morphologic features reduces the CER by 8% relative, from 12.8% down to 11.8% (cf. Table 3). The gain in SER is also roughly 8% relative. For the position-dependent log-linear modelling approach, the CER drops from 16.0% with just the elementary features down to 14.9%, a gain of 7% relative. The SER improves by roughly 6% relative. The results are presented in Table 4.

Conclusion

In this paper, we presented a comparison of various models for concept tagging on the MEDIA corpus w.r.t. tagging performance. Two of the models could be further improved by adding morphologic knowledge. To ensure the comparability of the models, they were trained and tested on exactly the same data sets, and the evaluation of the tagging hypotheses was done using the NIST evaluation toolkit. With the best model, we achieved a CER of 11.8% on the MEDIA evaluation set.
Outlook

In addition to improving the single systems, we plan to do experiments on system combination. Also, since there is usually an ASR component involved in an SLU system, we will explore the effect of ASR errors on the tagging performance. It would also be interesting to apply the presented models to lattices and use ASR-based scores, e.g. word posterior confidences, to improve the SLU systems.

Table 1: Statistics of the MEDIA training, development and evaluation corpora used for all experiments. (Table data not reproduced.)

Table 2: Results on the MEDIA evaluation corpus for various modelling approaches. The error rates are given w.r.t. attribute name extraction only (columns 2, 3) and additional attribute value extraction (columns 4, 5).
model     attribute CER [%]  attribute SER [%]  attribute/value CER [%]  attribute/value SER [%]
CRF       11.8               20.6               16.2                     23.0
log-pos   14.9               22.2               19.3                     26.4
FST       17.9               24.7               21.9                     28.1
SVM       18.5               24.5               22.2                     28.5
MT        19.2               24.6               23.3                     27.6

Table 3: Effect of using morphologic features for the CRF modelling approach (MEDIA evaluation set, attribute names).
features          CER [%]  SER [%]
lexical [-2..2]   19.5     28.8
+concepts[-1]     12.8     22.3
+capitalization   12.6     22.2
+suffixes [1..5]  12.0     21.3
+prefixes [1..4]  11.8     20.6

Table 4: Effect of using morphologic features for the log-linear modelling approach on the position level (MEDIA evaluation set, attribute names).
features          CER [%]  SER [%]
lexical [-2..2]   20.1     26.4
+concepts[-1]     16.0     23.5
+capitalization   15.5     23.2
+suffixes [4..7]  15.3     22.9
+prefixes [1..5]  14.9     22.2

Acknowledgements

This work was partly funded by the European Union under the specific targeted research project LUNA — spoken language understanding in multilingual communication systems (FP6-033549).

O. Bender, K. Macherey, F.-J. Och, H. Ney. Comparison of alignment templates and maximum entropy models for natural language understanding. In Conference of the European Chapter of the Association for Computational Linguistics, pp. 11-18, Budapest, Hungary, April 2003.
L. Devillers, H. Maynard, S. Rosset et al. The French Media/Evalda project: the evaluation of the understanding capability of spoken language dialog systems. In Proceedings of the Fourth Int. Conf. on Language Resources and Evaluation (LREC), pp. 855-858, Lisbon, Portugal, May 2004.
T. Kudo, Y. Matsumoto. Chunking with support vector machines. In Proceedings of the Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL), pp. 1-8, Pittsburgh, PA, USA, June 2001.
T. Kudo, K. Yamamoto, Y. Matsumoto. Applying conditional random fields to Japanese morphological analysis. In D. Lin, D. Wu (Eds.), Proceedings of EMNLP 2004, pp. 230-237, Barcelona, Spain, July 2004. Association for Computational Linguistics.
T. Kudo. CRF++ toolkit. http://crfpp.sourceforge.net/, 2005.
J. Lafferty, A. McCallum, F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML), pp. 282-289, Williamstown, MA, USA, June 2001.
A. Mauser, R. Zens, E. Matusov, S. Hasan, H. Ney. The RWTH statistical machine translation system for the IWSLT 2006 evaluation. In International Workshop on Spoken Language Translation, pp. 103-110, Kyoto, Japan, Nov. 2006. Best Paper Award.
M. Mohri, F. Pereira, M. Riley. Weighted finite-state transducers in speech recognition. Computer, Speech and Language, Vol. 16, No. 1, pp. 69-88, 2002.
NIST. Speech recognition scoring toolkit (SCTK). http://www.nist.gov/speech/tools/.
C. Raymond, G. Riccardi. Generative and discriminative algorithms for spoken language understanding. In Interspeech, pp. 1605-1608, Antwerp, Belgium, Aug. 2007.
236,898,615
[]
Language and encoding scheme identification of extremely large sets of multilingual text documents

Pavol Zavarsky zavarsky@vos.nagaokaut.ac.jp, Yoshiki Mikami, Shota Wada — Department of Management and Information Sciences, Nagaoka University of Technology, 1603-1 Kamitomioka, 940-2188 Nagaoka, Japan

In the paper we present an outline of our approach to identify languages and encoding schemes in extremely large sets of multilingual documents. The large sets we are analyzing in our Language Observatory project [1] are formed by dozens of millions of text documents. We present an approach which allows us to analyze about 250 documents every second (about 20 million documents/day) on a single Linux machine. Using multithreaded processing on a cluster of Linux servers, we are easily able to analyze more than 100 million documents/day.

Introduction

Identification of written natural languages and character encoding schemes of text documents is not considered to be a difficult problem. This is true if a document is not written in many languages, is long enough, and the number of documents to be analyzed is not extremely large, so that the identification of all documents can be finished within an acceptable period of time. There are two major approaches to written language identification: N-gram based and word based, see e.g. [3]-[7]. Almost all existing approaches to language and character encoding scheme identification are language-neutral, in the sense that they can identify any languages that they have been trained on. Both N-gram and word based tools can be trained with any languages the user likes. When the user knows what languages he wants to distinguish between in his application, he gathers up training material in each of these, trains the tool, and uses the tool. Most of the tools are trained on European and a few Asian languages, because those are the most prevalent and useful in on-line documents, but the tools can be successfully used with many other languages. The important notion to understand is the distinction between the algorithm of the identification process itself, which is usually a kind of N-gram or word based classifier, an implementation of the algorithm, and the byte streams of training data. In the following, we present an outline of our implementation of quad-gram vector distance based language and character encoding identification, which allows us to analyze more than 1500 documents every second.

Language and character encoding identification in the Language Observatory project

The Language Observatory project [1] aims to provide, among others, information such as:

- How many written languages are found in cyberspace?
- How many web pages are written in a given language, e.g. Tamil?
- What kinds of character encoding schemes are employed to encode a given language, e.g. Khmer?

To achieve its goals, the Language Observatory project has to collect and analyze about 10 billion web pages every year. In other words, about 27 million web pages must be collected, parsed, and analyzed every day. We are currently able to collect the information we are interested in from about 30 million web documents every day, see also [2].
The languages and character encoding schemes of the web pages form a part of the information we are extracting from the web pages. We have already collected, parsed and analyzed several hundred million documents and more than 1.5 billion URL links found on documents on web servers in the countries of the Organization of Islamic Conference and the countries of Asia.

Efficient access to collected documents

Identification of languages and character encoding schemes in more than 20 million web documents every day requires both an efficient storage of downloaded and parsed web documents and an efficient implementation of the language and encoding scheme identification. In the Language Observatory project we store the snapshots of portions of the web in special store files. A typical size of a store file is 20 GB to 100 GB, depending on the size of the portion of the web we are interested in. The store file contains meta-information, such as HTTP headers, and the compressed page content of about 2 million to 10 million web pages. The store file is a sequence of byte blocks of page records. Every page record starts with a header, which contains a magic cookie used for synchronization purposes. The original page content, without any character encoding conversion, is present in each valid page record of the store file in a compressed form, using a Java deflater. This special file format allows very efficient storage and fast access to all the stored documents from content analysis application programs, which are not limited to language and character encoding identification.

Efficient implementation of language and character encoding scheme identification

We use language and character encoding scheme identification based on quad-gram profiles of languages and encoding schemes, and Java packages, classes and methods provided to us by Basis Technology [8]. We make efficient use of an object-oriented approach in the language and character identification: our identification scenario employs reusable language and encoding scheme objects that can be called successively to perform detection on all documents stored in the store files outlined in the previous section. During the identification, a quad-gram profile is built for each valid web page in the store file, and a vector distance between the input profile and each built-in profile is calculated. The best match has the shortest distance. A multi-profile hash containing all quad-grams of all built-in profiles is constructed at initialization time. This approach allows adding, modifying and removing built-in profiles of languages and encoding schemes. We verified that the runtime performance is linearly affected by the size of the store file, i.e. by the number of documents and the number of unique quad-grams in the documents. We also verified that the runtime performance is linearly affected by the size of the built-in profiles of supported languages and encoding schemes. We have also tested and safely used language and encoding scheme identification in a multi-threaded application run in parallel on ten machines of the Language Observatory Linux cluster. With multi-threaded identification, we are able to analyze more than 1500 web pages every second.
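As a minimal sketch of quad-gram profile matching in this spirit, the code below uses the classic rank-based "out-of-place" measure from Cavnar and Trenkle's N-gram text categorization rather than Basis Technology's proprietary vector distance; the toy training strings are illustrative only.

```python
from collections import Counter

def quadgram_profile(text, top=300):
    """Rank the most frequent character 4-grams of a document."""
    grams = Counter(text[i:i + 4] for i in range(len(text) - 3))
    return {g: rank for rank, (g, _) in enumerate(grams.most_common(top))}

def out_of_place(doc_profile, lang_profile, max_penalty=300):
    """Sum of rank displacements; a smaller value means a closer match."""
    return sum(abs(rank - lang_profile.get(g, max_penalty))
               for g, rank in doc_profile.items())

# Toy built-in profiles; a real system trains one profile per
# (language, encoding scheme) pair on large corpora.
profiles = {
    "english": quadgram_profile("the quick brown fox jumps over the lazy dog " * 50),
    "french": quadgram_profile("le renard brun rapide saute par-dessus le chien " * 50),
}

doc = quadgram_profile("the dog jumps over the fox")
print(min(profiles, key=lambda lang: out_of_place(doc, profiles[lang])))  # english
```

Because the built-in profiles are fixed after initialization, the profile objects can be reused across documents and threads, which is what makes the multi-threaded setup described above efficient.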
Conclusion

An efficient storage format that allows fast access to the stored text documents plays a crucial role in applications, such as our Language Observatory project, requiring millions of documents to be parsed and analyzed every day. The efficient storage format briefly described in this paper enables an efficient implementation of language and encoding identification based on the concepts of object reusability and multi-threaded programming. In conclusion, the Language Observatory project aims to help bridge the Digital Divide and welcomes participation and contributions from all interested researchers around the world.

Acknowledgements

The authors wish to thank Steve Cohen and Andrew Paulsen from Basis Technology, USA, and Dr. Massimo Santini, Dr. Paolo Boldi, and Dr. Sebastiano Vigna from the University of Milan, Italy, for their help and suggestions. The study was made possible by the financial support of the Japan Science and Technology Agency (JST) under the RISTEX program.

Language Observatory Project (2004-2006). http://www.language-observatory.org
P. Boldi, S. Vigna, M. Santini. The UbiCrawler Project. http://ubi.iit.cnr.it/projects/ubicrawler/
K. R. Beesley (1988). Language identifier: A computer program for automatic natural-language identification of on-line text. In Proceedings of the 29th Annual Conference of the American Translators Association, pages 47-54.
J. C. Schmitt (1991). Trigram-based method of language identification. U.S. Patent number 5062143.
J. M. Prager (1999). Linguini: Language identification for multilingual documents. In 32nd Hawaii International Conference on System Sciences, Hawaii, USA.
W. B. Cavnar and J. M. Trenkle (1994). N-gram based text categorization. In Symposium on Document Analysis and Information Retrieval, pages 161-176, University of Nevada, Las Vegas.
G. Grefenstette (1995). Comparing two language identification schemes. In 3rd International Conference on Statistical Analysis of Textual Data (JADT 95), Rome, Italy.
H. El-Shishiny, A. Troussov, DJ McCloskey, M. Takeuchi, A. Nevidomsky, P. Volkov (2004). Word fragments based Arabic language identification. In NEMLAR Conference on Arabic Language Resources and Tools, Cairo, Egypt.
Rosette Language Identifier (2004). Basis Technology. http://www.basistech.com
259,370,768
Annotating Mentions Alone Enables Efficient Domain Adaptation for Coreference Resolution
Although recent neural models for coreference resolution have led to substantial improvements on benchmark datasets, transferring these models to new target domains containing out-of-vocabulary spans and requiring differing annotation schemes remains challenging. Typical approaches involve continued training on annotated target-domain data, but obtaining annotations is costly and time-consuming. We show that annotating mentions alone is nearly twice as fast as annotating full coreference chains. Accordingly, we propose a method for efficiently adapting coreference models, which includes a high-precision mention detection objective and requires annotating only mentions in the target domain. Extensive evaluation across three English coreference datasets: CoNLL-2012 (news/conversation), i2b2/VA (medical notes), and previously unstudied child welfare notes, reveals that our approach facilitates annotation-efficient transfer and results in a 7-14% improvement in average F1 without increasing annotator time 1 .
[]
Annotating Mentions Alone Enables Efficient Domain Adaptation for Coreference Resolution

Nupoor Gandhi nmgandhi@cs.cmu.edu, Anjalie Field anjalief@cs.cmu.edu, Emma Strubell estrubel@cs.cmu.edu — Carnegie Mellon University

Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, Volume 1: Long Papers, July 9-14, 2023

Although recent neural models for coreference resolution have led to substantial improvements on benchmark datasets, transferring these models to new target domains containing out-of-vocabulary spans and requiring differing annotation schemes remains challenging. Typical approaches involve continued training on annotated target-domain data, but obtaining annotations is costly and time-consuming. We show that annotating mentions alone is nearly twice as fast as annotating full coreference chains. Accordingly, we propose a method for efficiently adapting coreference models, which includes a high-precision mention detection objective and requires annotating only mentions in the target domain. Extensive evaluation across three English coreference datasets: CoNLL-2012 (news/conversation), i2b2/VA (medical notes), and previously unstudied child welfare notes, reveals that our approach facilitates annotation-efficient transfer and results in a 7-14% improvement in average F1 without increasing annotator time 1.

Introduction

Neural coreference models have made substantial strides in performance on standard benchmark datasets such as the CoNLL-2012 shared task, where average F1 has improved by 20% since 2016 (Durrett and Klein, 2013; Dobrovolskii, 2021; Kirstain et al., 2021). Modern coreference architectures typically consist of an encoder, a mention detector, and an antecedent linker. All of these components are optimized end-to-end, using only an antecedent linking objective, so expensive coreference chain annotations are necessary for training (Aralikatte and Søgaard, 2020; Li et al., 2020a). These results have encouraged interest in deploying models in domains like medicine and child protective services, where a small number of practitioners need to quickly obtain information from large volumes of text (Uzuner et al., 2012; Saxena et al., 2020).

Figure 1: Model coreference performance (avg F1) as a function of continued training on limited target-domain data requiring varying amounts of annotator time. The source domain is news/conversation (OntoNotes) and the target domain is medical notes (i2b2/VA). Using our method to adapt coreference models using only mentions in the target domain, we achieve strong coreference performance with less annotator time.

However, successes over curated data sets have not fully translated to text containing technical vocabulary, frequent typos, or inconsistent syntax. Coreference models struggle to produce meaningful representations for new domain-specific spans and may require many examples to adapt (Uppunda et al., 2021; Lu and Ng, 2020; Zhu et al., 2021). Further, coreference models trained on standard benchmarks are not robust to differences in annotation schemes for new domains (Bamman et al., 2020). For example, OntoNotes does not annotate singleton mentions, those that do not corefer with any other mention.
A system trained on OntoNotes would implicitly learn to detect only entities that appear more than once, even though singleton retrieval is often desired in other domains (Zeldes, 2022). Also, practitioners may only be interested in retrieving a subset of domain-specific entities. Continued training on target-domain data is an effective approach (Xia and Van Durme, 2021), but it requires costly and time-consuming coreference chain annotations in the new domain (Sachan et al., 2015). Annotating data in high-stakes domains like medicine and child protective services is particularly difficult, where privacy needs to be preserved and domain experts have limited time.

Our work demonstrates that annotating only mentions is more efficient than annotating full coreference chains for adapting coreference models to new domains with a limited annotation budget. First, through timed experiments using the i2b2/VA medical notes corpus (Uzuner et al., 2012), we show that most documents can be annotated for mention detection twice as fast as for coreference resolution (§3). Then, we propose how to train a coreference model with mention annotations by introducing an auxiliary mention detection objective to boost mention precision (§4). With this auxiliary objective, we observe that fewer antecedent candidates yield stronger linker performance. Continuity with previous feature-based approaches (Moosavi and Strube, 2016a; Recasens et al., 2013; Wu and Gardner, 2021) suggests that this relationship between high-precision mention detection and strong coreference performance in low-resource settings extends beyond the architecture we focus on (Lee et al., 2018).

We evaluate our methods using English text data from three domains: OntoNotes (Pradhan et al., 2012), i2b2/VA medical notes (Uzuner et al., 2012), and a new (unreleased) corpus of child welfare notes obtained from a county-level Department of Human Services (DHS). We experiment with standard benchmarks for reproducibility, but we focus primarily on real-world settings where there is interest in deploying NLP systems and limited capacity for in-domain annotations (Uzuner et al., 2012; Saxena et al., 2020). For a fixed amount of annotator time, our method consistently outperforms continued training with target-domain coreference annotations, when transferring both within and across annotation styles and vocabulary.

Our primary contributions include: timing experiments showing the efficiency of mention annotations (§3), and methodology to easily integrate mention annotations (§4) into a common coreference architecture (Lee et al., 2018). Furthermore, to the best of our knowledge, this is the first work to examine coreference resolution in child protective settings. With empirical results demonstrating 7-14% improvements in F1 across 3 domains, we find that our approach for adaptation using mention annotations alone is an efficient approach for practical, real-world datasets.

Background and Task Definition

Neural Coreference Models

We focus our examination on the popular and successful neural approach to coreference introduced in Lee et al. (2017). This model includes three components: an encoder to produce span representations, a mention detector that outputs mention scores for candidate mentions, and a linker that outputs candidate antecedent scores for a given mention. For a document of length T, there are $T(T-1)/2$ possible mentions (sets of contiguous words).
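As a minimal illustration (with an assumed maximum span width, which real systems also impose), the candidate spans can be enumerated as follows:

```python
# Enumerating candidate spans (contiguous token sequences) before mention
# pruning; the maximum span width is an assumption made for illustration.
def candidate_spans(num_tokens, max_width=10):
    return [(start, end)  # inclusive token indices
            for start in range(num_tokens)
            for end in range(start, min(start + max_width, num_tokens))]

print(len(candidate_spans(8, max_width=8)))  # 36 candidate spans for 8 tokens
```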
For the set of candidate mentions, the system assigns a pairwise score between each mention and each candidate antecedent. The set of candidate antecedents is all previous candidate mentions in the document and a dummy antecedent (representing the case where there is no antecedent). For a pair of spans i, j, the pairwise score is composed of mention scores s_m(i), s_m(j), denoting the likelihood that spans i and j are mentions, and an antecedent score s_a(i, j), representing the likelihood that span j is the antecedent of span i:

s(i, j) = s_m(i) + s_m(j) + s_a(i, j)

This architecture results in a model complexity of O(T^4), so it is necessary to prune the set of mentions. Lee et al. (2018) introduce coarse-to-fine (c2f) pruning: from the full set of candidate spans, c2f prunes down to M spans based on the span mention scores s_m(i). Then, for each span i, we consider antecedents j based on the sum of their mention scores s_m(i), s_m(j) and a coarse but efficient pairwise scoring function as defined in Lee et al. (2018).

Domain Adaptation Task Setup

In this work we investigate the following pragmatic domain adaptation setting: given a text corpus annotated for coreference from a source domain S, an un-annotated corpus from a target domain T, and a limited annotation budget, our goal is to maximize coreference F1 performance in the target domain under the given annotation budget. We define this budget as the amount of annotation time. The most straightforward approach to this task is to annotate documents with full coreference chains in the target domain until the annotation budget is exhausted. Given an existing coreference model trained on the source domain, we can then continue training on the annotated subset of the target domain. With a budget large enough to annotate at least 100 documents, this has been shown to work well for some domains (Xia and Van Durme, 2021).

Effect of In-Domain Training on Mention Detection and Antecedent Linking

Given that out-of-domain vocabulary is a common aspect of domain shift in coreference models (Uppunda et al., 2021; Lu and Ng, 2020), we hypothesize that mention detection transfer plays an important role in overall coreference transfer across domains. To test this hypothesis, we conduct a preliminary experiment examining how freezing the antecedent linker affects overall performance in the continued-training domain-adaptation setting described above. We train a c2f model with a SpanBERT encoder (Joshi et al., 2020) on OntoNotes, a standard coreference benchmark, and evaluate performance over the i2b2/VA corpus, a domain-specific coreference data set consisting of medical notes (see §5.2 for details). We additionally use the training set of i2b2/VA for continued in-domain training, and we isolate the impact of mention detection by training with and without freezing the antecedent linker.

Results are given in Table 1. Continued training of just the encoder and mention detector results in a large improvement of 17 points over the source-domain baseline, whereas unfreezing the antecedent linker does not further significantly improve performance. This result implies that mention detection can be disproportionately responsible for performance improvements from continued training. If adapting only the encoder and mention detection portions of the model yields strong performance gains, this suggests that mention-only annotations, as opposed to full coreference annotations, may be sufficient for adapting coreference models to new domains.
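To make the setup concrete, the following minimal PyTorch sketch (our illustration, not the authors' released code; the module split and attribute names are hypothetical stand-ins) shows the pairwise score composition from §2.1 and how the antecedent linker can be frozen for the continued-training experiment above.

import torch
import torch.nn as nn

class ToyCorefModel(nn.Module):
    """Toy stand-in for the encoder / mention scorer / antecedent linker split."""
    def __init__(self, dim=768):
        super().__init__()
        self.encoder = nn.Linear(dim, dim)                 # stands in for SpanBERT
        self.mention_scorer = nn.Linear(dim, 1)            # produces s_m
        self.antecedent_scorer = nn.Bilinear(dim, dim, 1)  # produces s_a

    def pairwise_score(self, x_i, x_j):
        # s(i, j) = s_m(i) + s_m(j) + s_a(i, j)
        h_i, h_j = self.encoder(x_i), self.encoder(x_j)
        return (self.mention_scorer(h_i) + self.mention_scorer(h_j)
                + self.antecedent_scorer(h_i, h_j))

model = ToyCorefModel()
# Continued training with the antecedent linker frozen: only the encoder and
# mention scorer receive gradient updates on target-domain examples.
for p in model.antecedent_scorer.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=2e-5)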
Table 1: Unfreezing the antecedent linker does not result in a significant improvement over just tuning the mention detector (MD) and encoder (Enc). All differences between tuned models and SpanBERT + c2f were statistically significant (p < .05).

Timed Annotation Experiments

In §2 we established that adapting just the mention detection component of a coreference model to a new domain can be as effective as adapting both mention detection and antecedent linking. In this section we demonstrate that annotating mentions is approximately twice as fast as annotating full coreference chains. While coreference has been established as a time-consuming task to annotate for domain experts (Aralikatte and Søgaard, 2020; Li et al., 2020a), no prior work measures the relative speed of mention versus full coreference annotation. Our results suggest that, assuming a fixed annotation budget, coreference models capable of adapting to a new domain using only mention annotations can leverage a corpus of approximately twice as many annotated documents compared to models that require full coreference annotations.

We recruited 7 in-house annotators with a background in NLP to annotate two tasks for the i2b2/VA dataset. For the first, mention-only annotation task, annotators were asked to highlight spans corresponding to mentions defined in the i2b2/VA annotation guidelines. For the second, full coreference task, annotators were asked to both highlight spans and additionally draw links between mention pairs if coreferent. All annotators used INCEpTION (Klie et al., 2018) and underwent a 45-minute training session to learn and practice using the interface before beginning timed experiments [2]. In order to measure the effect of document length, we sampled short (~200 words), medium (~500), and long (~800) documents. Each annotator annotated four documents for coreference resolution and four documents for mention identification (one short, one medium, and two long, as most i2b2/VA documents are long). Each document was annotated by one annotator for coreference, and one for mention detection. This annotation configuration maximizes the number of documents annotated (as opposed to the number of annotators per document), which is necessary due to the high variance in style and technical jargon in the medical corpus. In total, 28 documents were annotated.

Table 2 reports the average time taken to annotate each document. On average, it takes 1.85X more time to annotate coreference than mention detection, and the disparity is more pronounced (2X) for longer documents. In Table 6 (Appendix A) we additionally report inter-annotator agreement. Agreement is slightly higher for mention detection, although the differences in agreement between the two tasks are not significant due to the small size of the experiment. Although results may vary for different interfaces, we show empirically that mention annotation is faster than coreference annotation.

Model

Given the evidence that a large benefit of continued training for domain adaptation is concentrated in the mention detector component of the coreference system (§2.3), and that mention annotations are much faster than coreference annotations (§3), in this section we introduce methodology for training a neural coreference model with mention annotations. Our approach includes two core components focused on mention detection: a modification to mention pruning (§4.2) and auxiliary mention detection training (§4.3). We also incorporate an auxiliary masking objective (§4.4) targeting the encoder.
Baseline

In our baseline model architecture (Lee et al., 2018), model components are trained using a coreference loss, where Y(i) is the cluster containing span i predicted by the system and GOLD(i) is the gold cluster containing span i:

CL = \log \prod_{i=1}^{N} \sum_{\hat{y} \in Y(i) \cap \mathrm{GOLD}(i)} P(\hat{y})

Of the set of N candidate spans, for each span i we want to maximize the likelihood that the correct antecedent set Y(i) ∩ GOLD(i) is linked with the current span. The distribution over all possible antecedents for a given span i is defined using the scoring function s described in §2:

P(y) = \frac{e^{s(i,y)}}{\sum_{y' \in Y} e^{s(i,y')}}

Mention Pruning Modification

As described in §2, c2f pruning reduces the space of possible spans; however, there is still high recall in the candidate mentions. For example, our SpanBERT c2f model trained and evaluated over OntoNotes achieves 95% recall and 23% precision for mention detection. In state-of-the-art coreference systems, high recall with c2f pruning works well and makes it possible for the antecedent linker to correctly identify antecedents, whereas aggressive pruning can drop gold mentions. Here, we hypothesize that in domain adaptation settings with a fixed number of in-domain data points for continued training, high-recall mention detection is not effective. More specifically, the benefits of high-recall mention tagging are only accessible to highly discerning antecedent linkers. Wu and Gardner (2021) show that antecedent linking is harder to learn than mention identification, so given a fixed number of in-domain examples for continued training, the performance improvement from mention detection would surpass that of the antecedent linker. In this case, it would be more helpful to the flailing antecedent linker if the mention detector were precise. Based on this hypothesis, we propose high-precision c2f pruning to enable adaptation using mention annotations alone. We impose a threshold q on the mention score s_m(i) so that only the highest-scoring mentions are preserved.

Auxiliary Mention Detection Task

We further introduce an additional cross-entropy loss to train only the parameters of the mention detector, where x_i denotes the span representation for the i-th span produced by the encoder:

MD = -\sum_{i=1}^{N} \Big[ g(x_i) \log s_m(x_i) + (1 - g(x_i)) \log\big(1 - s_m(x_i)\big) \Big]

The loss is intended to maximize the likelihood of correctly identifying mentions, where the indicator function g(x_i) = 1 iff x_i is a gold mention. The distribution over the set of mention candidates is defined using the mention score s_m. The mention detector is learned using a feed-forward neural network that takes the span representation produced by the encoder as input. The mention identification loss requires only mention labels to optimize.

Auxiliary Masking Task

We additionally use a masked language modeling (MLM) objective as described in Devlin et al. (2019). We randomly sample 15% of the WordPiece tokens to mask and predict the original token using cross-entropy loss. This auxiliary objective is intended to train the encoder to produce better span representations. Since continued training with an MLM objective is common for domain adaptation (Gururangan et al., 2020), we also include it to verify that optimizing the MD loss is not implicitly capturing the value of the MLM loss.
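A minimal sketch of the two mention-focused components follows (our illustration under assumed tensor shapes, not the authors' implementation; whether the threshold q applies to raw scores or to probabilities is an implementation detail we choose here for clarity).

import torch
import torch.nn.functional as F

def high_precision_prune(mention_logits, q=0.5):
    """Keep only candidate spans whose mention probability exceeds q (the
    high-precision c2f threshold)."""
    return torch.sigmoid(mention_logits) > q  # boolean mask over spans

def mention_detection_loss(mention_logits, gold_indicator):
    """Binary cross-entropy realization of the MD objective: gold_indicator[i]
    is 1 iff span i is a gold mention, i.e. the indicator g(x_i)."""
    return F.binary_cross_entropy_with_logits(
        mention_logits, gold_indicator.float())

# Example with 5 candidate spans. On target-domain batches only MD (plus,
# optionally, an MLM loss) contributes, since no coreference links exist there.
logits = torch.tensor([2.1, -0.7, 0.3, -1.9, 1.2])
gold = torch.tensor([1, 0, 1, 0, 1])
loss_md = mention_detection_loss(logits, gold)
kept = high_precision_prune(logits)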
Experiments

We evaluate our model on transferring between data domains and annotation styles. To facilitate reproducibility and for comparison with prior work, we conduct experiments on two existing public data sets. We additionally report results on a new (unreleased) data set, which reflects a direct practical application of our task setup and approach.

Datasets

OntoNotes (ON) (English) is a large, widely used dataset (Pradhan et al., 2012) with standard train-dev-test splits. Unlike the following datasets we use, its annotation style excludes singleton clusters. OntoNotes is partitioned into genres: newswire (nw), Sinorama magazine articles (mz), broadcast news (bn), broadcast conversations (bc), web data (wb), telephone calls (tc), and the New Testament (pt).

i2b2/VA Shared-Task (i2b2). Our first target corpus is a medical notes dataset, released as part of the i2b2/VA Shared-Task and Workshop in 2011 (Uzuner et al., 2012). Adapting coreference resolution systems to clinical text would allow for the use of electronic health records in clinical decision support or general clinical research, for example (Wang et al., 2018). The dataset contains 251 train documents, 51 of which we randomly selected for development, and 173 test documents. The average length of these documents is 962.6 tokens, with the average coreference chain containing 4.48 spans. The annotation schema of the i2b2 data set differs from OntoNotes in that annotators mark singletons and only mentions specific to the medical domain (PROBLEM, TEST, TREATMENT, and PERSON).

Child Welfare Case Notes (CN). Our second target domain is a new data set of contact notes from a county-level Department of Human Services (DHS) [3]. These notes, written by caseworkers and service providers, log contact with families involved in child protective services. Because of the extremely sensitive nature of this data, this dataset has not been publicly released. However, we report results in this setting, as it reflects a direct, real-world application of coreference resolution and this work. Despite interest in using NLP to help practitioners manage information across thousands of notes (Saxena et al., 2020), notes also contain domain-specific terminology and acronyms, and no prior work has annotated coreference data in this setting. While experienced researchers or practitioners can annotate a small subset, collecting a large in-domain data set is not feasible, given the need to preserve families' privacy and for annotators to have domain expertise.

Out of an initial data set of 3.19 million contact notes, we annotated a sample of 200 notes using the same annotation scheme as i2b2, based on conversations with DHS employees about what information would be useful for them to obtain from notes. We adapt the set of entity types defined in the i2b2 annotation scheme to the child protective setting by modifying the definitions (Appendix A, Table 8). To estimate agreement, 20 notes were annotated by both annotators, achieving a Krippendorf's referential alpha of 70.5 and a Krippendorf's mention detection alpha of 61.5 (Appendix A, Table 7). On average, documents are 320 words long, with 13.5 coreference chains of average length 4.7. We also replicated the timed annotation experiments described in §3 over a sample of 10 case notes, similarly finding that it takes 1.95X more time to annotate coreference than mention detection. We created train/dev/test splits of 100/10/90 documents, allocating a small dev set following Xia and Van Durme (2021).

We experiment with different source and target domain configurations to capture common challenges with adapting coreference systems (Table 3).
We also select these configurations to account for the influence of singletons on performance metrics.

Experimental Setup

Baseline: c2f (CL_S, CL_T). For our baseline, we assume access to coreference annotations in the target domain. We use pre-trained SpanBERT for our encoder. In each experiment, we train on the source domain with coreference annotations, optimizing only the coreference loss CL_S. Then, we continue training with CL_T on target-domain examples. We additionally experiment with an alternative baseline (high-prec. c2f CL_S, CL_T, MD_T) in which coreference annotations are reused to optimize our MD objective over the target domain. This allows for full utilization of the target-domain annotations.

Proposed: high-prec. c2f (CL_S, MD_T, MLM_T). We use the same model architecture and pre-trained encoder as the baseline, but also incorporate the joint training objective CL + MD. We optimize CL with coreference examples from the source domain (CL_S), and MD with examples from the target domain (MD_T). We report results only with MD_T paired with high-prec. c2f pruning (i.e., the threshold q = .5 imposed on the mention score s_m) as described in §4. Without the threshold, MD_T has almost no effect on overall coreference performance, likely because the space of candidate antecedents for any given mention does not shrink.

Our model uses only mentions without target-domain coreference links, while our baseline uses coreference annotations. Accordingly, we compare results for settings where there is (1) an equivalent number of annotated documents and (2) an equivalent amount of annotator time spent, estimated based on the timed annotation experiments in §3. For each transfer setting, we assume the source domain has coreference examples allowing us to optimize CL_S. In the target domain, however, we are interested in a few different settings: (1) 100% of the annotation budget is spent on coreference, (2) 100% of the annotation budget is spent on mentions, (3) the annotation budget is split between mention detection and coreference. In the first and third settings we can optimize any subset of {CL_T, MD_T, MLM_T} over the target domain, whereas CL_T cannot be optimized in the second. We train the model with several different samples of the data, where samples are selected using a random seed. We select the number of random seeds based on the subsample size (Appendix B).

Augmented Silver Mentions. To further reduce the annotation burden, we augment the set of annotated mentions over the target domain. We train a mention detector over a subset of the gold-annotated target-domain documents. Then, we use it to tag silver mentions over the remaining unlabeled documents, and use these silver mention labels in computing MD_T.

Coreference Evaluation Configuration. In addition to the most common coreference metrics MUC, B³, and CEAF_φ4, we average in the link-based metric LEA in our score. We also evaluate each model with and without singletons, since including singletons in the system output can artificially inflate coreference metrics (Kübler and Zhekova, 2011). When evaluating with singletons, we keep singletons (if they exist) in both the system and gold clusters. When evaluating without singletons, we drop singletons from both.
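The with/without-singleton conditions amount to a small filtering step before scoring; the helper below is our own sketch (the official scorers then take the filtered clusterings as input).

def drop_singletons(clusters):
    """Remove length-1 clusters from a clustering before computing
    MUC / B^3 / CEAF / LEA, mirroring the 'without singletons' setting."""
    return [cluster for cluster in clusters if len(cluster) > 1]

# Toy clusterings: each cluster is a list of (start, end) mention spans.
system = [[(0, 1)], [(3, 4), (7, 8)]]   # one singleton, one chain
gold = [[(3, 4), (7, 8)]]
system_ns = drop_singletons(system)     # singleton removed from system output
gold_ns = drop_singletons(gold)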
Results and Analysis

Table 4 reports results when transferring models trained on ON to i2b2 and models trained on i2b2 to CN, with singletons included (for completeness, Appendix A, Table 5 reports results without singletons). For both i2b2→CN and ON→i2b2, our model performs better with mention annotations than the continued-training baseline with half the coreference annotations (e.g., equivalent annotator time, since the average length of i2b2 documents is 963 words, and timed experiments in CN suggested mention annotations are ~2X faster than coreference, §5.1). Combining MLM_T with MD_T results in our best-performing model, but introducing MD_T with high-precision c2f pruning is enough to surpass the baseline. The results suggest in-domain mention annotations are more efficient for adaptation than coreference annotations.

Transfer Across Annotation Styles

ON and i2b2 have different annotation styles (§5.2), allowing us to examine how effectively mention-only annotations facilitate transfer not just across domains, but also across annotation styles. Transferring ON→i2b2 (Table 4), average F1 improves by 6 points (0.57 to 0.63) when comparing the baseline model with 50% coreference annotations against our model (i.e., equivalent annotator time).

Table 4: We report F1 for different models with singletons included in system output, varying the type and amount of target domain annotations. Each shade of gray represents a fixed amount of annotator time (e.g., 50% coreference and 100% mention annotations take an equivalent amount of time to produce). With a limited annotation budget, for both the ON→i2b2 and i2b2→CN experiments, mention annotations are a more efficient use of time, yielding performance gains over the baseline with equivalent annotator time (indicated with †). * denotes statistical significance with p-value < .05.

In Figure 2 (top), we experiment with varying the amount of training data and annotator time in this setting. With more mentions, our model's performance steadily improves, flattening out slightly after 1000 mentions. The baseline model continues to improve with more coreference examples. Where there is scarce training data (100-1000 mentions), mention annotations are more effective than coreference ones. This effect persists when we evaluate without singletons (Figure 5). The baseline likely only identifies mentions that fit into the source-domain style (e.g., PEOPLE). Because the baseline model assigns no positive weight in the coreference loss for identifying singletons, in i2b2, entities that often appear as singletons are missed opportunities to improve the baseline mention detector. With enough examples and more entities appearing in the target domain as non-singletons, however, the penalty of these missed examples is smaller, causing the baseline model's performance to approach that of our model.

Silver Mentions Improve Performance

From Figure 2, approximately 250 gold mentions are necessary for sufficient mention detection performance for silver mentions to be useful to our model. For fewer mentions, the mention detector is likely producing silver mention annotations that are too noisy. The benefit of access to additional data starts to dwindle around 3000 mentions.

Fixed Annotation Style Transfer

We additionally compare effects when transferring between domains but keeping the annotation style the same. When we transfer from i2b2 to CN, for equivalent annotator time, our model MD_T + MLM_T improves over the baseline CL_T by 14 points (.43 to .57) in Table 4. (When singletons are dropped, this effect persists: average F1 improves by 10 points, Appendix A, Table 5.) When we vary the number of mentions (Figure 2), the marginal benefit of CN mention annotations deteriorates beyond 10^4 mentions, but not as rapidly as when we transfer between annotation styles in the ON→i2b2 case.
While mentions in CN share the same roles as those in i2b2, some types of mentions (e.g., PROBLEM) are more difficult to identify. Unlike settings where we transfer between annotation styles, when the annotation style remains fixed, the performance improvement from our model increases with more target-domain data. This suggests that adapting the mention detector is especially useful when transferring within an annotation style. Given coreference annotations, we find that reusing the annotations to optimize MD_T with high-prec. c2f pruning boosts performance slightly when transferring within an annotation style. This is evident in the i2b2→CN case regardless of whether singletons are included in the output.

Figure 3 reports results for the genre-to-genre experiments within ON. For equivalent annotator time, our model achieves large performance improvements across most genres. Since our model results in significant improvements in low-resource settings when there are no singletons in the system or gold clusters, it is clear that performance gains are not dependent solely on singletons in the system output. Figure 4 shows the effect of varying the number of mentions and annotator time in settings where our model performed worse (bn → nw) and better (bn → pt) than the baseline. Regardless of transfer setting or whether singletons are excluded from the system output, our model outperforms the baseline with few mentions.

Impact of Singletons

Under the with-singleton evaluation scheme, in the ON→i2b2 case, the baseline trained with strictly more data performs worse than our model (Table 4, 0.58 vs. 0.64). Kübler and Zhekova (2011) describe how including singletons in system output causes artificial inflation of coreference metrics, based on the observation that scores are higher with singletons included in the system output. Without high-precision c2f pruning with MD_T, the baseline drops singletons. So, the gap in Figure 2 between the baseline and our model at 10^4 mentions could be attributed to artificial inflation.

Figure 4: Each subplot shows coreference performance with varied amounts of annotated target data. We report performance with singletons included in system output (left) and singletons excluded from system output (right) for two different genre-to-genre experiments: bn → pt (top) and bn → nw (bottom). Regardless of whether singletons are included, annotating mentions is more efficient for all low-resource settings.

In the without-singleton evaluation scheme (Figure 4, bottom), the artificial-inflation gap between our model and the baseline disappears with enough target examples, better reflecting our intuition that more data should yield better performance. But with fewer examples, our model still outperforms the baseline in the without-singleton evaluation scheme. In practical applications, such as identifying support for families involved in child protective services, retrieving singletons is often desired. Further, excluding singletons from the system output incentivizes high-recall mention detection, since the model is not penalized for a large space of candidate mentions in which valid mentions make up a small fraction. A larger space of possible antecedents requires more coreference examples to adapt antecedent linkers to new domains.

Related Work

Previous work has used data-augmentation and rule-based approaches to adapt coreference models to new annotation schemes with some success (Toshniwal et al., 2021; Zeldes and Zhang, 2016; Paun et al., 2022).
In many cases, adapting to new annotation schemes is not enough: performance degradation persists for out-of-domain data even under the same annotation scheme (Zhu et al., 2021), and encoders (SpanBERT) can struggle to represent domain-specific concepts well, resulting in poor mention recall (Timmapathini et al., 2021). Investigation of the popular Lee et al. (2017) architecture has found that coreference systems generally rely more on mentions than context (Lu and Ng, 2020), so they are especially susceptible to small perturbations. Relatedly, Wu and Gardner (2021) find that mention detection precision has a strong positive impact on overall coreference performance, which is consistent with findings on pre-neural systems (Moosavi and Strube, 2016b; Recasens et al., 2013) and motivates our work.

Despite challenges associated with limiting source-domain annotation schemas, with enough annotated data, coreference models can adapt to new domains. Xia and Van Durme (2021) show that continued training is effective with at least 100 target documents annotated for coreference. However, it is unclear how costly it would be to annotate so many documents: while Xia and Van Durme (2021) focus on the best way to use annotated target coreference examples, we focus on the most efficient way to spend an annotation budget. A related line of work uses active learning to select target examples and promote efficient use of annotator time (Zhao and Ng, 2014; Li et al., 2020b; Yuan et al., 2022; Miller et al., 2012). However, since these annotations require link information, there is a persistent trade-off in active learning between reading and labeling (Yuan et al., 2022). Since our method does not require link annotations for adaptation, our annotation strategy circumvents the choice between redundant labeling or reading.

Limitations

Annotation speed for mention detection and coreference depends on many variables, such as the annotation interface, the domain expertise of annotators, the annotation style, and the document length distribution. So, while our finding that coreference resolution is approximately 2X slower to annotate than mention detection held for two domains (i2b2, CN), there are many other variables that we do not experiment with. We also experiment with transfer between domains with varying semantic similarity and annotation style similarity. But our notion of annotation style is narrowly focused on the types of mentions that are annotated (i.e., singletons, domain- and application-specific mentions). Since our method is focused on mention detection, our findings may not hold for transfer to annotation styles with different notions of coreference linking (i.e., split-antecedent anaphoric reference (Yu et al., 2021)). We also focus on one common coreference architecture (Lee et al., 2018) with a SpanBERT encoder. There have been more recent architectures surpassing the performance of Lee et al. (2018) over the ON benchmark (Dobrovolskii, 2021; Kirstain et al., 2021); however, our key finding, that the mention detector component can be adapted in isolation, can still be adopted.

Ethical Concerns

We develop a corpus of child welfare notes annotated for coreference. All research in this domain was conducted with IRB approval and in accordance with a data-sharing agreement with DHS. Throughout this study, the data was stored on a secure disk-encrypted server, and access was restricted to trained members of the research team. Thus, all annotations of this data were conducted by two authors of this work.
While this work is in collaboration with the DHS, we do not view the developed coreference system as imminently deployable. Prior to considering deployment, at a minimum, a fairness audit on how our methods would reduce or exacerbate any inequity would be required. Deployment should also involve external oversight and engagement with stakeholders, including affected families.

Conclusion

Through timing experiments, new model training procedures, and detailed evaluation, we demonstrate that mention annotations are a more efficient use of annotator time than coreference annotations for adapting coreference models to new domains. Our work has the potential to expand the practical usability of coreference resolution systems and highlights the value of model architectures with components that can be optimized in isolation.

A Additional Results

For completeness, we additionally include results with singletons omitted from the system output. Table 5 reports results for both transfer settings, i2b2→CN and ON→i2b2. In Figure 5, we inspect how performance changes with more annotated data. We also report, for completeness, the difference in model performance using mention annotations and full coreference annotations in Figure 6, for transfer between OntoNotes genres with an equivalent amount of annotated data (an unequal amount of annotator time).

For our timed annotation experiment described in §3, we report more detailed annotator agreement metrics for the two annotation tasks in Table 6. We expect agreement scores for both tasks to be low, since the i2b2/VA dataset is highly technical and the annotators have no domain expertise. The increased task complexity of coreference resolution may further worsen agreement for that task relative to mention detection. We do not use this annotated data beyond timing the annotation tasks.

B Reproducibility Details

Implementation Details. For all models, we began with a pre-trained SpanBERT (base) encoder (Joshi et al., 2020) and randomly initialized parameters for the remaining mention detector and antecedent linker. We use a maximum segment length of 512 with a batch size of one document, similar to Lee et al. (2018). We first train the model with a coreference objective over the source domain (CL_S), and then we train over the target domain with some subset of our objectives CL_T, MD_T, MLM_T. We do not weight the auxiliary objectives, taking the raw sum over losses as the overall loss. When we train one objective over both the source and target domain (i.e., CL_S, CL_T), we interleave examples from each domain. For the CL objective, initial experiments indicated that, for fewer than 1k target-domain mentions, our baseline model performed better if we interleaved target and source examples. So, we interleave target and source examples when there are fewer than 1k mentions from the target domain. For experiments where the number of mentions from the target domain varied, we randomly sampled documents until the number of mentions met our cap (truncating the last document if necessary).

For a given number of mentions m, we generated models for min(max(6, 15000/m), 15) random seeds. These bounds were selected based on preliminary experiments assessing deviation. We use a learning rate of 2 × 10^-5 for the encoder and 1 × 10^-4 for all other parameters. We train on the source domain for 20 epochs and on the target domain for 20 epochs or until coreference performance over the dev set degrades for two consecutive iterations. Training time for all models ranges between 80 and 120 minutes, depending on the size of the dataset. We used V100, RTX8000, and RTX6000 GPUs for training. To reproduce the results in this paper, we estimate at least 1,500 hours of GPU time. All our models contain ~134M parameters, with 110M from SpanBERT (base).
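Two of these details can be pinned down in code; the snippet below is a sketch based only on the values stated above (the surrounding model and training loop are assumed stand-ins, not the authors' code).

import torch
import torch.nn as nn

def num_random_seeds(m: int) -> int:
    """Seeds per configuration with m target-domain mentions:
    min(max(6, 15000 / m), 15), as specified above."""
    return int(min(max(6, 15000 / m), 15))

assert num_random_seeds(1000) == 15   # small samples get more seeds
assert num_random_seeds(5000) == 6    # large samples get fewer

# Separate learning rates via optimizer parameter groups:
# 2e-5 for the encoder, 1e-4 for all other parameters.
encoder = nn.Linear(768, 768)   # stands in for SpanBERT (base)
heads = nn.Linear(768, 1)       # stands in for mention detector / linker
optimizer = torch.optim.Adam([
    {"params": encoder.parameters(), "lr": 2e-5},
    {"params": heads.parameters(), "lr": 1e-4},
])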
Evaluation. We evaluate with the coreference metrics MUC, B³, CEAF_φ4, and LEA for the ON→i2b2 and i2b2→CN transfer settings, and only MUC, B³, and CEAF_φ4 for the ON genre transfer experiments, since these three are standard for OntoNotes. We report results with singletons included and excluded from the system output. Our evaluation script can be found at src/coref/metrics.py.

CN Dataset Additional Details

Table 8 lists the specific definitions for labels used by annotators in the CN dataset, as compared to the descriptions in the i2b2/VA dataset after which they were modeled.

Table 8: In addition to the PERSON entity type, which is the same in both domains, we develop a set of types for the child welfare domain that can be aligned with those from the medical domain i2b2/VA as defined in (Uzuner et al., 2012). While the development of these types was intended to facilitate transfer from the medical domain, they are not necessarily comprehensive or sufficiently granular for the downstream tasks that coreference systems may be used for in child protective settings.

Type: TEST
i2b2/VA definition: phrases that describe procedures, panels, and measures that are done to a patient or a body fluid or sample in order to discover, rule out, or find more information about a medical problem (e.g. exploratory laproratomy, the ekg, his blood pressure)
CN definition: phrases that describe steps taken to discover, rule out, or find more information about a problem (e.g. inquired why, school attendance)

Type: PROBLEM
i2b2/VA definition: phrases that contain observations made by patients or clinicians about the patient's body or mind that are thought to be abnormal or caused by a disease (e.g. new ss chest pressure, rigidity, subdued)
CN definition: phrases that contain observations made by CW or client about any client's body or mind that are thought to be abnormal or harmful (e.g. verbal altercation, recent breakdown, lack of connection, hungry)

Type: TREATMENT
i2b2/VA definition: phrases that describe procedures, interventions, and substances given to a patient in an effort to resolve a medical problem (e.g. Revascularization, nitroglycerin drip)
CN definition: phrases that describe efforts made to improve outcome for child (e.g. mobile therapy, apologized)

Responsible NLP Checklist

(The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.)

A2. Did you discuss any potential risks of your work? 10
A3. Do the abstract and introduction summarize the paper's main claims? 1
A4. Have you used AI writing assistants when working on this paper?

B2. Did you discuss the license or terms for use and/or distribution of any artifacts? 5.1
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 5.1
B4. Did you discuss the steps taken to check whether the data that was collected/used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect/anonymize it? Not applicable. While the i2b2/VA medical notes dataset is anonymized, the Child Welfare Case Notes dataset that we developed is not anonymized, since it is not publicly released.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 5.1
B6. Did you report relevant statistics like the number of examples, details of train/test/dev splits, etc. for the data that you used/created? Even for commonly-used benchmark datasets, include the number of examples in train/validation/test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 5.1

C. Did you run computational experiments? 5
C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? A
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? A
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 5
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. Left blank.

D. Did you use human annotators (e.g., crowdworkers) or research with human participants? 3
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? i2b2/VA data is protected, so we are unable to provide example screenshots.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? 3
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? 3
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? 3

Figure 2: Each subplot shows coreference performance (singletons included) with varied amounts of annotated target domain data with respect to the number of mentions (left) and the amount of annotator time (right). Note that for (CL_S, MD_T, CL_T), we vary only the amount of coreference annotations; the model accesses 100% of mention annotations. For ON→i2b2 (bottom), our model (CL_S, MD_T) has the largest improvement over the baseline (CL_S, CL_T) with limited annotations/time. For i2b2→CN (top), however, the disparity increases with more annotations.

Figure 3: The heatmap represents performance improvements from our model where singletons are excluded. Our model, SpanBERT + high-prec c2f (CL_S, MD_T), accesses 100% of mention annotations from the target domain, and the baseline, SpanBERT + c2f (CL_S, CL_T), accesses 50% of coreference examples. Annotating mentions for an equivalent amount of time is much more efficient for most ON genres.

Figure 5: Each subplot shows coreference performance (singletons excluded) when trained with different amounts of annotated target domain data. We vary the amount of annotated data with respect to the number of mentions. When transferring ON→i2b2 (bottom row), our model (CL_S, MD_T) has the largest improvement over the baseline (CL_S, CL_T) with very little training data or annotator time. For i2b2→CN (top row), however, the performance improvement increases with more annotated data.

Figure 6: The heatmap represents performance improvements from our model, SpanBERT + high-prec c2f (CL_S, MD_T), over the baseline, SpanBERT + c2f (CL_S, CL_T), where singletons are dropped from the system output.
The baseline has access to 100% of target domain coreference examples, and our model has access to 100% of mention annotations.

Table 2: Timed experiments of mention annotation as compared to full coreference annotations. Mention annotation is 2X faster over longer documents.

Table 3: Summary of source-target configurations in our experiments. We experiment with transfer between domains with common or differing annotation style, where annotation style can dictate, for example, whether or not singletons or domain-specific mentions are annotated.

Source S      Target T      OOV Rate        Anno. Style Match
i2b2          CN            32.3%           yes
ON            i2b2          20.8%           no
ON Genre i    ON Genre j    (8.1%, 47.9%)   yes

Table 4 (see caption in §6):

Model (Lee et al. (2018) + SpanBERT)    CL_T  MD_T | ON→i2b2: LEA MUC B³ CEAF_φ4 Avg. | i2b2→CN: LEA MUC B³ CEAF_φ4 Avg.
+ c2f (CL_S, CL_T)                       0%    0%   | 0.47  0.61  0.33  0.21  0.41 | 0.46  0.68  0.41  0.15  0.43
+ c2f (CL_S, CL_T) †                     25%   0%   | 0.65  0.75  0.44  0.29  0.53 | 0.49  0.70  0.42  0.16  0.44
+ high-prec. c2f (CL_S, MD_T) + Silver   0%    50%  | 0.49* 0.63* 0.74* 0.61* 0.63* | 0.42* 0.70* 0.47* 0.22* 0.45*
+ c2f (CL_S, CL_T) †                     50%   0%   | 0.70  0.79  0.46  0.32  0.57 | 0.47  0.69  0.42  0.16  0.43
+ high-prec. c2f (CL_S, CL_T, MD_T) †    50%   0%   | 0.69  0.79  0.45  0.29  0.56 | 0.52  0.72  0.47  0.21  0.48
+ c2f (CL_S, MD_T)                       0%    100% | 0.42* 0.56* 0.43  0.32  0.43 | 0.54* 0.77  0.47* 0.21* 0.49*
+ high-prec. c2f (CL_S, MD_T)            0%    100% | 0.50* 0.63* 0.74* 0.65  0.63* | 0.50  0.77* 0.52  0.35* 0.53
+ high-prec. c2f (CL_S, MD_T, MLM_T)     0%    100% | 0.50* 0.63* 0.77* 0.68* 0.64* | 0.57* 0.76* 0.58  0.38  0.57*
+ c2f (CL_S, CL_T)                       100%  0%   | 0.71  0.80  0.48  0.33  0.58 | 0.77  0.86  0.63  0.29  0.64

References

Rahul Aralikatte and Anders Søgaard. 2020. Model-based annotation of coreference. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 74-79, Marseille, France. European Language Resources Association.

David Bamman, Olivia Lewke, and Anya Mansoor. 2020. An annotated dataset of coreference in English literature. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 44-54, Marseille, France. European Language Resources Association.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Vladimir Dobrovolskii. 2021. Word-level coreference resolution. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7670-7675, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution.
In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1971-1982, Seattle, Washington, USA. Association for Computational Linguistics.

Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342-8360, Online. Association for Computational Linguistics.

Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77.

Yuval Kirstain, Ori Ram, and Omer Levy. 2021. Coreference resolution without span representations. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 14-19, Online. Association for Computational Linguistics.

Jan-Christoph Klie, Michael Bugert, Beto Boullosa, Richard Eckart de Castilho, and Iryna Gurevych. 2018. The INCEpTION platform: Machine-assisted and knowledge-oriented interactive annotation. In Proceedings of the 27th International Conference on Computational Linguistics: System Demonstrations, pages 5-9. Association for Computational Linguistics.

Sandra Kübler and Desislava Zhekova. 2011. Singletons and coreference resolution evaluation. In Proceedings of the International Conference Recent Advances in Natural Language Processing 2011, pages 261-267, Hissar, Bulgaria. Association for Computational Linguistics.

Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188-197, Copenhagen, Denmark. Association for Computational Linguistics.

Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-order coreference resolution with coarse-to-fine inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 687-692, New Orleans, Louisiana. Association for Computational Linguistics.

Maolin Li, Hiroya Takamura, and Sophia Ananiadou. 2020a. A neural model for aggregating coreference annotation in crowdsourcing. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5760-5773, Barcelona, Spain (Online). International Committee on Computational Linguistics.

Pengshuai Li, Xinsong Zhang, Weijia Jia, and Wei Zhao. 2020b. Active testing: An unbiased evaluation method for distantly supervised relation extraction. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 204-211, Online. Association for Computational Linguistics.

Jing Lu and Vincent Ng. 2020. Conundrums in entity coreference resolution: Making sense of the state of the art. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6620-6631, Online. Association for Computational Linguistics.

Timothy Miller, Dmitriy Dligach, and Guergana Savova. 2012. Active learning for coreference resolution.
In BioNLP: Proceedings of the 2012 Workshop on Biomedical Natural Language Processing, pages 73-81, Montréal, Canada. Association for Computational Linguistics.

Nafise Sadat Moosavi and Michael Strube. 2016a. Search space pruning: A simple solution for better coreference resolvers. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1005-1011, San Diego, California. Association for Computational Linguistics.

Nafise Sadat Moosavi and Michael Strube. 2016b. Which coreference evaluation metric do you trust? A proposal for a link-based entity aware metric. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 632-642, Berlin, Germany. Association for Computational Linguistics.

Silviu Paun, Juntao Yu, Nafise Sadat Moosavi, and Massimo Poesio. 2022. Scoring coreference chains with split-antecedent anaphors.

Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In Joint Conference on EMNLP and CoNLL - Shared Task, pages 1-40, Jeju Island, Korea. Association for Computational Linguistics.

Marta Recasens, Marie-Catherine de Marneffe, and Christopher Potts. 2013. The life and death of discourse entities: Identifying singleton mentions. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 627-633, Atlanta, Georgia. Association for Computational Linguistics.

Mrinmaya Sachan, Eduard Hovy, and Eric P. Xing. 2015. An active learning approach to coreference resolution. In Twenty-Fourth International Joint Conference on Artificial Intelligence.

Devansh Saxena, Karla Badillo-Urquiola, Pamela J. Wisniewski, and Shion Guha. 2020. A human-centered review of algorithms used within the US child welfare system. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pages 1-15.

Hariprasad Timmapathini, Anmol Nayak, Sarathchandra Mandadi, Siva Sangada, Vaibhav Kesri, Karthikeyan Ponnalagu, and Vijendran Gopalan Venkoparao. 2021. Probing the SpanBERT architecture to interpret scientific domain adaptation challenges for coreference resolution. In Proceedings of the Workshop on Scientific Document Understanding co-located with the 35th AAAI Conference on Artificial Intelligence.

Shubham Toshniwal, Patrick Xia, Sam Wiseman, Karen Livescu, and Kevin Gimpel. 2021. On generalization in coreference resolution. In Proceedings of the Fourth Workshop on Computational Models of Reference, Anaphora and Coreference, pages 111-120, Punta Cana, Dominican Republic. Association for Computational Linguistics.

Ankith Uppunda, Susan Cochran, Jacob Foster, Alina Arseniev-Koehler, Vickie Mays, and Kai-Wei Chang. 2021. Adapting coreference resolution for processing violent death narratives. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4553-4559, Online. Association for Computational Linguistics.

Ozlem Uzuner, Andreea Bodnari, Shuying Shen, Tyler Forbush, John Pestian, and Brett R. South. 2012. Evaluating the state of the art in coreference resolution for electronic medical records. Journal of the American Medical Informatics Association, 19(5):786-791.
Yanshan Wang, Liwei Wang, Majid Rastegar-Mojarad, Sungrim Moon, Feichen Shen, Naveed Afzal, Sijia Liu, Yuqun Zeng, Saeed Mehrabi, Sunghwan Sohn, et al. 2018. Clinical information extraction applications: a literature review. Journal of Biomedical Informatics, 77:34-49.

Zhaofeng Wu and Matt Gardner. 2021. Understanding mention detector-linker interaction in neural coreference resolution. In Proceedings of the Fourth Workshop on Computational Models of Reference, Anaphora and Coreference, pages 150-157, Punta Cana, Dominican Republic. Association for Computational Linguistics.

Patrick Xia and Benjamin Van Durme. 2021. Moving on from OntoNotes: Coreference resolution model transfer. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5241-5256, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Juntao Yu, Nafise Sadat Moosavi, Silviu Paun, and Massimo Poesio. 2021. Stay together: A system for single and split-antecedent anaphora resolution. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4174-4184, Online. Association for Computational Linguistics.

Michelle Yuan, Patrick Xia, Chandler May, Benjamin Van Durme, and Jordan Boyd-Graber. 2022. Adapting coreference resolution models through active learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7533-7549, Dublin, Ireland. Association for Computational Linguistics.

Amir Zeldes. 2022. Opinion piece: Can we fix the scope for coreference? Problems and solutions for benchmarks beyond OntoNotes. Dialogue & Discourse, 13(1):41-62.

Amir Zeldes and Shuo Zhang. 2016. When annotation schemes change rules help: A configurable approach to coreference resolution beyond OntoNotes. In Proceedings of the Workshop on Coreference Resolution Beyond OntoNotes (CORBON 2016), pages 92-101, San Diego, California. Association for Computational Linguistics.

Shanheng Zhao and Hwee Tou Ng. 2014. Domain adaptation with active learning for coreference resolution. In Proceedings of the 5th International Workshop on Health Text Mining and Information Analysis (Louhi), pages 21-29, Gothenburg, Sweden. Association for Computational Linguistics.

Yilun Zhu, Sameer Pradhan, and Amir Zeldes. 2021. Anatomy of OntoGUM - Adapting GUM to the OntoNotes scheme to evaluate robustness of SOTA coreference algorithms. In Proceedings of the Fourth Workshop on Computational Models of Reference, Anaphora and Coreference, pages 141-149, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Table 5 (results with singletons excluded):

Model (Lee et al. (2018) + SpanBERT)    CL_T  MD_T | ON→i2b2: LEA MUC B³ CEAF_φ4 Avg. | i2b2→CN: LEA MUC B³ CEAF_φ4 Avg.
+ c2f (CL_S, CL_T)                       0%    0%   | 0.47  0.61  0.49  0.24  0.45 | 0.46  0.68  0.49  0.38  0.50
+ c2f (CL_S, CL_T) †                     25%   0%   | 0.65  0.75* 0.68* 0.50  0.65* | 0.49  0.70  0.51  0.41  0.53
+ high-prec. c2f (CL_S, MD_T) + Silver   0%    50%  | 0.49  0.63  0.50  0.15  0.44 | 0.42  0.70  0.44  0.23* 0.45
+ c2f (CL_S, CL_T) †                     50%   0%   | 0.70  0.79  0.72  0.57  0.70 | 0.47  0.69  0.50  0.40  0.51
+ high-prec. c2f (CL_S, CL_T, MD_T) †    50%   0%   | 0.69  0.79  0.72  0.57  0.69 | 0.52  0.72  0.55  0.45  0.56
+ c2f (CL_S, MD_T)                       0%    100% | 0.42* 0.56  0.44  0.18  0.40* | 0.54  0.77* 0.56  0.45  0.58
+ high-prec. c2f (CL_S, MD_T)            0%    100% | 0.50  0.63  0.53* 0.32* 0.49 | 0.50  0.77  0.52  0.42  0.55
+ high-prec. c2f (CL_S, MD_T, MLM_T)     0%    100% | 0.50  0.63  0.51  0.22  0.47 | 0.57  0.76* 0.60* 0.49* 0.61*
+ c2f (CL_S, CL_T)                       100%  0%   | 0.71  0.80  0.74  0.61  0.71 | 0.77  0.86  0.78  0.71  0.78

Table 5: We report F1 for different models with singletons excluded from system output, varying the type and amount of target domain annotations. Each shade of gray represents a fixed amount of annotator time (e.g., 50% coreference and 100% mention annotations take an equivalent amount of time to produce). When transferring annotation styles (ON→i2b2), coreference annotations are a more efficient use of time, while when transferring within an annotation style (i2b2→CN), mention annotations are more efficient, consistent with results where singletons are included in the system output. Baselines are indicated with † and * denotes statistical significance with p-value < .05.

Table 6 (annotation agreement for the timed experiments):

Timed Annotation Experiment - Mention Detection Agreement
Agreement Metric       Non-expert Annotators   Domain-expert Annotators
Krippendorf's alpha    0.405                   -
Average Precision      0.702                   -
Average Recall         0.437                   -
Average F1             0.527                   -
IAA                    0.691                   0.97

Timed Annotation Experiment - Coreference Agreement
Agreement Metric       Non-expert Annotators   Domain-expert Annotators
Krippendorf's alpha    0.371                   -
Average Precision      0.275                   -
Average Recall         0.511                   -
Average F1             0.342                   -
IAA                    0.368                   0.73

Table 6: Annotation agreement metrics for timed experiments of mention detection and coreference resolution. Inter-Annotator Agreement (IAA) refers to a metric defined in (Uzuner et al., 2012). For coreference, precision, recall, and F1 are averaged over the standard metrics defined in §B.

Table 7 reports measures of inter-annotator agreement for the CN dataset, compared to the agreement reported for coreference annotations in OntoNotes.

CN Annotation Agreement
Agreement Metric           Non-expert Annotators   OntoNotes
MUC                        72.0                    68.4
CEAF_φ4                    40.5                    64.4
CEAF_m                     63.4                    48.0
B³                         57.8                    75.0
Krippendorf's MD alpha     60.5                    61.9
Krippendorf's ref. alpha   70.5                    -

Table 7: Annotation agreement metrics for the CN dataset computed over a random sample of 20 documents. We achieve agreement on par with OntoNotes (Pradhan et al., 2012).

Footnotes:
1. Code is available at https://github.com/nupoorgandhi/data-eff-coref
2. Annotators were compensated $15/hr and applied for and received permission to access the protected i2b2/VA data.
3. Upon the request of the department, we do not report the name of the county in order to preserve anonymity.

Acknowledgements

Thanks to Yulia Tsvetkov, Alex Chouldechova, Amanda Coston, David Steier, and the anonymous Department of Human Services for valuable feedback on this work. This work is supported by the Block Center for Technology and Innovation, and A.F. is supported by a Google PhD Fellowship.
KRAUTS: A German Temporally Annotated News Corpus
In recent years, temporal tagging, i.e., the extraction and normalization of temporal expressions, has become a vibrant research area. Several tools have been made available, and new strategies have been developed. Due to domain-specific challenges, evaluations of new methods should be performed on diverse text types. Despite significant efforts towards multilinguality in the context of temporal tagging, for all languages except English, annotated corpora exist only for a single domain. In the case of German, for example, only a narrative-style corpus has been manually annotated so far, thus no evaluations of German temporal tagging performance on news articles can be made. In this paper, we present KRAUTS, a new German temporally annotated corpus containing two subsets of news documents: articles from the daily newspaper DOLOMITEN and from the weekly newspaper DIE ZEIT. Overall, the corpus contains 192 documents with 1,140 annotated temporal expressions, and has been made publicly available to further boost research in temporal tagging.
Jannik Strötgen, Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, Germany
Anne-Lyse Minard, Fondazione Bruno Kessler, Trento, Italy / Univ Rennes, CNRS, IRISA, Inria, Rennes, France
Lukas Lange (llange@mpi-inf.mpg.de), Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, Germany
Manuela Speranza, Fondazione Bruno Kessler, Trento, Italy
Bernardo Magnini (magnini@fbk.eu), Fondazione Bruno Kessler, Trento, Italy

Keywords: temporal tagging, corpus annotation, TIMEX3

Introduction

Temporal tagging - the extraction and normalization of temporal expressions from texts - is an important task towards improved natural language understanding. Once temporal expressions have been detected in a text, their semantics can be assigned to them in a standard format so that applications can exploit not only the surface forms of temporal expressions, but also their meaning. For instance, applications in event / timeline extraction (Minard et al., 2015; Cornegruta and Vlachos, 2016; Spitz and Gertz, 2016), question answering (Llorens et al., 2015) and (temporal) information retrieval (Kanhabua et al., 2015) can exploit temporal tagging output. Thus, temporal tagging has become a vibrant research area, and several new temporal taggers have been made available and new strategies have been developed. However, as was shown in previous work (Mazur and Dale, 2010; Strötgen and Gertz, 2013; Bethard et al., 2016; Tabassum et al., 2016), different types of documents pose different challenges for temporal tagging, such that domain-sensitive normalization strategies are required (Strötgen and Gertz, 2016). To judge the performance of temporal taggers and new methods, evaluations need to be performed on diverse text types, e.g., on news articles and narrative-style Wikipedia documents.

In contrast to many natural language processing tasks, there has also been some effort towards multilinguality in the context of temporal tagging, e.g., research competitions were organized not only for English but covered further languages such as Spanish and Italian (Verhagen et al., 2010; Caselli et al., 2014). Despite its importance, German has not been part of any of these challenges so far. In addition, HeidelTime is the only publicly available temporal tagger for German, and only narrative-style corpora have been manually annotated so far.
Thus, no proper evaluations of German temporal tagging performance on news articles can be carried out. Therefore, HeidelTime's German temporal tagging quality has only been evaluated on narrative texts using the WikiWarsDE and AncientTimes corpora. In this paper, we present our effort in developing KRAUTS, a new temporally annotated corpus in German containing two subsets of news documents: articles from the daily newspaper DOLOMITEN and from the weekly newspaper DIE ZEIT. For annotating temporal expressions in the corpus, we developed annotation guidelines for German temporal tagging by using the guidelines defined for Italian (Caselli and Sprugnoli, 2015) as a starting point. Overall, the corpus contains 192 documents with 1,140 annotated temporal expressions, and the corpus as well as the annotation guidelines have been made publicly available to further boost research in temporal tagging. 1
Related Work
The task of temporal processing has gained interest in recent years, in particular thanks to the TempEval tasks at SemEval (Verhagen et al., 2007; Verhagen et al., 2010; UzZaman et al., 2013; Llorens et al., 2015; Bethard et al., 2016). Temporal tagging is a subtask of temporal processing and consists of the identification of temporal expressions in texts and their normalization to some standard format. Strötgen and Gertz (2016) present a complete overview of the task as well as a survey of the resources, tools, etc.; they focus on the description of domain-sensitive temporal tagging and multilingual taggers. The annotation of temporal expressions follows in most cases the TimeML annotation guidelines (Pustejovsky et al., 2003), developed first for English. They have then been adapted to other languages such as Italian (Caselli and Sprugnoli, 2015), Spanish (Saurí et al., 2009) and French (Bittar, 2010). However, until now, no adaptation of the guidelines to German has been carried out. The two corpora of narratives in German, AncientTimes (Strötgen et al., 2014) and WikiWarsDE (Strötgen and Gertz, 2011), have been manually annotated, but following the English TimeML guidelines without further specifying language-specific adaptations. WikiWarsDE is the German counterpart of the English WikiWars corpus (Mazur and Dale, 2010), and AncientTimes is a small multilingual corpus containing documents about history. Driven by the above-mentioned shared tasks, many temporal taggers have been developed. Some of these can process several languages, such as TIPSem (Llorens et al., 2010) for English and Spanish, TimePro (Mirza and Minard, 2014) (a module of TextPro 2) for English, Italian and French, and HeidelTime (Strötgen and Gertz, 2013) for 13 languages, including German, as well as its automatic extension as a baseline temporal tagger for more than 200 languages (Strötgen and Gertz, 2015). Strötgen et al. (2014) performed an evaluation of HeidelTime on two German corpora of narratives: WikiWarsDE and AncientTimes. They reported F1-scores of 87.7 and 78.0 for strict match, and value F1-scores 3 of 80.4 and 82.2 on WikiWarsDE and AncientTimes, respectively. Our work consists of defining TimeML guidelines for German and annotating a corpus following these guidelines. Using the newly annotated corpus, we report evaluation results for HeidelTime, which to the best of our knowledge is the only temporal tagger available for German.
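The strict-match, relaxed-match and value F1-scores used in these evaluations can be computed in a few lines of code. The following Python sketch is our own illustration, not the official TempEval-3 scorer (which also handles attribute-level details): strict match requires identical spans, relaxed match only requires overlap, and the value score additionally requires the normalized values to agree.

```python
# Minimal sketch of TempEval-style span matching (illustrative only).
# A temporal expression is a tuple (start_offset, end_offset, normalized_value).

def f1(precision, recall):
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)

def evaluate(gold, system):
    """gold, system: lists of (start, end, value) tuples for one document."""
    strict = sum(1 for s in system if any(s[:2] == g[:2] for g in gold))
    # Relaxed match: system and gold spans merely overlap.
    relaxed = sum(1 for s in system
                  if any(s[0] < g[1] and g[0] < s[1] for g in gold))
    # Value score: relaxed span match plus identical normalized value.
    value = sum(1 for s in system
                if any(s[0] < g[1] and g[0] < s[1] and s[2] == g[2] for g in gold))
    scores = {}
    for name, hits in (("strict", strict), ("relaxed", relaxed), ("value", value)):
        p = hits / len(system) if system else 0.0
        r = hits / len(gold) if gold else 0.0
        scores[name] = f1(p, r)
    return scores

gold = [(10, 21, "2017-12-17"), (40, 45, "P1M")]
system = [(10, 21, "2017-12-17"), (39, 45, "P2M")]
print(evaluate(gold, system))  # strict 0.5, relaxed 1.0, value 0.5
```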
Corpus Description
KRAUTS (Korpus of newspapeR Articles with Underlined Temporal expressionS) consists of two subsets: articles of the daily, regional newspaper DOLOMITEN and articles of the nationwide weekly newspaper DIE ZEIT. The corpus is composed of 192 documents with a total of 75,678 tokens. Table 1 contains some statistics about both subsets. Details about annotated temporal expressions will be given in Section 5.
Dolomiten
The DOLOMITEN subset consists of 142 articles published between 2009 and 2016. DOLOMITEN is a local newspaper from South Tyrol (Italy) written in the local variant of German. Therefore, the articles contain words and phrases which are not used in High German, including the temporal expression heuer, which translates to "this year". Two students, supervised by two expert annotators, performed the manual annotation of temporal expressions. 100 DOLOMITEN articles were annotated starting from raw text, while the remaining 42 were first pre-annotated with the HeidelTime tool and then checked and corrected manually in order to speed up the annotation process.
Die Zeit
In the context of a Bachelor's thesis on a time-centric analysis of German news articles (Lange, 2017), 50 documents of the German weekly newspaper DIE ZEIT were manually annotated by two annotators, but without first adapting the English annotation guidelines to German in a concise way. This resulted in several discussions about non-uniform annotations, and it was concluded that proper annotation guidelines for German are required to achieve high-quality manual annotations. In the context of the collaboration between the Fondazione Bruno Kessler and the Max Planck Institute for Informatics, the 50 articles have been re-annotated following the newly developed annotation guidelines for German (cf. Section 4). Compared to the documents in the DOLOMITEN part of the KRAUTS corpus, the DIE ZEIT articles are very long (an average of 885 tokens vs. 221 per article, respectively). In addition, as they are sometimes rather non-standard news articles, annotating temporal expressions in these documents is probably more challenging, even for humans.
German-specific Guidelines
As German presents some language-specific phenomena which have to be taken into account when performing any annotation task, it is not possible to apply the English temporal annotation guidelines to German in a straightforward way. The adaptations mainly affect the extent of temporal expressions. In particular, German has compounds which can contain temporal expressions (e.g., Diskussionsabende "evenings of discussion") as well as contractions of prepositions and articles (e.g., im: in + dem "in the"). It is widely accepted that annotations of temporal expressions should always start and end at a token boundary, whereas the specific morphology of German would lead to annotating a subpart of a token in the case of compounds and contractions. This illustrates that the English annotation guidelines cannot be directly applied, as these state that articles, when present, are part of temporal expressions while prepositions are not. In order to develop the guidelines needed for the annotation of temporal expressions in German, we selected the It-TimeML guidelines (Caselli and Sprugnoli, 2015) as a reference.
The choice was motivated not only by the fact that these guidelines are very well-defined and detailed, but also by the fact that in Italian, as in German, it is possible to contract articles and prepositions. Thus, the Italian guidelines are a more natural choice than the English ones when adapting annotation guidelines to German. The new guidelines we produced are summarized in the document Examples and Guidelines for Annotation of Temporal Expressions (<TIMEX3>) in German, which is an annex to the It-TimeML guidelines. It is available for download on the It-TimeML website 4 and linked from the KRAUTS website. The annex contains the extensions needed to adapt the It-TimeML guidelines to the specific morpho-syntactic features of German, as well as many annotated German examples that illustrate the application of the relevant It-TimeML guidelines to the German language. The numbering of the examples in the annex is the same as in the It-TimeML guidelines.
Compounds
Compound words, lexemes that consist of more than one lexical element, are very frequent in German. For example, Werktag is composed of Werk "work" and Tag "day" and means "working day". As this example shows, compounds can contain lexical elements with temporal meaning. According to our guidelines, a compound containing a temporal trigger has to be annotated if the syntactic head of the compound is a temporal trigger. If the syntactic head is not a temporal trigger, on the other hand, the compound should not be annotated, even if it contains a temporal trigger (a short code sketch of this rule is given at the end of this section). For example, Diskussionsabende in (1) is annotated because the syntactic head -abende "evenings" is a temporal trigger, whereas Monatsblatt in (2) is not annotated (its syntactic head is -blatt "leaflet", which is not a temporal trigger).
(1) Weiters werden im Jugendtreff <TIMEX3>zwei Diskussionsabende</TIMEX3> veranstaltet. [Furthermore, two evenings of discussion will be organized in the youth center.]
(2) Jedes Monatsblatt behandelt ein eigenes Thema. [Each monthly leaflet addresses a specific subject.]
Prepositions, Articles, and Contractions
Following the general TimeML rule, articles (whatever their case) are included in the extent of temporal expressions (3), while prepositions are excluded (4); as for contractions of prepositions and definite articles, we adopted the Italian guidelines, so they are not included in the extent of the temporal expression (5). This decision leaves open the possibility of marking contractions in a future step, as they often include prepositions used as indicators of temporal relations; according to the TimeML framework, these are to be marked as SIGNAL if the goal is the full task of temporal annotation and not just temporal tagging. Besides the extent, the annotation covers the TIMEX3 attributes type and value, together with the optional attributes freq (frequency of a SET), quant (quantifier of a SET) and mod (temporal modifier).
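To make the head-based compound rule above concrete, here is a small illustrative Python sketch (entirely our own; `split_compound` and the `TEMPORAL_TRIGGERS` lexicon are hypothetical stand-ins for a real morphological analyzer and trigger list). It exploits the fact that German compounds are head-final, so the decisive constituent is the last one.

```python
# Illustrative sketch of the head-based compound rule (hypothetical lexicon
# and splitter; a real system would use a morphological analyzer).
TEMPORAL_TRIGGERS = {"tag", "abend", "woche", "monat", "jahr"}

def split_compound(word):
    """Hypothetical stand-in returning the compound's constituents."""
    examples = {
        "Werktag": ["werk", "tag"],
        "Diskussionsabende": ["diskussion", "abende"],
        "Monatsblatt": ["monat", "blatt"],
    }
    return examples.get(word, [word.lower()])

def is_annotatable(word):
    head = split_compound(word)[-1]         # German compounds are head-final
    candidates = {head, head.rstrip("en")}  # crude plural normalization
    return any(c in TEMPORAL_TRIGGERS for c in candidates)

for w in ["Werktag", "Diskussionsabende", "Monatsblatt"]:
    print(w, "->", "annotate" if is_annotatable(w) else "do not annotate")
# Werktag -> annotate, Diskussionsabende -> annotate, Monatsblatt -> do not annotate
```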
According to the TimeML annotation guidelines, TIMEX3 tags with no extent (i.e., empty TIMEX3 tags) are introduced, for example, to deal with unspecified time points, which are sometimes needed to anchor durations. Nevertheless, in most prior work, empty TIMEX3 tags have not been used, neither in annotated corpora nor by TIMEX3-compliant temporal taggers. However, in order to represent durations in a better way, empty TIMEX3 tags should be annotated. We thus followed the organizers of the Italian temporal tagging challenge EVENTI (Caselli et al., 2014) and the developers of the MEANTIME corpus (Minard et al., 2016), who were, to the best of our knowledge, the first and so far only researchers to have annotated empty TIMEX3 tags in documents resulting in publicly available corpora (the EVENTI corpus for Italian, and the MEANTIME corpus, which contains temporally annotated news articles in English, Dutch, Italian, and Spanish). An example of an empty TIMEX3 tag is given in (6): the duration vor einem Monat "one month ago" (vor is outside of the TIMEX3 tag as it is a preposition) is annotated in the text, and an empty TIMEX3 tag of type DATE is added which represents the date one month before the document creation time (DCT), anchoring the duration.
(6) (DCT: 2018-01-17, t0) ... vor <TIMEX3 tid="t1" type="DURATION" value="P1M" beginPoint="t2" endPoint="t0">einem Monat</TIMEX3> ... <TIMEX3 tid="t2" type="DATE" value="2017-12-17" anchorTimeID="t0"/> [... one month ago ...]
In Table 2, we provide information about the distribution of the different types of temporal expressions in the corpus. In total, KRAUTS contains 1,140 (text-consuming) temporal expressions and 71 empty TIMEX3 tags. 64% of the temporal expressions are of type DATE. The DOLOMITEN subset contains 587 temporal expressions, a large proportion of which are dates. We observe a rather high number of temporal expressions of type TIME given the text type (newspaper articles); this can be explained by the presence of local event announcements in the DOLOMITEN newspaper. The DIE ZEIT subset contains 553 temporal expressions, with a majority of dates. Compared to the DOLOMITEN subset, it contains few time expressions and many duration and set expressions. The rather low number of time expressions is probably due to the fact that DIE ZEIT is a weekly newspaper, so such fine-granular expressions are less important. Duration and set expressions often occur in articles belonging to categories (such as "travel") that are not typical news categories. This also shows that the DIE ZEIT subset contains very diverse articles, and some of the documents can be considered rather atypical news articles compared to those of a daily newspaper, which increases the difficulty of temporal tagging for the DIE ZEIT subset of the KRAUTS corpus.
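The anchor date in example (6) follows from simple calendar arithmetic on the DCT. A minimal Python sketch of generating such an empty anchoring TIMEX3 tag is shown below (our own illustration; it assumes the duration has already been recognized and normalized to an ISO-8601 value such as P1M, and that its past-pointing direction, German "vor ...", is known; dateutil's relativedelta handles the month arithmetic).

```python
# Sketch: derive an empty TIMEX3 anchor tag for a past-pointing duration,
# as in example (6). Handles only simple durations like P1M, P2W, P3D.
from datetime import date
from dateutil.relativedelta import relativedelta
import re

def anchor_duration(dct: date, value: str, tid: str, anchor_id: str = "t0") -> str:
    m = re.fullmatch(r"P(\d+)([YMWD])", value)
    if not m:
        raise ValueError(f"unsupported duration value: {value}")
    n, unit = int(m.group(1)), m.group(2)
    delta = {"Y": relativedelta(years=n), "M": relativedelta(months=n),
             "W": relativedelta(weeks=n), "D": relativedelta(days=n)}[unit]
    anchor = dct - delta  # "vor <duration>" points into the past
    return (f'<TIMEX3 tid="{tid}" type="DATE" value="{anchor.isoformat()}" '
            f'anchorTimeID="{anchor_id}"/>')

print(anchor_duration(date(2018, 1, 17), "P1M", "t2"))
# <TIMEX3 tid="t2" type="DATE" value="2017-12-17" anchorTimeID="t0"/>
```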
Evaluating HeidelTime on KRAUTS
In Table 3, we present the evaluation of HeidelTime, performed with the TempEval-3 scorer 5, on KRAUTS. 6 We performed the evaluation on three sections of the corpus separately: DOLOMITEN-42 (the subpart of the DOLOMITEN articles pre-annotated with HeidelTime and revised manually), DOLOMITEN-100 (the DOLOMITEN articles annotated manually starting from raw text), and DIE ZEIT. For comparison, the best system for relaxed matching at TempEval-3 on English news documents, SUTime, obtained an F1-score of 90.32 on relaxed match, 79.57 on strict match and 67.38 on value, and the overall best system, HeidelTime, obtained an F1-score of 90.30 on relaxed match, 81.34 on strict match, and 77.61 on value. The results obtained on DOLOMITEN-42 are higher than those obtained on the other two sections; this can be explained by the fact that the DOLOMITEN-42 articles had been pre-annotated with HeidelTime, so the final annotation (after manual revision) might still be slightly biased. The results on the DIE ZEIT articles are the lowest, probably because the articles are very long and thus have a more complex temporal discourse structure. In addition, some articles are written in a narrative rather than news-style fashion, due to the characteristics of the weekly newspaper in general, which led to incorrect normalizations.
In Table 4, we give the detailed results obtained for each type of temporal expression. We can observe that the best results are obtained for temporal expressions of type DATE and DURATION. The results for type SET are low, but it should be noted that the corpus contains very few of them, so that a few false positives and false negatives lower the score significantly. We also analyzed the annotations made by HeidelTime on DOLOMITEN-42. We counted three false positives: the age of a person (which is not to be marked as a temporal expression), a four-digit number that was not a year, and an occurrence of "Christmas" which did not refer to Christmas as a time period but, more generally, as a subject. As false negatives, we found many expressions where the time of day was not an hour on the dot (e.g., "um 20.30 Uhr" [at 20:30]), as well as expressions with the ordinal number of the week (e.g., "der dritten Woche" [the third week]). Introducing a few new rules will prevent the tool from leaving out these temporal expressions in the future.
Conclusions
In this paper, we defined specific TimeML guidelines for German starting from It-TimeML, the Italian TimeML guidelines. Following these new guidelines, we annotated a corpus of newspaper articles: the KRAUTS corpus. It is the first news corpus for temporal tagging in German. It consists of 192 articles from a daily, regional newspaper and a weekly newspaper, with 1,140 annotated temporal expressions. As a benchmark for the evaluation of automatic systems, we have exploited KRAUTS to evaluate HeidelTime, the temporal tagger for German, which had so far only been evaluated against narrative-style corpora for German. On two of the three subparts of the corpus (DOLOMITEN-100 and DIE ZEIT), it obtained F1-scores of around 70 and 80, respectively, for strict and relaxed match. KRAUTS contains different kinds of news articles, which differ in document length and in the proportion of each type of TIMEX3. This alone is not enough to develop and evaluate a generic temporal tagger; for German, narrative-style temporally annotated documents are also available. It will now be interesting to also annotate some colloquial texts, such as tweets or emails.
Acknowledgments
We thank Sara Baino and Martina Coser for their contribution, which consisted of manually annotating the DOLOMITEN articles. This work has been partially funded by the EUCLIP RES project under the FESR Program of the Autonomous Province of Bolzano - South Tyrol.

Table 1: Statistics about the KRAUTS corpus.
              DOLOMITEN   DIE ZEIT   KRAUTS
# documents         142         50      192
# tokens         31,422     44,256   75,678
tokens/doc          221        885      394

Table 2: Annotation statistics (in the first part of the table we give the number of text-consuming TIMEX3). [Table body not preserved in this extraction; the totals are given in Section 5.]

Table 3: HeidelTime evaluation results (in terms of F1-score) on the three subsets of KRAUTS. [Table body not preserved in this extraction.]

Table 4: HeidelTime evaluation results (relaxed match in terms of F-measure) on the three subsets of KRAUTS.
                 DATE   TIME   DURAT.   SET
DOLOMITEN-42     84.8   77.1     87.5  50.0
DOLOMITEN-100    83.3   51.7     66.1  54.6
DIE ZEIT         78.8   76.2     74.5  37.5

Footnotes
1 http://github.com/JannikStroetgen/KRAUTS
2 http://textpro.fbk.eu
3 "value F1-score" consists of evaluating both the recognition of the temporal expressions with relaxed match and the correctness of the normalization value.
4 https://sites.google.com/site/ittimeml/documents
5 www.cs.york.ac.uk/semeval-2013/task1/
6 "Strict" and "relaxed" refer to the evaluation of the extent of the temporal expressions; "type" and "value" represent the evaluation of the respective attributes. The F1-score is computed taking into account the recognition of the temporal expressions with relaxed match and the identification of the attribute. The TempEval-3 scorer does not evaluate the empty TIMEX3 tags, so they are not part of the presented evaluation.

Bibliographical References
Bethard, S., Savova, G., Chen, W.-T., Derczynski, L., Pustejovsky, J., and Verhagen, M. (2016). SemEval-2016 Task 12: Clinical TempEval. In Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval'16, pages 1052-1062. Association for Computational Linguistics.
Bittar, A. (2010). ISO-TimeML Annotation Guidelines for French, version 1.0. Technical report.
Caselli, T. and Sprugnoli, R. (2015). It-TimeML, TimeML Annotation Guidelines for Italian, version 1.4. Technical report.
Caselli, T., Sprugnoli, R., Speranza, M., and Monachini, M. (2014). EVENTI: EValuation of Events and Temporal INformation at Evalita 2014. In Proceedings of the Fourth International Workshop EVALITA, EVALITA 2014, pages 27-34.
Cornegruta, S. and Vlachos, A. (2016). Timeline Extraction Using Distant Supervision and Joint Inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP'16, pages 1936-1942. Association for Computational Linguistics.
Kanhabua, N., Blanco, R., and Nørvåg, K. (2015). Temporal Information Retrieval. Foundations and Trends in Information Retrieval, 9(2):91-208.
Lange, L. (2017). Time in Newspaper - A Large-scale Analysis of Temporal Expressions in News Corpora. Bachelor's thesis, Universität des Saarlandes, Max Planck Institute for Informatics, Saarland Informatics Campus.
Llorens, H., Saquete, E., and Navarro, B. (2010). TIPSem (English and Spanish): Evaluating CRFs and Semantic Roles in TempEval-2. In Proceedings of the 5th International Workshop on Semantic Evaluation, SemEval'10, pages 284-291. Association for Computational Linguistics.
Llorens, H., Chambers, N., UzZaman, N., Mostafazadeh, N., Allen, J., and Pustejovsky, J. (2015). SemEval-2015 Task 5: QA TempEval - Evaluating Temporal Information Understanding with Question Answering. In Proceedings of the 9th International Workshop on Semantic Evaluation, SemEval'15, pages 792-800. Association for Computational Linguistics.
Mazur, P. and Dale, R. (2010). WikiWars: A New Corpus for Research on Temporal Expressions. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP'10, pages 913-922. Association for Computational Linguistics.
Minard, A.-L., Speranza, M., Agirre, E., Aldabe, I., van Erp, M., Magnini, B., Rigau, G., and Urizar, R. (2015). SemEval-2015 Task 4: TimeLine: Cross-Document Event Ordering. In Proceedings of the 9th International Workshop on Semantic Evaluation, SemEval'15, pages 778-786. Association for Computational Linguistics.
Minard, A.-L., Speranza, M., Urizar, R., Altuna, B., van Erp, M., Schoen, A., and van Son, C. (2016). MEANTIME, the NewsReader Multilingual Event and Time Corpus. In Proceedings of the 10th International Conference on Language Resources and Evaluation, LREC'16, pages 4417-4422. ELRA.
Mirza, P. and Minard, A.-L. (2014). FBK-HLT-time: A Complete Italian Temporal Processing System for EVENTI-Evalita 2014. In Proceedings of the Fourth International Workshop EVALITA, EVALITA 2014.
Pustejovsky, J., Castaño, J. M., Ingria, R., Saurí, R., Gaizauskas, R. J., Setzer, A., Katz, G., and Radev, D. R. (2003). TimeML: Robust Specification of Event and Temporal Expressions in Text. In New Directions in Question Answering, pages 28-34.
Saurí, R., Saquete, E., and Pustejovsky, J. (2009). Annotating Time Expressions in Spanish: TimeML Annotation Guidelines. Technical report.
Spitz, A. and Gertz, M. (2016). Terms over LOAD: Leveraging Named Entities for Cross-Document Extraction and Summarization of Events. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR'16, pages 503-512. ACM.
Strötgen, J. and Gertz, M. (2011). WikiWarsDE: A German Corpus of Narratives Annotated with Temporal Expressions. In Proceedings of the Conference of the German Society for Computational Linguistics and Language Technology, GSCL'11, pages 129-134.
Strötgen, J. and Gertz, M. (2013). Multilingual and Cross-domain Temporal Tagging. Language Resources and Evaluation, 47(2):269-298.
Strötgen, J. and Gertz, M. (2015). A Baseline Temporal Tagger for all Languages. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP'15, pages 541-547. Association for Computational Linguistics.
Strötgen, J. and Gertz, M. (2016). Domain-sensitive Temporal Tagging. Synthesis Lectures on Human Language Technologies, Morgan & Claypool Publishers, San Rafael, CA.
Strötgen, J., Bögel, T., Zell, J., Armiti, A., Canh, T. V., and Gertz, M. (2014). Extending HeidelTime for Temporal Expressions Referring to Historic Dates. In Proceedings of the 9th International Conference on Language Resources and Evaluation, LREC'14, pages 2390-2397. ELRA.
Tabassum, J., Ritter, A., and Xu, W. (2016). TweeTime: A Minimally Supervised Method for Recognizing and Normalizing Time Expressions in Twitter. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP'16, pages 307-318. Association for Computational Linguistics.
UzZaman, N., Llorens, H., Derczynski, L., Allen, J., Verhagen, M., and Pustejovsky, J. (2013). SemEval-2013 Task 1: TempEval-3: Evaluating Time Expressions, Events, and Temporal Relations. In Proceedings of the 7th International Workshop on Semantic Evaluation, SemEval'13, pages 1-9. Association for Computational Linguistics.
Verhagen, M., Gaizauskas, R., Schilder, F., Hepple, M., Katz, G., and Pustejovsky, J. (2007). SemEval-2007 Task 15: TempEval Temporal Relation Identification. In Proceedings of the 4th International Workshop on Semantic Evaluations, SemEval'07, pages 75-80. Association for Computational Linguistics.
Verhagen, M., Saurí, R., Caselli, T., and Pustejovsky, J. (2010). SemEval-2010 Task 13: TempEval-2. In Proceedings of the 5th International Workshop on Semantic Evaluation, SemEval'10, pages 57-62. Association for Computational Linguistics.
236,937,161
[]
Hybrid Spoken Language Translation Using Sentence Splitting Based on Syntax Structure
Satoshi Kamatani, satoshi.kamatani@toshiba.co.jp; Tetsuro Chino; Kazuo Sumita (Corporate Research & Development Center, Toshiba Corporation, Komukai-Toshiba-cho, Saiwai-ku, Kawasaki 212-8582, Japan)
In this paper, we propose a hybrid spoken language translation method utilizing sentence segmentation. By partitioning the sentence using the result of syntax analysis, we can apply rule-based control to the integration of sub-translations, each produced by the method best suited to its segment. We also report a preliminary experiment on the translation quality of our prototype Japanese-to-English translation system. We confirmed that our method achieved a 13.4% advantage in NIST score over the individual RBMT method, and a 6.0% advantage over the individual EBMT method.
Introduction
There is a great deal of research on machine translation, and each line of work has its own clear advantages. There are three typical approaches: Rule-based Machine Translation (RBMT), Example-based MT (EBMT) and Statistical MT (SMT). RBMT uses many translation rules (Amano et al., 1987): parsing, transfer, generation rules, etc. Some of these rules are described abstractly to cover diverse linguistic phenomena, while others are elaborated concretely to obtain skillful translations. Abstract rules give a system robustness, but can cause a lack of fluency. EBMT is an analogical method based on human-translated examples (Nagao, 1984). Examples are either used directly as the output or partially modified to match the input sentence, so translations tend to be more natural than those of RBMT. However, since the covered domain strongly depends on the example database, robustness is often inferior to that of RBMT. SMT generates translations on the basis of statistical models derived from the analysis of bilingual corpora. It can cut development costs dramatically compared with RBMT and generates natural translations for a suitable domain; in some cases, however, well-developed RBMT outputs a more suitable translation and covers a larger domain. These strengths and weaknesses are not only inherent properties of each translation method but also complementary ones, and we propose a new hybrid translation method based on this complementarity. A characteristic of our proposal is that it divides the input sentence into optimum units based on the syntactic structure generated by RBMT and selects the best translation method for each segment. This is especially effective for translating spoken language, which often consists of broken-off, fragmentary speech. We think the most suitable approach for spoken language translation (SLT) is to pack such speech fragments into meaningful groups and to translate each group with the most suitable translator. In the following sections, using Japanese-to-English translation as a case study, we describe the method in detail. Next, we report on our evaluation experiments. Then, we present a comparison with other relevant studies and conclude the paper with a discussion of future work.
Hybrid Translation Method
EBMT is a powerful tool when an input sentence is long or idiomatic, but an example matching such an input is rarely found. If such a long input can be translated by a combination of examples, those shorter examples will be used efficiently.
Furthermore, dividing an input into short units can contribute to the computational efficiency of SMT. Building on this concept, we design our hybrid SLT method. Figure 1 shows the process flow of our hybrid SLT system. The portion wrapped with a dotted line is the basic EBMT method, and the other portion is the extended RBMT method.
1. Try EBMT for the whole sentence
2. Evaluate the confidence score
a) Parse the input sentence
b) Split the sentence based on the syntax
c) Find an optimum combination of segments
1') Try EBMT for each segment
2') Evaluate confidence scores
d) Embed partial EBMT results
e) Generate a translation of the whole sentence
In the remainder of this section, we give a detailed explanation of each splitting step.
Parsing
We regard the sentence partitioning problem as that of finding segments which satisfy the following conditions.
1) Each segment can be independently and correctly interpreted.
2) Each segment can be removed without changing the meaning of the remaining part.
3) A translation of the whole input sentence can be generated fluently, even if it is necessary to combine the partial translations of the segments.
For Japanese, we use the clause as such a segment. A Japanese clause is a small but significant unit consisting of at most one subject and one predicate. To estimate such clause structures, we utilize the method proposed by (Kamatani et al., 2006): an analysis method that estimates clause structures by treating input utterances as sequences of fragmental phrases and evaluating their validity in combination with dependency preferences. This allows all candidates to be evaluated efficiently and the globally optimum one to be chosen. According to their analysis, even spoken language can be analyzed using a grammar developed on the basis of the following two assumptions.
1) One utterance often consists of fragmental phrases.
2) When some fragments are unified as a clause, its internal structure is quite grammatical.
By using their method, we can evaluate all combinations of segments exhaustively in real time. We developed an original grammar centered on clause structure; a part of our grammar is shown in Figure 2. Figure 3 shows a parsing example produced by a GLR parser working with our grammar. For purposes of illustration, the packed shared forest structure 1 (Tomita, 1991) is somewhat simplified, and each node carries an identifier together with its syntactic category. For instance, the node marked (a) has the syntactic category "NP", denoting "Noun Phrase". In the figure, node (e) is shared by the nodes (f) and (g), and node (h) packs the local ambiguity <h1> and <h2>. This grammar includes some special treatments to classify the relations between segments, which are used to translate the relation between segments faithfully. For instance, as shown in Figure 2, we handle a parenthetic expression as a dependency relation between (subordinate) clauses.
Sentence Splitting based on the Syntax
First, we introduce the following notations and functions to formulate our sentence splitting method.
• The parser derives a syntax forest f with a set of nodes N_f for one input sentence.
• A syntax forest can be divided into individual syntax trees t ∈ f, each with its node set N_t ⊆ N_f.
• Each node has one syntactic category c ∈ C.
• Cat(n) gives the syntactic category of a node n ∈ N_f.
• Prt(n) gives the set of nodes in the partial forest structure dominated by a node n ∈ N_f.
• Trees(n) gives the set of nodes in the trees that include a node n ∈ N_f.
Our hybrid method enumerates two types of splitting candidates.
They are the "basic segment" and the "pairing segment", which are defined as follows.

Basic segment candidates: S_b = { seg | seg = Prt(n) s.t. Cat(n) ∈ C_s }

where C_s ⊆ C is a set of syntactic categories predefined for electing splitting candidates. For Japanese analysis, we use the syntactic categories "C" and "SC" shown in Figure 2. We call this type of segment a "basic segment", and the root node of the segment its "dominator node". In the following explanation, we express a basic segment by the notation "(n)", meaning a basic segment dominated by a dominator node n ∈ N_f.

[Figure 3: Splitting an input sentence. The example utterance is 私 は サイズ が 大きい ので 気に入っ た けど やめ ます (morpheme gloss: I / size / big / since / like / but / quit); the figure shows the parse nodes (a)-(i) with the packed ambiguity <h1>/<h2>, the basic segments (b), (c), (d), (e), (f), (h) as black arrows, and the pairing segments (f,b), (h,b), (f,c), (h,c), (h,e) as white arrows.]

Pairing segment candidates: S_p = { seg | seg = Trees(n_j) ∩ { Prt(n_i) − Prt(n_j) } s.t. Prt(n_i), Prt(n_j) ∈ S_b ∧ Prt(n_j) ⊂ Prt(n_i) }

When two given nodes are dominator nodes of basic segments and one node has the structure dominated by the other as its substructure, the remainder of the two syntactic structures is chosen as a segment. This structural subtraction rests on the supposition that even if a meaningfully complete segment is removed from another segment, the remainder is still understood correctly. We call a segment of this type a "pairing segment", the root node of the remaining segment its "dominator node", and the root node of the deleted substructure its "exclusive node". In the following explanation, we express a pairing segment by the notation "(n_1, n_2)", meaning a pairing segment derived from a dominator node n_1 ∈ N_f and an exclusive node n_2 ∈ N_f. 2

Figure 3 shows an example of sentence splitting. Here, we assume that the nodes marked b, c, d, e, f and h satisfy the condition for being the dominator node of a basic segment. For example, node (b) dominates the syntax structure for the partial input "サイズ が 大きい ので" [Because the size is large.], which can be regarded as a basic segment (b). In the figure, the spans of the basic segments are indicated by black arrows. All pairs of dominator nodes of the enumerated basic segments are then checked to ascertain whether they satisfy the condition for a pairing segment. In the example presented in Figure 3, 5 pairs satisfy the condition: (f, b), (h, b), (f, c), (h, c) and (h, e). For example, the segmented morphemes "私 は ... やめ ます" [I quit buying it.] are found for the pair of nodes h and e. In the figure, the spans of morphemes for each pairing segment are indicated by white arrows. Clearly, even a discontinuous sequence of input morphemes can be detected as a segment.
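The two candidate sets S_b and S_p can be enumerated directly from the definitions above. The following Python sketch is our own illustration (the `Node` structure is hypothetical, and the Trees(n_j) same-tree filter is omitted for brevity): representing Prt(n) by the set of morpheme positions a node dominates turns the structural subtraction into a set difference, which naturally yields discontinuous segments such as (h, e).

```python
# Illustrative enumeration of basic and pairing segment candidates.
# A segment is represented by the set of morpheme positions it covers.
from dataclasses import dataclass

SPLIT_CATEGORIES = {"C", "SC"}  # C_s: categories that may head a segment

@dataclass(frozen=True)
class Node:
    name: str
    cat: str                  # syntactic category, Cat(n)
    positions: frozenset      # morpheme positions dominated, ~ Prt(n)

def basic_segments(nodes):
    """S_b: subtrees whose root category is in C_s."""
    return {n: n.positions for n in nodes if n.cat in SPLIT_CATEGORIES}

def pairing_segments(nodes):
    """S_p: structural subtraction of one basic segment from another."""
    sb = basic_segments(nodes)
    return {(ni, nj): pi - pj            # remainder, may be discontinuous
            for ni, pi in sb.items()
            for nj, pj in sb.items()
            if pj < pi}                  # Prt(n_j) ⊂ Prt(n_i), proper subset

# Toy version of Figure 3: node h covers the whole utterance, node e an
# inner clause; the pairing segment (h, e) is a discontinuous remainder.
e = Node("e", "C", frozenset({2, 3, 4, 5, 6, 7}))
h = Node("h", "C", frozenset(range(10)))
print(sorted(pairing_segments([e, h])[(h, e)]))  # -> [0, 1, 8, 9]
```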
Choose Optimum Split
We introduce two additional functions to describe how the optimum split is found.
• Mrp(seg) gives the set of morphemes expressed by a segment seg. If seg ∈ S_b, Mrp(seg) gives the sequence of morphemes dominated by n; if seg ∈ S_p, Mrp(n_i, n_j) gives the relative complement of the morphemes dominated by n_i and n_j.
• Root(seg) gives the dominator node of a basic or pairing segment.
First, we classify the syntactic categories c ∈ C_s. C_sc ⊆ C_s includes categories given to nodes whose substructures can be translated independently. The other categories c ∈ C_s are classified into C_c, that is, C_c = C_s − C_sc. For Japanese analysis, we use the classification C_c = {"C"}, C_sc = {"SC"}. Therefore, each segment in S_b and S_p can be classified into one of two types:

S_c = { seg | seg ∈ S_b ∪ S_p s.t. Cat(Root(seg)) ∈ C_c }
S_sc = { seg | seg ∈ S_b ∪ S_p s.t. Cat(Root(seg)) ∈ C_sc }

We calculate a combination of segments as the optimum split with the following two strategies. The first strategy chooses as many optimum segments dominated by a node of category c ∈ C_sc as possible; this increases the chance of applying EBMT:

Split_sc = { A_p | A_p ⊆ S_sc s.t. ∀ a_i, a_j ∈ A_p (i ≠ j → a_i ∩ a_j = φ) ∧ ∃ N_t ({Root(a_i) | a_i ∈ A_p} ⊆ N_t) }

The set Split_sc represents the possible segment combinations for the whole input sentence.

Opt_sc = argmax_{A_p ∈ Split_sc} Σ_{seg_i ∈ A_p} |Mrp(seg_i)|

The second strategy chooses optimum segments so as to maintain the interpretation of the whole utterance and its translatability by RBMT:

Split_c = { A_p | A_p ⊆ S_c s.t. ∀ a_i, a_j ∈ A_p (i ≠ j → a_i ∩ a_j = φ) ∧ (⋃_{a_k ∈ Opt_sc} a_k) ∩ (⋃_{a_i ∈ A_p} a_i) = φ ∧ ∃ N_t ({Root(a_k) | a_k ∈ Opt_sc} ∪ {Root(a_i) | a_i ∈ A_p} ⊆ N_t) }

The set Split_c represents the possible segment combinations for the part of the input sentence not covered by Opt_sc.

Opt_c = argmax_{A_p ∈ Split_c} Σ_{seg_i ∈ A_p} |Mrp(seg_i)|

These two strategies extract just one combination of segments, without evaluating a confidence score for each partial EBMT result or calculating the total score of the translation for the whole utterance. Accordingly, it is not guaranteed that the chosen split generates the best translation result. An alternative is to consider all combinations of EBMT results and calculate total confidence scores, but there is a trade-off between calculation cost and translation precision, and such a compositionally produced confidence score does not always assure quality. Because these segments can be regarded as briefly evaluated by syntax, the chosen split leads at least to a local maximum. For these reasons, we use only this strategy. In the example described in Figure 3, the basic segments (b) and (c) and the pairing segment (h, e) are elected as the optimum combination, which gives the best division of the utterance (Figure 4).
Embedding partial EBMT results
The segments composing the optimum splitting Opt_sc ∪ Opt_c are individually translated by EBMT. Then, an EBMT result with a sufficient confidence score is used as a partial translation. We utilize the EBMT method proposed by (Wu et al., 2005). They improved the quality and example coverage of a translation memory system by taking advantage of sentence-level matching, subsentential matching and pattern-based MT methods. Their proposed method also includes input sentence splitting as subsentential matching, but their segments are estimated statistically and the whole sentence is translated by a single EBMT method. We use only the translation result generated by the sentence-level matching method for embedding, because we want to acquire partial translations of as high quality as possible and to evaluate the individual performances. We also compute the confidence score of EBMT using the sentence similarity defined by them and a trigram language model F(T) of the target language:

F(T) = ( Π_{i=1...|T|} p(t_i | t_{i−2}, t_{i−1}) )^{1/|T|}    (1)

where T is a target sentence and t_i is a morpheme in it.

Score = β_1 · Sim(X, Y) + β_2 · F(T)    (2)

where Sim(X, Y) is the similarity between an input utterance X and the source sentence Y of an example pair, β_1 and β_2 are weights which are given experimentally, and 0 ≤ Score ≤ 1.0. The detailed definition of Sim(X, Y) is given in their paper. EBMT results with sufficient confidence scores are elected and embedded as partial translations of the whole utterance.
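Equations (1) and (2) amount to a geometric-mean trigram fluency combined linearly with an example-similarity score. Below is a minimal Python sketch (our own illustration; the trigram model and the similarity function are toy stand-ins, since the paper defers Sim(X, Y) to Wu et al. (2005) and does not specify the language model training).

```python
# Sketch of equations (1) and (2): geometric-mean trigram fluency F(T)
# combined with an example-similarity score. trigram_prob and sim are
# hypothetical stand-ins for a trained trigram LM and Wu et al.'s Sim(X, Y).
import math

def fluency(tokens, trigram_prob):
    """Equation (1): F(T) = (prod_i p(t_i | t_{i-2}, t_{i-1}))^(1/|T|)."""
    padded = ["<s>", "<s>"] + tokens
    log_sum = 0.0
    for i in range(2, len(padded)):
        log_sum += math.log(trigram_prob(padded[i], padded[i-2], padded[i-1]))
    return math.exp(log_sum / len(tokens))  # geometric mean over |T| morphemes

def confidence(source, example_source, target_tokens, sim, trigram_prob,
               beta1=0.5, beta2=0.5):
    """Equation (2): Score = beta1 * Sim(X, Y) + beta2 * F(T)."""
    return beta1 * sim(source, example_source) + beta2 * fluency(target_tokens, trigram_prob)

# Toy stand-ins so the sketch runs end to end.
uniform_lm = lambda w, u, v: 0.1          # constant trigram probability
jaccard = lambda x, y: len(set(x) & set(y)) / len(set(x) | set(y))

score = confidence("サイズ が 大きい".split(), "サイズ が 小さい".split(),
                   ["it", "is", "big"], jaccard, uniform_lm)
print(round(score, 3))  # 0.3 with these toy stand-ins (0.5*0.5 + 0.5*0.1)
```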
Two embedding styles are defined and switched according to the segment type.
Basic segment:
1. Delete the syntactic structure that has the dominator node as its root node and depends only on the morphemes in this segment.
2. Add a special terminal node that denotes the EBMT translation result. The new terminal node is unified to the parent node of the dominator node. If that parent node already has other child nodes, the order of the nodes follows the order of the input morphemes.
Pairing segment:
1. Move the exclusive node to the dominator node as a new parent node. If the dominator node already has other nodes, the order of the nodes follows the order of the input morphemes.
2. Delete the syntactic structure that has the dominator node as its root node and depends only on the morphemes in this segment.
3. Add a special terminal node that denotes the EBMT result. The new terminal node is unified to the parent node of the dominator node. If that parent node already has other nodes, the order of the nodes follows the order of the morphemes in the input sentence.
We define these processes so that they can be regarded as one of the transfers in the RBMT system. Thus, even after embedding, the syntax tree keeps its translatability by RBMT. Now, assume that the pairing segment (h, e) gets the EBMT result "I just can't buy it" with a sufficient confidence score, and the basic segment (b) also acquires the EBMT result "It's so big for me". Figure 5 and Figure 6 (Embedding EBMT Result to the Tree) show examples of the embedding process. The order in which segments are embedded is unrestricted.
Integration and Generation
Even after embedding an example into the syntax structure, it keeps its characteristics as a syntax tree, so it is only necessary to develop new rules that handle such a partially translated structure. This is quite a natural task for RBMT.
Experiment
Experimental Settings
We evaluate three systems: the individual EBMT (Wu et al., 2005), the individual RBMT, and the hybrid SLT system. The performance of EBMT and RBMT serves as a baseline for our proposed hybrid method. Here, we use only sentence-level matched EBMT results with confidence Score ≥ 0.6 in equation (2) for the hybrid SLT system. The result of the individual EBMT is simply the translation with the highest confidence score, regardless of its absolute value. On the other hand, we allow subsentential matched translation results as results of the individual EBMT; introducing subsentential matching makes it possible to compare the splitting methods, i.e., the statistical one and ours. Because we do not have enough bilingual travel-domain corpora that we can use freely, we developed 123,819 Japanese-English translation pairs as the example base of EBMT. We prepared two types of test set aside from the examples: one is a set of balanced travel-domain sentences, and the other is a set of relatively long sentences. As mentioned in Section 2, since our hybrid method divides an input sentence into several parts, it works only for sentences that have a certain length. For evaluation, we use the NIST score (Doddington, 2002) and the BLEU score (Papineni et al., 2002). Each sentence in the test set has one translation reference.
Evaluation Result
The evaluation result for the balanced test set is shown in Table 2. Since the example database is from the same travel domain, both NIST and BLEU are higher for EBMT than for RBMT. Moreover, the hybrid SLT method scored higher than each individual translation method, which proves the effect of our method. The result of hybrid MT consists of 622 sentences translated by individual EBMT, 363 sentences by individual RBMT, and 15 sentences composed of partial EBMT and RBMT results.
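For reference, sentence-level BLEU and NIST can be computed with NLTK, assuming a version that ships nltk.translate.bleu_score and nltk.translate.nist_score (the paper's figures are corpus-level, so values will not match Tables 2 and 3; the example below reuses the reference and RBMT output of sample 2 in Table 4).

```python
# Sketch: sentence-level BLEU and NIST with NLTK (corpus-level variants
# also exist; the paper's scores are corpus-level, so numbers will differ).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from nltk.translate.nist_score import sentence_nist

reference = "please change my room because the people next door are noisy".split()
hypothesis = "the person of the next room is noisy please change the room".split()

smooth = SmoothingFunction().method1  # avoid zero scores on short sentences
print("BLEU:", sentence_bleu([reference], hypothesis, smoothing_function=smooth))
print("NIST:", sentence_nist([reference], hypothesis, n=4))
```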
The hybrid method outputs the 44 translations by EBMT, the 125 translations by RBMT, and the 31 translations composed by both EBMT and RBMT. As an input sentence gets longer, it becomes harder to find an example translation matching it, and more complex to parse it. Both EBMT and RBMT get lower scores than the balanced test set. The score of the hybrid SLT is also reduced, but it is still higher than that of the individual MT method. This result highlights the advantage of our method. Table 4 shows some translation results for the second test set, which are translated by composing the partial results of EBMT and RBMT. Three translation results are shown in the table. The row labeled "Ref." means the translation reference that was translated manually and used for the evaluation. And there are three translation results generated by each method for each source sentence (Src.). Related Work In the spoken language domain, research is often focused on determining the end of an utterance and subsentence punctuation predication, such as (Matusov et al., 2007). Such approaches are useful for cutting out a segment to parse, but they are deterministic and do not supply preference of a relation in each segment. From the viewpoint of hybrid machine translation, (Akiba et al., 2006) and (Nakamura , 2006) proposed the multi-engine translation method that evaluates target sentences individually generated by each engine, and chooses the best one. However its evaluation target is a whole sentence. We think it can be used at our estimating step of optimum split. (Bond et al., 2003) introduced a hybrid rule and example-based method for MT. Their system translates an input sentence using the most typical translation example that is similar to the input. Here, an example pair is chosen that both matches the input sentence and has a translation similar to other examples. However, the selection method still uses a whole-sentence translation as a unit. (Doi et al., 2004) proposed a sentence splitting method that generates splitting candidates based on an N-gram model and selects the best one by calculating sentence similarity between the part and an example in the database for EBMT. The splitting model is given as a probability of insertion of segment start and end. (Lavie et al., 1996) and (Langley et al., 2002) defined semantic dialog units that roughly correspond to a speech act and can be translated independently. The dialog units are estimated by acoustic cues and a pre-learned statistical model. Consequently, our method keeps totality between each segment based on a syntax given by the RBMT method. It allows finding discontinuous segmentation and translates a relation between segments appropriately. (Furuse et al., 1998) also proposed input-splitting method for translating spoken language. It can exclude ill-formed expressions from a raw input. It aimed to find the best splitting to be translated efficiently by single translation method. (Mellebeek et al., 2006) and (Rosti et al., 2007) combine translation results from multi-engine MT and find an optimum combination as a final translation result. But each chunk of translation is given for a continuous sequence in an input sentence. So, a dependency between non-continuous morphemes is sometimes missed in a final translation result. Future Work Our hybrid SLT method utilizes a Japanese clause as a unit to switch translation methods. 
A Japanese clause is small enough for its meaning to be understood, but it is a rather large structure for increasing the chance of applying EBMT. In particular, some short utterances do not fully benefit from our method, because a simple sentence usually consists of just one clause. As a next step, we are studying the use of phrases. However, it is more difficult to embed a phrase translation than a clause translation, since a phrase exhibits diverse behavior and other dependents are usually needed in order to determine its translation. Among such segments that are somewhat awkward as units, the noun phrase is comparatively easy to handle. While we are expanding the coverage of the hybrid method, we are also examining a method of calculating the confidence score for each EBMT result and for the final translation. As a first step, we have to evaluate the translation of the whole utterance using a monolingual language model and check whether the gain justifies the calculation cost.

Figure 1: Process Flow of our Hybrid MT
Figure 2: Part of our grammar
Figure 4: Optimum Splitting

Table 1: Test set specification
               Number of Sentences
Balanced                     1000
Long Sentence                 200
[The Japanese-source and English-reference columns of this table were not preserved in the extraction.]

Table 2: Evaluation for Japanese-to-English Translation of the Balanced Sentences Set
System     NIST    BLEU
EBMT       4.9372  0.2403
RBMT       4.4644  0.1885
Hybrid MT  5.0474  0.2511

Table 3: Evaluation for Japanese-to-English Translation of the Relatively Long Sentences Set
System     NIST    BLEU
EBMT       3.8798  0.1351
RBMT       3.8191  0.1252
Hybrid MT  4.1127  0.1597

Table 4: Sample Japanese-to-English translations
1) Src.   [Japanese source not preserved in the extraction]
   Ref.    If it's not much trouble, can I put my seat back a little?
   EBMT    Annoying, may I lower my seat a little?
   RBMT    As long as it is not troublesome, may I push down a little seat?
   Hybrid  If it's not too much trouble. I may push down a little seat.
2) Src.   [Japanese source not preserved in the extraction]
   Ref.    Please change my room because the people next door are noisy.
   EBMT    Many people next door room noisy, to give room. Would you me a different room?
   RBMT    The person of the next room is noisy. Please change the room.
   Hybrid  People room next door is so noisy.
3) Src.   [Japanese source not preserved in the extraction]
   Ref.    Would you wake me up at meal time, please?
   EBMT    Trains wake be dinner at the same time?
   RBMT    Would you start, if the time of a meal comes?
   Hybrid  If the time of a meal comes Excuse me, let me wake.

Footnotes
1 We simply call this structure a syntax forest in this paper.

Conclusion
In this paper, we propose a hybrid spoken language translation (SLT) method which divides an input sentence into parts and translates them by switching between RBMT and EBMT for each part. A characteristic of our method is that it splits an utterance based on its syntactic structure. We also report fundamental experimental results. In the evaluation on the balanced test set, our method achieves a 13.0% advantage in NIST score over the individual RBMT method and a 2.2% advantage over the baseline EBMT method. For long sentences, our hybrid method achieves a 6.0% advantage over the conventional EBMT and RBMT systems.
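As a quick arithmetic check, the relative advantages quoted in the Conclusion (and in the abstract) follow from the NIST scores in Tables 2 and 3:

```python
# Relative NIST improvements of the hybrid system (from Tables 2 and 3).
nist = {
    "balanced": {"EBMT": 4.9372, "RBMT": 4.4644, "Hybrid": 5.0474},
    "long":     {"EBMT": 3.8798, "RBMT": 3.8191, "Hybrid": 4.1127},
}
for test_set, scores in nist.items():
    for baseline in ("EBMT", "RBMT"):
        gain = 100 * (scores["Hybrid"] / scores[baseline] - 1)
        print(f"{test_set}: +{gain:.1f}% over {baseline}")
# balanced: +2.2% over EBMT, +13.1% over RBMT (reported as 13.0%);
# long: +6.0% over EBMT, +7.7% over RBMT.
```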
28,533,405
Semantic Frame Labeling with Target-based Neural Model
This paper explores the automatic learning of distributed representations of the target's context for semantic frame labeling with a target-based neural model. We constrain the whole sentence to be the model's input, without feature extraction from the sentence. This is different from many previous works, in which local feature extraction around the target is widely used. This constraint makes the task harder, especially with long sentences, but it also makes our model easily applicable to a range of resources and other similar tasks. We evaluate our model on several resources and obtain the state-of-the-art result on subtask 2 of SemEval 2015 task 15. Finally, we extend the task to word sense disambiguation and also achieve a strong result in comparison to state-of-the-art work.
[ 12520385, 2486369, 15026764, 11174540, 17553490, 16526134, 1957433, 5508859, 2905151 ]
Semantic Frame Labeling with Target-based Neural Model
Yukun Feng (yukunfg@gmail.com), Dong Yu (yudong@blcu.edu.cn) and Chunhua Liu (chunhualiu596@gmail.com), Beijing Language and Culture University; Jian Xu (jianxu1@mail.ustc.edu.cn), University of Science and Technology of China
Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017), Vancouver, Canada, August 3-4, 2017. Association for Computational Linguistics.
(* The corresponding author)

Introduction and Related Work
Semantic frame labeling is the task of selecting the correct frame for a given target based on its semantic scene. A target, often called a lexical unit, evokes the corresponding semantic frame. The lexical unit can be a verb, adjective or noun. Generally, a semantic frame describes how the lexical unit is used and specifies its characteristic interactions. There are many semantic frame resources, such as FrameNet (Baker et al., 1998), VerbNet (Schuler, 2006), PropBank (Palmer et al., 2005) and Corpus Pattern Analysis (CPA) frames (Hanks, 2012). However, most existing frame resources are manually created, which is time-consuming and expensive. Automatic semantic frame labeling can therefore support the development of a broader range of resources.
Early work on semantic frame labeling mainly focused on the FrameNet, PropBank and VerbNet resources, but most of it targeted only one resource and relied heavily on feature engineering (e.g., Honnibal and Hawker 2005; Abend et al. 2008). Recently, there has been work on learning CPA frames based on a new semantic frame resource, the Pattern Dictionary of English Verbs (PDEV) (El Maarouf and Baisa, 2013; El Maarouf et al., 2014). These two works also rely on features, and both are tested on only 25 verbs. Most works aim at constructing the context representation of the target with explicit rules based on some basic features, e.g., parts of speech (POS), named entities (NE) and dependency relations related to the target. Recently, some deep learning models have been applied with dependency features. Hermann et al. (2014) used the direct dependents and the dependency path to extract the context representation based on distributed word embeddings on English FrameNet. Inspired by that work, Zhao et al. (2016) used a deep feedforward neural network on Chinese FrameNet with similar features.
This is different from our goal: we want to explore an appropriate deep learning architecture that does not need complex rules to construct the context representation. Feng et al. (2016) used a multilayer perceptron (MLP) model on CPA frames without extra feature extraction, but the model is quite simple and relies on a fixed input window, which is inconvenient: the representation of the target's context is simply constructed by concatenating word embeddings, the length of the local context has to be chosen manually, and the MLP may fail to train or predict well when some key words fall outside its input window.
In this paper, we present a target-based neural model which takes the whole target-specific sentence as input and gives the semantic frame label as output. Our goal is to make the model light, without explicit rules to construct the context representation, and applicable to a range of resources. To cope with variable-length sentences under our constraint, a simple idea is to use recurrent neural networks (RNN) to process the sentences, but noise caused by irrelevant words in long sentences may hinder learning. In fact, the arguments related to the target are usually distributed near the target, because when we write or speak, we focus mainly on arguments that are in the immediate context of a core word. We therefore use two RNNs, each of which processes one part of the sentence split by the target. The model takes the target as the center, and we call it the target-based recurrent network (TRNN). TRNN itself is not especially novel, but to our knowledge no related research has focused on this topic. We will show that TRNN is quite suitable for learning the context of the target.
Figure 1: Architecture of TRNN with an example sentence whose target word is in bold.
In our model we select long short-term memory (LSTM) networks, a type of RNN designed to avoid vanishing and exploding gradients. The overall structure is illustrated in Figure 1. w_t is the t-th word in a sentence of length T, and target is the index of the target. x_t is obtained by mapping w_t to a fixed vector through well pre-trained word vectors. The model has two LSTMs, each of which processes one part of the sentence split by the target. The model can automatically learn the distributed representation of the target's context from w with little manual design.

Context Representations
An introduction to the LSTM can be found in the work of Hochreiter and Schmidhuber (1997). The parameters of the LSTM are W_{x*}, W_{h*} and b_*, where * stands for one of several internal gates: W_{x*} is the matrix between the input vector x_t and the gates, W_{h*} is the matrix between the LSTM output h_t and the gates, and b_* is the bias vector on the gates. The LSTM equations are:

i_t = σ(W_{xi} x_t + W_{hi} h_{t-1} + b_i)
f_t = σ(W_{xf} x_t + W_{hf} h_{t-1} + b_f)
c_t = f_t ⊙ c_{t-1} + i_t ⊙ tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c)
o_t = σ(W_{xo} x_t + W_{ho} h_{t-1} + b_o)
h_t = o_t ⊙ tanh(c_t)

where σ is the sigmoid function and ⊙ represents element-wise multiplication. i_t, f_t, c_t and o_t are the outputs of the input gates, forget gates, cell states and output gates, respectively. In our model, the two LSTMs share the same parameters. Finally, the target's context representation cr is the sum of the outputs of the two LSTMs:

cr = h_{target-1} + h_{target}

The dimension of cr is decided by the number of hidden units in the LSTM, which is a hyperparameter in our model, and is usually much lower than the dimension of one word vector. Here we give some intuition behind the above formulas. The gradients from the last layer flow equally into the (target-1)-th LSTM box and the target-th LSTM box, and the two flows then propagate toward both ends of the sentence. As is common in deep learning models, the gradients become less effective as the depth of the flow increases, especially when the sentence is very long; the gradients on words far from the target thus have less impact than those near the target, and as a whole, more data are usually required to learn arguments far from the target than those near it. If the real arguments are distributed near the target, this model is suitable, since its architecture is designed to take care of the local context of the target.
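The following is a minimal PyTorch sketch of the TRNN context encoder described above. It is a sketch under our own assumptions rather than the authors' reference implementation: the class name, the hyperparameter defaults, and the requirement that the target have at least one word on each side are ours.

```python
# Minimal TRNN sketch: two directional passes over the parts of the sentence
# split by the target, realized with a single LSTM so parameters are shared.
import torch
import torch.nn as nn

class TRNN(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=55, n_frames=10):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)  # shared by both passes
        self.out = nn.Linear(hidden, n_frames)

    def forward(self, word_ids, target):
        # word_ids: (1, T) word indices; target: 0-based index of the target word.
        # Assumes 0 < target < T, i.e. at least one word on each side of the target.
        x = self.emb(word_ids)
        left = x[:, :target]                    # w_1 .. w_{target-1}, left to right
        right = x[:, target:].flip(dims=[1])    # w_T .. w_{target}, right to left
        h_left, _ = self.lstm(left)
        h_right, _ = self.lstm(right)
        cr = h_left[:, -1, :] + h_right[:, -1, :]  # cr = h_{target-1} + h_{target}
        return self.out(cr)                     # logits over frame labels
```

A softmax over the returned logits gives the probability distribution used in the output layer described next.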
Output Layer
We use a softmax layer as the output layer on top of the context representation. The output layer computes a probability distribution over the semantic frame labels. During training, the cost we minimize is the negative log likelihood of the model:

L = − Σ_{m=1}^{M} log p_{t_m}

Here M is the number of training sentences, t_m is the index of the correct frame label for the m-th sentence, and p_{t_m} is the probability the model assigns to it.

Experiments
Datasets
We divide all the datasets into two types: per-target and non per-target. Per-target semantic frame resources define a different set of frame labels for each target, and we train one model for each target; in non per-target resources, different targets may share some semantic frame labels, and we train a single model for such resources. We use the Semlink project to create our datasets.1 Semlink aims to link together different lexical resources via a set of mappings. We use its corpus, which annotates FrameNet and PropBank frames for the WSJ section of the Penn Treebank. Another resource we use is PDEV,2 which is quite new and has CPA-frame-annotated examples from the British National Corpus. All the original instances are sentence-tokenized and the punctuation was removed.

Table 1: Non per-target examples. Frames are from FrameNet and the target words are in bold.
Sentence | Frame name
In Moscow they kept asking us things like why do you make 15 different corkscrews | Activity_ongoing
It said it has taken measures to continue shipments during the work stoppage. | Activity_ongoing
But the Army Corps of Engineers expects the river level to continue falling this month. | Process_continue
The oil industry's middling profits could persist through the rest of the year. | Process_continue

The details of creating the datasets are as follows (a minimal sketch of this splitting procedure is given after this subsection):
• FrameNet: non per-target type. We get FrameNet-annotated instances through Semlink. If one FrameNet frame label contains more than 300 instances, we divide its instances proportionately into 70%, 20% and 10%. We then accumulate the three parts over all frame labels to create the training, test and validation sets.
• PropBank: per-target type. The creation process is the same as for FrameNet, except that we produce training, test and validation sets for each target and the cutoff is set to 70 instead of 300.
• PDEV: same as PropBank but with the cutoff set to 100 instead of 70.

Since the performance of our model is largely determined by the training data, we chose the cutoffs above empirically to keep the number of instances of each label sufficient. Summary statistics of the above datasets are in Table 2.

Table 2: Summary statistics for the datasets. The average numbers per target are shown in parentheses for per-target resources.
 | FrameNet | PropBank | PDEV
Per-target | No | 153 targets | 407 targets
Train | 41206 | 31212 (204) | 152218 (374)
Test | 11762 | 8568 (56) | 42328 (104)
Valid. | 5871 | 4131 (27) | 20350 (50)
Frame | 33 | 443 (2.89) | 2197 (5.39)
Words/sent. | 23 | 23 | 12

1 The current version of the Semlink project has some problems in getting the right position of targets in the WSJ section of the Penn Treebank. Instead, we use the annotations of the PropBank corpus, also annotated on the WSJ section of the Penn Treebank, to index targets.
2 http://pdev.org.uk/
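As a concrete illustration of the per-label proportional splitting with frequency cutoffs described above, here is a minimal sketch. The function name and the assumption that instances arrive as (sentence, label) pairs are ours, and the authors' exact ordering of instances before splitting is not specified.

```python
# Split instances 70/20/10 per frame label, dropping labels at or below a cutoff
# (300 for FrameNet, 70 for PropBank, 100 for PDEV).
from collections import defaultdict

def split_by_label(instances, cutoff):
    by_label = defaultdict(list)
    for sentence, label in instances:
        by_label[label].append((sentence, label))
    train, test, valid = [], [], []
    for label, items in by_label.items():
        if len(items) <= cutoff:      # keep only sufficiently frequent labels
            continue
        n = len(items)
        train += items[:int(0.7 * n)]
        test += items[int(0.7 * n):int(0.9 * n)]
        valid += items[int(0.9 * n):]
    return train, test, valid
```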
Models and Training
We compare our model with the following baselines (the MF and Target-Only baselines are also sketched in code after this subsection):
• MF: the most frequent (MF) method selects the most frequent semantic frame label seen in the training instances for each instance in the test dataset. MF is actually a strong baseline for the per-target datasets, because we observed that most targets have one main frame label.
• Target-Only: for the FrameNet dataset we use the Target-Only method: if the target in the test instance has a unique frame label in the training data, we assign this frame label to the test instance; if the target has multiple frame labels in the training data, we select the most frequent one among these labels; if the target is not seen in the training data, we select the most frequent label in the whole training data. This baseline is designed for FrameNet because we observed that each frame label has a set of targets but only a few targets have multiple frame labels, so it may be easy to predict the frame label for test instances from the target alone.
• LSTM: the standard LSTM model.
• MaxEnt: the maximum entropy model. We use the Stanford CoreNLP module3 to extract features for the MaxEnt toolkit.4 All dependents related to the target, their POS tags, dependency relations, lemmas, NE tags and the target itself are extracted as features. The number of iterations for MaxEnt is decided on the validation set.

For simplicity, we set the learning rate to 1.0 for TRNN and LSTM. The number of hidden units is tuned on the validation data over the values {35, 45, 55} for the per-target resources and {80, 100, 120} for the non per-target resource. We use publicly available 300-dimensional word vectors trained with the GloVe model (Pennington et al., 2014) on Wikipedia and Gigaword. For words that do not appear in the vector model, the word vectors are set to zero. We train these models by stochastic gradient descent with minibatches; the minibatch size is 10 for the per-target resources and 50 for the non per-target resource. We keep the word vectors static, since no obvious improvement was observed when updating them. Training stops when the zero-one loss is zero over the training data.

Results
The results on the above datasets are in Table 3. Target-Only gets very high scores on the FrameNet dataset. The FrameNet dataset has 55 targets with multiple frame labels in the training data, and these targets cover 1981 instances in the test data. We get a 0.769 F-score on these instances and a 0.393 F-score on 64 unseen targets with 77 test instances. This can be seen as the extreme case in which the main feature for the correct frame is the target itself. Despite this simple fact, the standard LSTM performs very badly on FrameNet. The main reason is that sentences in the FrameNet dataset are too long, and the standard LSTM cannot learn well due to the large number of irrelevant words that appear in long sentences. To show this, we selected the size of a truncation window for the original FrameNet sentences and obtained the best size of 5 on the validation data, i.e., 2 words on each side of the target. With this window the LSTM reaches a 0.958 F-score on the FrameNet test data, which is still lower than TRNN on full sentences. As for the PropBank and PDEV datasets, we train one model per target, so the final F-score is the average over all targets; however, the number of training instances per target is limited. TRNN usually does not perform well when it has to learn frames that consist of many different concepts, especially when the frame has few training instances. Considering sentence 4 of Table 4 as an example, it is difficult for TRNN to learn what 'Activity' means in the correct frame because this concept is huge; TRNN may need a lot of data to learn something related to this concept, but this frame has only 6 instances in our training data. The second reason for TRNN's failures is a lack of knowledge due to unseen words in the test data. Sentence 1 of Table 4 shows TRNN making the right decision: it has seen the word 'cow' in the training data and knows this word belongs to the concept 'Animate or Plant' in the correct frame. But TRNN does not know the word 'Elegans' in sentence 3, so it falls back to the most frequent frame seen in the training data. In many cases, however, unseen words can be captured by well-trained word embeddings, as sentence 2 shows, where 'ducks', 'chickens' and 'geese' are all unseen words.
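To make the two frequency-based baselines concrete, here is a minimal sketch. The function names are ours, and ties are broken arbitrarily by Counter, which the paper does not specify.

```python
# MF and Target-Only baselines as described in "Models and Training".
from collections import Counter

def mf_baseline(train_labels):
    # Most frequent frame label over the whole training set.
    return Counter(train_labels).most_common(1)[0][0]

def target_only(train_pairs, test_target):
    # train_pairs: list of (target, frame_label) pairs.
    labels_for_target = Counter(l for t, l in train_pairs if t == test_target)
    if labels_for_target:  # seen target: unique label, or most frequent of its labels
        return labels_for_target.most_common(1)[0][0]
    # unseen target: fall back to the globally most frequent label
    return Counter(l for _, l in train_pairs).most_common(1)[0][0]
```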
Table 3: Results on several semantic frame resources. The format of a cell is "F-score/hidden units" for TRNN and LSTM and "F-score/iterations" for the MaxEnt toolkit.

CPA Experiment
Corpus Pattern Analysis (CPA) is a new technique for identifying the main patterns in which a word is used in text, and it is currently being used to build the PDEV resource mentioned above. It is also a shared task, SemEval-2015 task 15 (Baisa et al., 2015), which is divided into three subtasks: CPA parsing, CPA clustering and CPA lexicography. We only introduce the first two, related subtasks. CPA parsing aims at identifying the arguments of the target and tagging predefined semantic meanings on them; CPA clustering clusters the instances to obtain CPA frames based on the result of CPA parsing. However, the first-step results seem unpromising (Feng et al., 2015; Mills and Levow, 2015; Elia, 2016), which influences the process of obtaining CPA frames. Since our model can be applied to sentence-level input without feature extraction, we can directly evaluate our model on CPA clustering. Unfortunately, the dataset provided for CPA clustering is a per-target resource from our model's point of view, and the targets in the training and test sets are not the same. Since this task is not limited to using the provided resources, we use the FrameNet training set, a non per-target resource, described in Section 3.1 to solve this problem. The hyperparameters are the same as before. CPA clustering is evaluated by the B-cubed F-score, a metric for clustering problems, so we do not need to convert the FrameNet frame labels to CPA frame labels. The results are in Table 5. All the models are supervised except for the baseline and DULUTH.

Table 5: Results on the Microcheck dataset of CPA clustering.
System | B-cubed F-score
BOB90 (best in SemEval 2015) | 0.741
SemEval 2015 baseline | 0.588
DULUTH | 0.525
Feng et al. (2016) | 0.70
This paper | 0.763

Word Sense Disambiguation Experiment
Finally, we extend our experiments to the word sense disambiguation (WSD) task. As our benchmark we choose the English Lexical Sample WSD task of SemEval-2007 task 17 (Pradhan et al., 2007). We use cross-validation on the training set, and we observe that the model performs better when we update the word vectors, which differs from the preceding experimental setup. The number of hidden units is set to 55. The results are in Table 6. The rows from 4 to 6 come from Iacobacci et al. (2016). They integrate word embeddings into the IMS (It Makes Sense) system (Zhong and Ng, 2010), which uses a support vector machine as its classifier based on standard WSD features, and they obtain the best result. They use an exponential decay function, also designed to give more importance to close context, to compute the word representation, but their method requires manually choosing the window size around the target word and one parameter of the exponential decay function. When both systems use word vectors only, our model is comparable with the sixth row.

Table 6: Results on the Lexical Sample task of SemEval-2007 task 17.

Conclusion
In this paper, we describe an end-to-end neural model for target-specific semantic frame labeling. Without explicit rule construction tailored to specific resources, our model can be easily applied to a range of semantic frame resources and similar tasks. In the future, non-English semantic frame resources can be considered to extend the coverage of our model, and our model can integrate the best features explored in state-of-the-art work to see how much further improvement can be made.
Table 4: Case study for CPA frames. The target words are in bold.

3 http://stanfordnlp.github.io/CoreNLP/
4 https://github.com/lzhang10/maxent

Acknowledgments
We would like to thank the anonymous reviewers and Li Zhao for their helpful suggestions and comments. The work was supported by the National High Technology Development 863 Program of China (No. 2015AA015409).

References
Omri Abend, Roi Reichart, and Ari Rappoport. 2008. A supervised algorithm for verb disambiguation into VerbNet classes. In Proceedings of the 22nd International Conference on Computational Linguistics, Volume 1. Association for Computational Linguistics, pages 9-16.
Vít Baisa, Jane Bradbury, Silvie Cinkova, Ismail El Maarouf, Adam Kilgarriff, and Octavian Popescu. 2015. SemEval-2015 task 15: A CPA dictionary-entry-building task. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015). Association for Computational Linguistics, Denver, Colorado, pages 315-324.
Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project.
In Proceedings of the 17th International Conference on Computational Linguistics, Volume 1. Association for Computational Linguistics, pages 86-90.
Ismail El Maarouf and Vít Baisa. 2013. Automatic classification of patterns from the Pattern Dictionary of English Verbs. In Joint Symposium on Semantic Processing.
Ismail El Maarouf, Jane Bradbury, Vít Baisa, and Patrick Hanks. 2014. Disambiguating verbs by collocation: Corpus lexicography meets natural language processing. In LREC, pages 1001-1006.
Francesco Elia. 2016. Syntactic and semantic classification of verb arguments using dependency-based and rich semantic features. arXiv preprint arXiv:1604.05747.
Yukun Feng, Qiao Deng, and Dong Yu. 2015. BLCUNLP: Corpus pattern analysis for verbs based on dependency chain. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015). Association for Computational Linguistics, Denver, Colorado, pages 325-328.
Yukun Feng, Yipei Xu, and Dong Yu. 2016. An end-to-end approach to learning semantic frames with feedforward neural network. In Proceedings of the NAACL Student Research Workshop. Association for Computational Linguistics, San Diego, California, pages 1-7.
Patrick Hanks. 2012. How people use words to make meanings: Semantic types meet valencies. Input, Process and Product: Developments in Teaching and Language Corpora, pages 54-69.
Karl Moritz Hermann, Dipanjan Das, Jason Weston, and Kuzman Ganchev. 2014. Semantic frame identification with distributed word representations. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Baltimore, Maryland, pages 1448-1458.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8):1735-1780.
Matthew Honnibal and Tobias Hawker. 2005. Identifying FrameNet frames for verbs from a real-text corpus. In Proceedings of the Australasian Language Technology Workshop.
Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2016. Embeddings for word sense disambiguation: An evaluation study. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 897-907.
Edward Loper, Szu-Ting Yi, and Martha Palmer. 2007. Combining lexical resources: Mapping between PropBank and VerbNet. In Proceedings of the 7th International Workshop on Computational Linguistics, Tilburg, the Netherlands.
Chad Mills and Gina-Anne Levow. 2015. CMILLS: Adapting semantic role labeling features to dependency parsing. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015). Association for Computational Linguistics, Denver, Colorado, pages 433-437.
Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics 31(1):71-106.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In EMNLP, volume 14, pages 1532-1543.
Sameer S. Pradhan, Edward Loper, Dmitriy Dligach, and Martha Palmer. 2007. SemEval-2007 task 17: English lexical sample, SRL and all words. In Proceedings of the 4th International Workshop on Semantic Evaluations. Association for Computational Linguistics, pages 87-92.
Karin Kipper Schuler. 2006. VerbNet: A Broad-Coverage, Comprehensive Verb Lexicon. Ph.D. thesis, University of Pennsylvania.
http://verbs.colorado.edu/~kipper/Papers/dissertation.pdf.
Hongyan Zhao, Ru Li, Sheng Zhang, and Liwen Zhang. 2016. Chinese frame identification with deep neural network. 30(6):75.
Zhi Zhong and Hwee Tou Ng. 2010. It Makes Sense: A wide-coverage word sense disambiguation system for free text. In Proceedings of the ACL 2010 System Demonstrations. Association for Computational Linguistics, pages 78-83.
22,319,731
GW QA at SemEval-2017 Task 3: Question Answer Re-ranking on Arabic Fora
This paper describes our submission to SemEval-2017 Task 3 Subtask D, "Question Answer Ranking in Arabic Community Question Answering". In this work, we apply a supervised machine learning approach to automatically re-rank a set of QA pairs according to their relevance to a given question. We employ features based on latent semantic models, namely WTMF, as well as a set of lexical features based on string length and surface-level matching. The proposed system ranked first out of 3 submissions, with a MAP score of 61.16%.
[ 10887722, 11204982 ]
GW QA at SemEval-2017 Task 3: Question Answer Re-ranking on Arabic Fora
Nada Almarwani and Mona Diab (mtdiab@gwu.edu), Department of Computer Science, The George Washington University
Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), Vancouver, Canada, August 3-4, 2017. Association for Computational Linguistics.

Introduction
Nowadays, Community Question Answering (CQA) websites provide a virtual place for users to share and exchange knowledge about different topics. In most cases, users freely express their concerns and hope for reliable answers from specialists or other users. In addition, they can search for an answer among previously posted question-answer (QA) pairs that are similar to their question. Although posting a question and looking for a direct or related answer in CQA sounds appealing, the number of unanswered questions is relatively high. According to Baltadzhieva and Chrupała (2015), the proportions of unanswered questions on Stack Overflow1 and Yahoo! Answers2 are approximately 10.9% and 15%, respectively. Interestingly, as noted in (Asaduzzaman et al., 2013), a high percentage of unanswered questions is due to the duplicate question problem, i.e., the existence of a similar question that has been addressed before, which makes users disinclined to address the question again. Hence, it is the asker's role to review the site for an existing answer before posting a new question, a task that requires searching related questions among the hundreds posted on a daily basis. Thus, a good forum should provide automatic search functionality to retrieve the set of QA pairs that are most likely to be related to the new question being asked; as a result, the number of duplicates and unanswered questions will be limited. In order to find solutions to this and other problems in CQA, SemEval 2015, 2016, and 2017 Task 3 have been dedicated to "Answer Selection in Community Question Answering" (Nakov et al., 2017, 2016, 2015). There are 5 different subtasks, one of which has been proposed for Arabic. The specific task for Arabic in SemEval 2016-2017 Task 3, subtask D, is to re-rank the possibly related question-answer pairs for a given question. The Arabic task is especially difficult due to the language's challenging characteristics: Arabic is one of the most complex languages to process because of its morphological richness, its relatively free word order, and its diglossic nature (the standard form and the dialects mix in most genres of data).
1 A programming CQA forum
2 A community-driven question-and-answer site
The rest of this paper is organized as follows: Section 2 gives an overview of the task and data, Section 3 describes the proposed system, Section 4 presents a discussion of the experiments and results, Section 5 outlines the error analysis, and Section 6 concludes.

Task and Data Description
Arabic by nature has characteristics that make it one of the most challenging languages to process from an NLP perspective. It is morphologically rich, has flexible word order, and, in most typical genres and domains available online, shows a significant mix of the standard form of Arabic (MSA) and dialectal variants (DA). In fact, the use of dialectal Arabic in fora such as CQA presents a special challenge for processing Arabic. The SemEval 2017 subtask D targets the Arabic language. In particular, the task is to re-rank a given set of QA pairs with respect to their relatedness to a given query. At the top of the ranked list is either a directly related pair, "Direct", or a "Relevant" pair, which is not directly related but includes relevant information, with "Irrelevant" pairs at the end of the list. These are the three labels used for the task. The organizers cast the task both as a ranking problem over the three possible ranks and as a binary classification problem in which the labels Direct and Relevant are grouped as True, while Irrelevant is deemed False. The Arabic dataset was extracted from medical fora, where users ask questions about medical concerns and the answers generally come from doctors. The dataset contains: a training set of 1,031 questions and 30,411 potentially related QA pairs; a development set of 250 questions and 7,385 potentially related QA pairs; and a test set of 1,400 questions, each associated with 8 to 9 potentially related QA pairs.3

Approach
In this work, we are interested in studying the effect of using semantic textual similarity (STS) features based on latent semantic representations together with surface-level similarity features derived from the given triple: the user's new question Q_u and the retrieved question-answer (QA) pair, whose parts we refer to as R_Q and R_A, respectively. We cast the problem as a ranking problem that orders the QA pairs according to their relatedness to a given query Q_u, using the supervised framework SVMrank (Manning et al., 2008). To build the feature vector for a triple, we extract a set of features shared between (Q_u, R_Q) and a set shared between (Q_u, R_A), and then use the concatenation of both as the feature vector for the triple. In the following subsections, we describe in detail the preprocessing steps we applied to the raw data and the set of features we used in the submitted model.
3 For more details refer to the task description paper (Nakov et al., 2017).
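As an illustration of how such a triple can be turned into one training line, here is a minimal sketch that writes the concatenated feature vector in the SVM-light/SVMrank input format. The helper name, the use of precomputed feature lists, and the mapping of Direct/Relevant/Irrelevant to ranks 3/2/1 are our own assumptions, not details from the paper.

```python
# Write one SVMrank training line: "<rank> qid:<qid> <fid>:<val> ...".
# Within a query id, a larger rank value means the pair should be ranked higher.
def write_svmrank_line(fout, rank, qid, feats_q, feats_a):
    # feats_q: features shared between (Q_u, R_Q); feats_a: between (Q_u, R_A).
    feats = list(feats_q) + list(feats_a)      # concatenated triple feature vector
    cols = " ".join(f"{i + 1}:{v:.6f}" for i, v in enumerate(feats))
    fout.write(f"{rank} qid:{qid} {cols}\n")

# Example: a Direct pair (rank 3) and an Irrelevant pair (rank 1) for query 12.
with open("train.dat", "w") as f:
    write_svmrank_line(f, 3, 12, [0.82, 0.11], [0.64, 0.30])
    write_svmrank_line(f, 1, 12, [0.21, 0.05], [0.12, 0.40])
```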
Preprocessing and Features
Text Preprocessing
Text preprocessing is especially important for this CQA dataset; in this section we briefly outline the preprocessing we applied before feature extraction. First, we used SPLIT (Al-Badrashiny et al., 2016) to check whether a token is a number, date, URL, or punctuation mark. All URLs and punctuation are removed, and numbers and dates are normalized to Num and Date, respectively. Alef and Yaa characters are each normalized to a single form, which is typical in large-scale Arabic NLP applications in order to overcome writing variations. For tokenization, lemmatization and stemming we used MADAMIRA (Pasha et al., 2014) (with the D3 tokenization scheme, which segments determiners as well as proclitics and enclitics). Finally, we removed stop words based on a list.4

Features
1. Latent semantic features: a latent semantic representation transforms the high-dimensional representation of text into a low-dimensional latent space and thus overcomes the problems of the standard bag-of-words representation by assigning a semantic profile to the text, which captures implicit syntactic and semantic information. Various models exist, such as Latent Dirichlet Allocation (LDA) (Blei et al., 2003), which relies on the observed words to find the distribution of a text over K topics. Such models are generally applied to relatively lengthy pieces of text or documents, whereas texts such as the question and answer pairs found in CQA are relatively short, two to three sentences on average. Therefore, we used the Weighted Textual Matrix Factorization (WTMF) model (Guo and Diab, 2012), which is better suited to short texts. We used the available implementation of WTMF5 with a modification in the preprocessing pipeline to accommodate Arabic, i.e., we used the same preprocessing steps described above, with the stems of words as the level of representation. To train the model we used sample data from Arabic Gigaword (Parker et al., 2011) together with the UNANNOTATED Arabic data provided on the task website.6 We used the default parameters except for the number of dimensions, which we set to 500. Table 1 shows the training data statistics. For feature generation, we first generated vector representations for Q_u, R_Q, and R_A using the above model. We then used the Euclidean, Manhattan, and cosine distances to calculate overall semantic relatedness scores between (Q_u, R_Q) and between (Q_u, R_A).
2. Lexical features: similar pairs are more likely to share more words and hence are more likely to be related. Following this assumption, we record length and overlap information for a given pair (A, B) using the following measures: |B - A|, |A ∩ B|, (|B| - |A|)/|A|, (|A| - |B|)/|B|, and |A ∩ B|/|B|, where |A| is the number of unique instances in A, |B - A| is the number of unique instances that are in B but not in A, and |A ∩ B| is the number of instances that are in both A and B. To account for word-form variations, we applied these measures at the token, lemma and stem levels.

4 https://pypi.python.org/pypi/many-stop-words
5 http://www.cs.columbia.edu/~weiwei/code.html
6 http://alt.qcri.org/semeval2016/task3/data/uploads/Arabic.DataDump.txt.gz
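The following is a minimal sketch of these lexical measures and the three vector distances, assuming non-empty token sets and non-zero WTMF vectors; the function names and the pairing of A with the first argument are ours.

```python
# Lexical overlap/length measures and WTMF-vector distances described above.
import math

def lexical_features(a_tokens, b_tokens):
    A, B = set(a_tokens), set(b_tokens)
    inter = len(A & B)
    return [len(B - A), inter,
            (len(B) - len(A)) / len(A),
            (len(A) - len(B)) / len(B),
            inter / len(B)]

def distance_features(u, v):
    # u, v: equal-length WTMF sentence vectors.
    euclidean = math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))
    manhattan = sum(abs(x - y) for x, y in zip(u, v))
    dot = sum(x * y for x, y in zip(u, v))
    norms = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v))
    cosine = 1.0 - dot / norms
    return [euclidean, manhattan, cosine]
```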
Experiments and Results
Our ranking system is a supervised model using SVMrank, a variation of SVM (Hearst et al., 1998) for ranking. We tested different types of kernels, and the best result was obtained with a linear kernel, which we used to train our model. Furthermore, we tuned the cost factor parameter C of the linear kernel on the development set and obtained the best result with C=3, which we kept when testing our model. The outputs of SVMrank are used only for ordering and do not carry any meaning of relatedness.7 For the binary classification task, "Direct" and "Relevant" are mapped to "True", while "Irrelevant" is mapped to "False". We employed a logistic regression (LR) classifier, the LIBLINEAR classifier with its default parameters, implemented in the WEKA package (Witten and Frank, 2005). We report results on the development tuning set, DEV, and on the TEST set, and we report the results of different experimental setups to show the performance of different feature sets: using lexical features only (LEX), using WTMF features only (WTMF), and using the combined features (WTMF+LEX). The latter is our primary submission to SemEval-2017 subtask D. It is worth noting that we only officially participated in the ranking task; in addition, we report the binary classification results, which we did not officially submit. Furthermore, we compare our results to the subtask D baselines and report results using the official metrics.
7 https://www.cs.cornell.edu/people/tj/svm_light/svm_rank.html

As can be seen in Table 2, the combined WTMF+LEX setting outperformed the WTMF and LEX settings individually. This indicates that the LEX features provide complementary information about relatedness at the explicit matching level. Specifically, the WTMF+LEX based system improved MAP by about 1% over both the WTMF based and the LEX based systems. Furthermore, we obtain a significant improvement over the baselines on the DEV set and relatively modest improvements on the TEST set, with MAP scores of 45.73 and 61.16, respectively. Table 3, on the other hand, presents the results of the binary classification on the TEST set using the WTMF+LEX setting, along with the baseline and the results submitted by the two other participants. As can be seen in the table, we achieved the best result on all metrics except precision.

Table 3: Binary classification results using our LR classifier with the combined WTMF+LEX features on the TEST set.

Error Analysis
There were different challenges faced during the ranking and classification of a given question. We observed that false positive (FP) and false negative (FN) examples fall into one of the following categories:
1. Mixed Arabic variants and mixed languages: this is one of the challenges posed by the task; Table 4 shows examples from the SemEval-2017 test data. The mix is either dialect with standard Arabic, Arabic with a foreign language (English), or both. This affected the FP and FN cases produced by our system as follows: (a) WTMF model: a mismatch between the data genre used to train the WTMF model and our test data resulted in a high out-of-vocabulary (OOV) rate in the compared pairs of text snippets; (b) lexical features: mixes of dialect and standard Arabic, of Arabic and a foreign language, or of both resulted in a low overlap between the pair.
2. Noise: even though we removed a list of stop words, other words are effectively noise for this task and affect the overlap similarities in both the FP and FN categories. For example, words describing personal information such as weight, age, or gender are not directly related to the medical concern being asked about and are considered noise. Handling such data therefore requires a hand-crafted list for cleaning.

Table 4: Example 1 mixes languages, and example 2 mixes dialectal Arabic (words between parentheses) and Modern Standard Arabic; both types of mix resulted in wrong predictions of the relatedness relation.
1) My husband was checked and the result was total sperm 300 millions sperm [-] S-second h 60%; does this check-up sound correct?
2) For a while I have been suffering from itching in my hands and legs resulting in redness [-] Knowing that when I put my hand on the itch place I find it burning and swelling.

Conclusion
We have presented the submission of the GW QA team to SemEval-2017 Task 3 subtask D on Arabic CQA ranking. We used a supervised machine learning ranker based on a combination of latent-semantics-based similarity features and lexical features. We submitted a primary result using SVMrank, and we used logistic regression for the binary classification setting (not an official submission). Our primary submission ranked first on the official MAP score for the Arabic subtask D. Furthermore, we analyzed the performance of our model and outlined the limitations that caused false positive and false negative predictions.
References
Mohamed Al-Badrashiny, Arfath Pasha, Mona Diab, Nizar Habash, Owen Rambow, Wael Salloum, and Ramy Eskander. 2016. SPLIT: Smart preprocessing (quasi) language independent tool. In 10th International Conference on Language Resources and Evaluation (LREC'16). European Language Resources Association (ELRA), Portorož, Slovenia.
Preslav Nakov, Lluís Màrquez, Walid Magdy, Alessandro Moschitti, James Glass, and Bilal Randeree. 2015. SemEval-2015 task 3: Answer selection in community question answering. SemEval-2015, 269.
Muhammad Asaduzzaman, Ahmed Shah Mashiyat, Chanchal K. Roy, and Kevin A. Schneider. 2013. Answering questions about unanswered questions of Stack Overflow. In Mining Software Repositories (MSR), 2013 10th IEEE Working Conference on. IEEE, pages 97-100.
Antoaneta Baltadzhieva and Grzegorz Chrupała. 2015. Question quality in community question answering forums: A survey. ACM SIGKDD Explorations Newsletter 17(1):8-13.
David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research 3(Jan):993-1022.
Weiwei Guo and Mona Diab. 2012. Modeling sentences in the latent space. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers, Volume 1. Association for Computational Linguistics, pages 864-872.
Marti A. Hearst, Susan T. Dumais, Edgar Osuna, John Platt, and Bernhard Scholkopf. 1998. Support vector machines. IEEE Intelligent Systems and their Applications 13(4):18-28.
Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. 2008. Introduction to Information Retrieval, volume 1. Cambridge University Press, Cambridge.
Preslav Nakov, Doris Hoogeveen, Lluís Màrquez, Alessandro Moschitti, Hamdy Mubarak, Timothy Baldwin, and Karin Verspoor. 2017. SemEval-2017 task 3: Community question answering. In Proceedings of the 11th International Workshop on Semantic Evaluation, SemEval '17. Association for Computational Linguistics, Vancouver, Canada.
Preslav Nakov, Lluís Màrquez, Alessandro Moschitti, Walid Magdy, Hamdy Mubarak, Abed Alhakim Freihat, Jim Glass, and Bilal Randeree. 2016. SemEval-2016 task 3: Community question answering. In Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval '16. Association for Computational Linguistics, San Diego, California.
Robert Parker, David Graff, Ke Chen, Junbo Kong, and Kazuaki Maeda. 2011. Arabic Gigaword fifth edition LDC2011T11. Philadelphia: Linguistic Data Consortium.
Arfath Pasha, Mohamed Al-Badrashiny, Mona T. Diab, Ahmed El Kholy, Ramy Eskander, Nizar Habash, Manoj Pooleery, Owen Rambow, and Ryan Roth. 2014. MADAMIRA: A fast, comprehensive tool for morphological analysis and disambiguation of Arabic. In LREC, volume 14, pages 1094-1101.
Ian H. Witten and Eibe Frank. 2005. Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kaufmann.
7,212,585
Using Context Information for Dialog Act Classification in DNN Framework
Previous work on dialog act (DA) classification has investigated different methods, such as hidden Markov models, maximum entropy, conditional random fields, graphical models, and support vector machines. A few recent studies have explored using deep neural networks for DA classification; however, it is not yet clear what the best method is for using dialog context or DA sequence information, and how much gain it brings. This paper proposes several ways of using context information for DA classification, all in the deep learning framework. The baseline system classifies each utterance using convolutional neural networks (CNN). Our proposed methods include using hierarchical models (recurrent neural networks (RNN) or CNN) for DA sequence tagging, where the bottom layer takes the sentence CNN representation as input; concatenating predictions from the previous utterances with the CNN vector for classification; and performing sequence decoding based on the predictions from the sentence CNN model. We conduct thorough experiments and comparisons on the Switchboard corpus, demonstrate that incorporating context information significantly improves DA classification, and show that we achieve new state-of-the-art performance for this task.
[ 9672033, 477778 ]
Using Context Information for Dialog Act Classification in DNN Framework
Yang Liu, Kun Han, Zhao Tan, and Yun Lei, Applied Machine Learning, Facebook
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, September 7-11, 2017. Association for Computational Linguistics.

Introduction
A dialog act (DA) represents the function of a speaker's utterance in either human-to-human or human-to-computer conversations. Correct identification of DAs is important for understanding human conversations, as well as for developing intelligent human-to-computer dialog systems (for either written or spoken dialogs). For example, recognizing DAs can help identify questions and answers in meetings, customer service, online fora, etc. Many machine learning techniques have been investigated and have shown reasonable performance for DA classification, for example, (Ang et al., 2005; Ji and Bilmes, 2005; Kalchbrenner and Blunsom, 2013; Ribeiro et al., 2015), just to name a few. Intuitively, we would expect that leveraging dialog context can help classify the current utterance. For example, if the previous sentence is a question, then there is a high probability that the current sentence is a response to that question. Such context information has been explored in some previous methods, for example, hidden Markov models (HMM), conditional random fields (CRF), and dynamic Bayesian networks (DBN). Given the recent success of the deep learning framework in various language processing tasks, in this work we also employ neural networks for DA classification. In fact, such models have been used in some recent studies for DA classification, e.g., (Rojas-Barahona et al., 2016; Kalchbrenner and Blunsom, 2013; Zhou et al., 2015); however, previous work has not thoroughly evaluated the use of context information for this task, and there is still a lack of good understanding about how we can use context information and how useful it is. This is the question we aim to answer in this work.
The contributions of this paper are:

1) We propose several ways to incorporate context information for DA classification over the baseline method of using convolutional neural networks (CNN) for sentence classification, including: (a) a hierarchical RNN/LSTM or CNN to model the utterance sequence in the conversation, where the input to the higher-level LSTM or CNN unit is the sentence vector from the sentence-level CNN model; (b) a two-step approach where the predicted DA results for the previous utterances, either labels or probability distributions, are concatenated with the sentence CNN vector for the current utterance as the new input for classification; (c) sequence-level decoding based on the predicted DA probabilities and the transition probabilities between DA labels. Some of these methods have not been exploited previously for this task.

2) We perform a detailed and thorough analysis of the different modeling approaches and some impacting factors in the models (such as the context length, and the representations and quality of the predictions). This is the first study with this kind of comparison.

3) We achieve new state-of-the-art results.

Related work

Previous work has investigated different machine learning techniques for DA classification, such as maximum entropy, DBN, HMM, and SVM (Ang et al., 2005; Ji and Bilmes, 2005; Venkataraman et al., 2003; Webb et al., 2005; Fernandez and Picard, 2002; Mast et al., 1996; Liu, 2006; Kral and Cerisara, 2014). Different features have been explored in these models, including lexical and syntactic features, prosodic cues, and speaker interactions. In particular, context information has been used in some previous methods. For example, some early studies used HMMs (Venkataraman et al., 2003; Stolcke et al., 2000), where the "hidden" states are the DA tags, which generate the sequence of words as observations. The observation probabilities are obtained from DA-specific word-based language models, and a DA-tag-based n-gram language model provides the transition probabilities between the DA tags. (Ji and Bilmes, 2005; Dielmann and Renals, 2008) used DBNs for sequence decoding and examined both the generative and the conditional modeling approaches. CRF, as a powerful sequence labeling method, has also been widely used to incorporate context information for DA classification (Kim et al., 2010; Quarteroni et al., 2011; Chen and Eugenio, 2013; Dielmann and Renals, 2008).

It is worth noting that (Ribeiro et al., 2015) used different configurations to capture information from previous context in SVM classifiers, such as n-grams or DA predictions. This is similar to our work in that we also evaluate using the previous utterances and the predicted DAs for them. However, our modeling approaches are all based on DNNs, as described in more detail in Section 3, and the interaction between utterances and DA labels is modeled in the hierarchical models in a more principled way.

Recently deep learning has been widely adopted in many language processing tasks, including DA classification. Context or sequence information has also been explored in this framework. For example, (Rojas-Barahona et al., 2016) proposed to use DNNs for DA classification and slot filling, and evaluated on two different data sets. They showed that their proposed CNN+LSTM model has negligible gain on one data set, and significant improvement on the other for the joint DA classification and slot filling task.
(Kalchbrenner and Blunsom, 2013) proposed methods for discourse decomposition, and investigated using a recurrent CNN for DA classification, reporting some positive results, e.g., a 2.9% improvement over the LM-HMM baseline. In this paper we propose different methods in the deep learning framework to incorporate context information. Our hierarchical LSTM and CNN method has some similarities to those used in (Rojas-Barahona et al., 2016; Kalchbrenner and Blunsom, 2013), but unlike those studies, which focus on just one method, we propose a few approaches and perform comparisons among them for a deeper understanding of the different methods and their contributing factors.

The discussion above is limited to DA classification using speech/text data. Other knowledge sources have also been used in a multimodal setting (e.g., haptic actions in (Chen and Eugenio, 2013)). In this study we rely on textual information only. Also note that in some scenarios, for example, speech conversations where transcripts come from speech recognition systems, DA segmentation is also needed. This problem has been addressed in some previous work, for example, (Lendvai, 2007; Quarteroni et al., 2011; Ang et al., 2005), which often uses a classification or sequence labeling setup for the segmentation task, or performs joint DA segmentation and classification. We use pre-segmented utterances and focus on the DA classification task alone in this work.

DA Classification Methods

Task

Our task is to classify each utterance in a conversation into a predefined DA tag set. We use Switchboard data in our experiments (see Section 4.1 for additional information on the data). There are different granularities of the tag sets. In this work we use 42 tags (Jurafsky et al., 1997), which have been widely used in previous studies of DA classification on this data set. Table 1 shows an example of some utterances in a Switchboard conversation. We can see that the 'answer' DA follows the 'question' one, which is quite intuitive. Our goal is thus to model such sequential information for DA classification. Again, in this work we only use the transcriptions of the utterances along with the speaker information (i.e., whether the current utterance is from the same or a different speaker as the previous one), without any speech-related features.

CNN for utterance classification

All of our methods are built on the basic CNN sentence representation, which has been widely used recently in sentence as well as document classification (Collobert et al., 2011; Kim, 2014); therefore we first briefly describe this baseline. Figure 1 shows the context-independent CNN-based classification method. Let w_[1...n] represent the word embedding sequence for a sentence with n words, where w_i ∈ R^d is the d-dimensional embedding vector for the i-th word. A temporal convolution operation is applied to the sentence:

  c_[1...n] = w̃_[1...n] ∗ f

where w̃_[1...n] denotes the sequence w_[1...n] with zero padding, and f is a filter map for the convolution operation. A max pooling layer is then applied over the resulting sequence c_[1...n] to obtain one value for the sentence. If we use l window sizes and k filters for each window, then l × k convolutional sequences are generated for each sentence, and after max pooling, we obtain a fixed-length vector s with dimension l × k. This is the feature vector representation for the sentence, which is then used as the input to a multi-layer perceptron (MLP) or feedforward neural network for sentence classification. We use only a one-layer MLP in this work.
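To make the baseline concrete, the following is a minimal PyTorch sketch of the sentence CNN just described. It is our illustration rather than the authors' code: the window sizes, filter counts, dropout, and hidden-layer size follow the settings reported later in the experiments, while the class and argument names are invented.

```python
import torch
import torch.nn as nn

class SentenceCNN(nn.Module):
    """Baseline utterance encoder: parallel temporal convolutions over
    word embeddings, max-pooling over time, then a one-layer MLP."""

    def __init__(self, vocab_size, emb_dim=200, windows=(1, 2, 3),
                 n_filters=100, hidden=100, n_tags=42, n_extra=0):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # one convolution per window size; the padding plays the role of
        # the zero padding of the sequence in the formula above
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, w, padding=w - 1) for w in windows])
        self.drop = nn.Dropout(0.5)
        feat_dim = len(windows) * n_filters + n_extra
        self.mlp = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_tags))

    def encode(self, word_ids, extra=None):
        x = self.emb(word_ids).transpose(1, 2)       # (batch, emb, time)
        # max-pool each filter's output over time -> fixed-length vector
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        s = torch.cat(pooled, dim=1)                 # l * k dimensions
        if extra is not None:                        # e.g. speaker change
            s = torch.cat([s, extra], dim=1)
        return s

    def forward(self, word_ids, extra=None):
        return self.mlp(self.drop(self.encode(word_ids, extra)))
```

The encode method returns the fixed-length sentence vector s, which is what the hierarchical models in the later sections would consume in place of the final MLP.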
This baseline CNN model learns textual information in each sentence for DA classification. We can incorporate additional features into this model, for example, whether the current sentence is from the same speaker as the previous one. Figure 1 shows the use of such additional features: they are concatenated with the CNN-based textual vector, and then fed to the MLP for DA classification. In the rest of the paper, when there is no confusion, we also use 'CNN' for the cases when additional features are concatenated with the standard CNN sentence-level representation. We use this CNN model as a baseline, and in the following explore several methods for using context information for DA classification.

Use history DA information

As discussed earlier, we expect there to be valuable sequential information among the DA tags; therefore, in the first approach, we combine the history DA information with the current utterance to classify its DA tag. This is represented as additional features concatenated with the CNN sentence representation, as shown in Figure 1. We evaluate different configurations in this framework.

• Use DA labels. We compare using reference and system-predicted DA labels in training and testing. Note that using reference labels in testing is not a real testing setup. This is just meant to provide an upper bound and to understand the performance degradation due to prediction errors.

• Use probabilities for system predictions. Instead of taking the hard decisions from the system's predictions, we evaluate using the posterior probabilities from the system in order to capture more information.

• History length. We compare using DA information from different numbers of previous utterances.

Note that for most of the setups above, when the system's predicted DA information is used, we need to go through the following procedure:

• train a context-independent sentence CNN model
• use it to generate predictions for the training and test data
• add the corresponding history DA information to the training set to retrain a model
• add the history DA information to the test set and apply the new model

The only scenario where these steps are not required is when reference DA tags are used in both training and testing. There is one additional caveat worth pointing out: when generating the DA predictions for the training data, ideally we would perform cross validation on the training set such that every training sentence is labeled by a model trained on data that does not include that sentence, so that matched information is used in training and testing; however, we noticed that our model does not overfit the training data very much, and the training accuracy is not significantly different from the test accuracy, therefore we simply apply the trained CNN model to the training set itself to obtain the DA predictions for all the training sentences, and train the new model.

CNN + DA transition decoding

In this approach, we perform conversation-level decoding that combines the probabilities from the context-independent CNN model and the DA tag transition probabilities. The DA classification problem can be represented as:

  Ŷ = argmax_Y P(Y|X) = argmax_Y P(Y) P(X|Y) = argmax_Y P(Y) ∏_i P(x_i|y_i)

where Y is the DA tag sequence and X contains the entire conversation, i.e., the sequence of sentences.
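Before the two probability terms are made precise in the next paragraph, a minimal log-domain Viterbi sketch for this search may be useful; the code and its argument names are our illustration, assuming the per-utterance scores and DA transition probabilities have been precomputed.

```python
import numpy as np

def viterbi_decode(log_emis, log_trans, log_init):
    """log_emis: (T, K) per-utterance DA scores (rescaled CNN posteriors
    in log space); log_trans: (K, K) DA tag transition log-probabilities;
    log_init: (K,) initial-tag log-probabilities. Returns the
    highest-scoring DA tag sequence for the conversation."""
    T, K = log_emis.shape
    delta = log_init + log_emis[0]           # best score ending in each tag
    back = np.zeros((T, K), dtype=int)       # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_trans  # rows index the previous tag
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emis[t]
    tags = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):            # follow backpointers
        tags.append(int(back[t][tags[-1]]))
    return tags[::-1]
```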
P(Y) can be computed for the DA tag sequence (similar to a word-based n-gram language model; here the "words" are DA tags), and the probability of a tag given the utterance, P(x_i|y_i), can be obtained from the rescaled probability from the CNN model (that is, P(y_i|x_i)). For decoding, we can use either Viterbi decoding to find the most likely DA sequence (as shown above) or forward-backward decoding to determine the best tag for each utterance in the sequence. This model is similar to the HMM model used previously for this task (Stolcke et al., 2000); the difference is that the probability of a DA given the sentence is estimated by the CNN model, a discriminative model, in contrast to the word-based language model, which is a generative model.

Hierarchical model: CNN+CNN

Once we have the sentence vector representation built from the baseline CNN model, we use another CNN to incorporate context information for an utterance's classification. Figure 2 shows this method. The sequence of sentences is represented by a sequence of fixed-length vectors s_[1...m], where m is the number of sentences in the conversation, and s_i is the vector representation for sentence i from the baseline CNN model. Similar to the CNN model for word sequences, we apply a temporal convolutional layer with different filters to s_[1...m]. Unlike the sentence CNN model for word sequences, here we do not perform pooling over the entire dialog sequence, as the classification task is for each sentence, not the whole conversation (sentence sequence). Instead, for each sentence, the output of every convolutional filter is concatenated to form the sentence's representation, and then an MLP is used for its classification. This approach can be thought of as a hierarchical neural network, where the high-level CNN is used to capture context information.

Hierarchical model: CNN+RNN

The hierarchical CNN method uses the neighboring sentences to learn the dependencies among consecutive utterances. A different method to model the sequential information is via an RNN, which is intrinsically capable of learning temporal dynamics and is thus suitable for the problem. In this hierarchical model, the representation for each sentence is still learned by the CNN as in the baseline, while the dialog-level sequence information among sentences is modeled by the RNN. Here, we use a bidirectional LSTM (BLSTM) to learn the context before and after the current sentence. The left-to-right LSTM output and the one from the reverse direction are concatenated and input to a hidden layer for classification. BLSTMs have been widely used recently for various sequence labeling problems (such as part-of-speech tagging and named entity recognition) and have achieved state-of-the-art performance. Figure 3 shows the structure of the model.

Note that the difference between these last two models and the one using history DA information is that DA labels are not explicitly represented in these hierarchical models.

Experiments

Data

We use Switchboard data in our experiments. This corpus has been widely used in the community for DA classification. In this data, two people talked over the phone about a given topic for several minutes. 1155 conversations have been manually labeled with DAs. 40 conversations were held out for testing and development. However, there is no standard as to which ones are the test conversations (this is unknown from the earliest paper using this data (Stolcke et al., 2000)).
Therefore we randomly split the set into two, with 20 conversations in each and a similar number of utterances. We use one set as the development set and evaluate on the other set. As mentioned earlier, we do not use speech features; we only use textual information and the speaker change feature in this study. For all the experiments, we use human transcripts. This setup is expected to be applicable to written conversations/dialogs. Table 2 shows the basic statistics of the data.

Table 3 shows the baseline classification accuracy results when no context information is used, for three setups: the baseline sentence CNN model with the pretrained embeddings, when speaker change information is added, and when no pretrained embeddings are used. We can see the slight performance change from the added speaker change feature. When no pretrained embeddings are used, i.e., no additional information is used from other resources, there is a performance degradation of 2-3%. Note that these results are better than or at least comparable to state-of-the-art performance. In fact, we also implemented a CRF tagging model for this data set, where we used bag-of-words features for each utterance; the information is therefore similar to that used in the DNN framework (but the CRF does model DA tag sequential information). This CRF model has an accuracy of about 74% for the two sets combined. The CNN model without pretrained embeddings has worse results than the CRF system trained just on the Switchboard data, confirming that when using word embeddings as word representations, pretrained embeddings are beneficial when the training size is small. However, the CNN model can effectively leverage word embedding information (obtained from unlabeled data), whereas it is not straightforward to use such information in CRF classifiers. This shows an advantage of the DNN-based method.

Table 3: DA classification accuracy (%) when using the baseline CNN without context information.

Hierarchical models: CNN+CNN/RNN

For the hierarchical models described in Sections 3.5 and 3.6, i.e., adding a CNN or BLSTM on top of the baseline sentence CNN, we kept the same model parameters in the sentence CNN part. The dimension is 64 for both the higher-level CNN and the LSTM. For these sequence labeling tasks, we use stochastic gradient descent (SGD) with a learning rate of 0.01. We observed that this yielded better performance than Adagrad learning. Table 4 shows the results for different setups of these two models, to evaluate the impact of context information. For the LSTM, we compare using an LSTM and a BLSTM; for the CNN, we show results when using different context window sizes in the top layer. From the table we can see that using an LSTM or CNN to model context information for DA classification is effective, with both models significantly outperforming the baseline. Regarding the effect of context, in general there is slightly more gain when more context is used, as in the BLSTM, or with larger windows in the CNN. For the CNN, when we increase the window beyond 4, there is no further improvement. The greatest difference comes from using context vs. not using it at all.

CNN + DA prediction

As described in Section 3.3, another method to incorporate context information is to use the DAs from previous utterances.
We perform a detailed analysis to examine three factors under this framework:

• context history: we use a window of up to 3, i.e., information from the previous one, two, or three utterances;
• representation of the DA information: whether it is a DA label or probabilities;
• reference vs. system-predicted DA labels during training and testing. Using the reference DA labels in testing is expected to give an oracle or upper-bound performance for this set of experiments.

Table 5 shows the results for these setups. The predictions for the utterances are generated using the baseline CNN model, with the pretrained embeddings and speaker information (i.e., the best utterance classification model). The model parameters in the second-round CNN training (when additional history DA information is included) are the same as for the baseline CNN. From Table 5 we can see that in terms of the representation of the history DA information, using hard labels and soft predictions achieves similar performance. For model training, it is better to have matched information in training and testing. Using reference DA labels during training and system predictions in testing (second row in the results) is less effective than using system predictions in both training and testing. The quality of the prediction also affects the usefulness of the DA prediction information, as demonstrated by the better performance when the reference labels are used compared to using system-predicted DAs, which is expected. The immediate previous utterance has the largest impact on the prediction of the current utterance (compared to not using context at all), and adding longer context helps less. In addition, using the reference previous DA labels (ref train and ref test condition) benefits more than using system-predicted DA labels when a longer history is used, suggesting that more predicted DAs, when used together, become noisier and bring less gain.

Overall results

All the methods using context (see Table 6) yield significant improvement over the baseline (statistically significant based on a t-test). Comparing representing context information via the DA labels of the previous utterances vs. using the hierarchical CNN or RNN model, we see there is not much difference. This observation is somewhat different from the findings in (Ribeiro et al., 2015; Kim et al., 2010), where using previous DA predictions yields more gain than adding n-gram features from the previous utterances. We believe one reason for this difference is the use of the DNN framework to model the utterance sequences. Given the current data size and the oracle performance in Table 5, we expect that when more data is available, using larger neural networks will further improve the performance. Furthermore, we want to mention that overall these results represent new state-of-the-art performance for this task ((Kalchbrenner and Blunsom, 2013) reported 73.9% accuracy using a recurrent CNN, though the results are not directly comparable since they only evaluated on 19 test conversations).

Final remarks

As expected, our experimental results demonstrate that we can effectively incorporate context information to improve DA classification. We conducted some analyses to see what errors are corrected when we use the context models compared to the baseline results. Due to space limits, we show one positive example below where adding context changes the prediction from 'backchannel' to 'answer'.
• Example:
  - Is this a mail order parts house that specializes in parts for parts for uh old imports?
  - right

It is clear that using context can help disambiguate and better predict the DAs for the current utterance. In fact, we noticed that close to 5% of errors are correctly changed from 'back channel' to 'reply' when context information is used. One of the most frequent errors the system makes is mislabeling between 'statement' and 'statement-opinion'. To correctly identify statement-opinion DAs, we could perform some opinion or subjectivity recognition, but that is out of the scope of this study. Another frequent error is the confusion between backchannel and agreement. For example, 'right' and 'yeah' are common words for both categories, and even with context information, they are still hard to disambiguate for the current models.

Finally, it is worth pointing out that our work uses an offline setting where we perform DA tagging for the entire conversation. In real-world applications, an online setting may be needed; however, information from previous utterances can still be used there. In fact, most of the performance gain from incorporating context information comes from the previous utterances (e.g., the difference between the hierarchical LSTM and BLSTM is very small). Our findings about the effectiveness of context information are applicable to the online setting.

Conclusions

We proposed several approaches to incorporate context information in the deep learning framework for DA classification in conversations, including expanding the sentence CNN vector with the predicted DA information from previous utterances to train another model, hierarchical models based on CNNs or LSTMs to model the DA sequence on top of the sentence CNN representation, and dialog-level decoding once the sentence CNN generates its hypotheses. Compared to the baseline using a CNN for utterance classification, our proposed methods effectively leverage context information and achieve significantly better performance. We observe that there is very little difference among the different approaches. Our results represent the state of the art for DA classification on the Switchboard data. We conducted thorough evaluations to understand the impact of different factors, and our results shed light on the use of context information for similar tasks. In our future work, we plan to apply these approaches to other tasks, such as intent recognition and slot filling in language understanding.

Figure 1: Baseline context-independent CNN-based DA classification method.

Figure 2: Hierarchical CNN: sequence CNN on top of sentence CNN for DA classification.

Figure 3: RNN/Bi-LSTM on top of sentence CNN for DA classification.

Table 2: Data information.

4.2 Results

4.2.1 Baseline CNN

For all the DNN models, we did not tune model parameters very much. Most of the parameters were chosen based on the literature or our experience with other DNN-based text classification tasks. We used pretrained embeddings (dimension 200) to initialize the word vectors used in the CNN, and then updated them during training.¹ To avoid overfitting, we use a dropout rate of 0.5. The baseline CNN uses three windows, 1, 2, and 3, with 100 filter maps for each. The output hidden layer dimension is 100. For learning, we use Adagrad with a learning rate of 0.01.

Table 4: DA classification results (%) when using the hierarchical structure: sentence CNN followed by dialog sequence level CNN or RNN/BLSTM.
Table 5: DA classification results (%) when incorporating history DA information in the current utterance in the CNN method. Three factors are examined: context history length, DA representations, and where the DA information is from.

Table 6 summarizes the results for the different systems, including the baseline CNN model without context information (this baseline uses pretrained embeddings and the speaker change feature), and four different ways of using context: (a) predicted DA information (posterior probabilities) is combined with the current sentence's CNN-based representation; (b) a BLSTM is applied on top of the sentence CNN representation; (c) a hierarchical CNN combines the current sentence's CNN representation with its neighbors; (d) sequence decoding combines CNN posteriors with DA transition scores. From the results, we can see the positive effect of using context information.

Table 6: DA classification results (%) using different systems; overall results when context information is used.

System                      set 1   set 2
CNN baseline, no context    74.73   77.12
CNN + DA predictions        76.73   79.9
CNN + RNN/BLSTM             76.91   79.7
CNN + CNN                   77.15   79.74
CNN prob + DA transition    76.70   79.69

¹ The embeddings we used are generated based on our collected web data. We compared them to other embeddings, e.g., Senna, and found the performance difference to be very small.

Acknowledgments

The authors thank Yandi Xia for preparing the Switchboard data, and Xian Qian, Antoine Raux and Benoit Dumoulin for various discussions.

References

Jeremy Ang, Yang Liu, and Elizabeth Shriberg. 2005. Automatic dialog act segmentation and classification in multiparty meetings. In Proc. of ICASSP.

Lin Chen and Barbara Di Eugenio. 2013. Multimodality and dialogue act classification in the robohelper project. In Proceedings of SIGDIAL.

Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12(Aug):2493-2537.

Alfred Dielmann and Steve Renals. 2008. Recognition of dialogue acts in multiparty meetings using a switching DBN. IEEE Transactions on Audio, Speech and Language Processing 16.

Raul Fernandez and Rosalind Picard. 2002. Dialog act classification from prosodic features using support vector machines. In Proc. of Speech Prosody.

Gang Ji and Jeff Bilmes. 2005. Dialog act tagging using graphical models. In Proc. of ICASSP.

D. Jurafsky, L. Shriberg, and D. Biasca. 1997. Switchboard SWBD-DAMSL shallow-discourse-function annotation coders manual. Technical report, University of Colorado at Boulder.
Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent convolutional neural networks for discourse compositionality.

Su Nam Kim, Lawrence Cavedon, and Timothy Baldwin. 2010. Classifying dialog acts in one-on-one live chats. In Proceedings of EMNLP.

Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proc. of EMNLP, pages 1746-1751.

Pavel Kral and Christophe Cerisara. 2014. Automatic dialogue act recognition with syntactic features. In Proceedings of LREC.

Piroska Lendvai. 2007. Token-based chunking of turn-internal dialogue act sequences.

Yang Liu. 2006. Using SVM and error-correcting codes for multiclass dialog act classification in meeting corpus. In Interspeech.

Marion Mast, Ralf Kompe, Stefan Harbeck, Andreas Kiessling, Heinrich Niemann, and Elmar Nöth. 1996. Dialog act classification with the help of prosody. In Proc. of ICSLP.

Silvia Quarteroni, Alexei V. Ivanov, and Giuseppe Riccardi. 2011. Simultaneous dialog segmentation and classification from human-human spoken conversations. In Proceedings of ICASSP.

Eugenio Ribeiro, Ricardo Ribeiro, and David Martins de Matos. 2015. The influence of context on dialogue act recognition.

Lina M. Rojas-Barahona, Milica Gasic, Nikola Mrksic, Pei-Hao Su, Stefan Ultes, Tsung-Hsien Wen, and Steve Young. 2016. Exploiting sentence and context representation in deep neural models for spoken language understanding.

Andreas Stolcke, Klaus Ries, Noah Coccaro, Elizabeth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul Taylor, Rachel Martin, Carol Van Ess-Dykema, and Marie Meteer. 2000. Dialog act modeling for automatic tagging and recognition of conversational speech. Computational Linguistics.

Anand Venkataraman, Luciana Ferrer, Andreas Stolcke, and Elizabeth Shriberg. 2003.
Training a prosody based dialog act tagger from unlabeled data. In Proc. of ICASSP.

Nick Webb, Mark Hepple, and Yorick Wilks. 2005. Dialog act classification based on intra-utterance features. In AAAI Workshop on Spoken Language Understanding.

Yucan Zhou, Qinghua Hu, Jie Liu, and Yuan Jia. 2015. Combining heterogeneous deep neural networks with conditional random fields for Chinese dialogue act recognition. Neurocomputing 168.
5674957
A phoneme clustering algorithm based on the obligatory contour principle
This paper explores a divisive hierarchical clustering algorithm based on the well-known Obligatory Contour Principle in phonology. The purpose is twofold: to see if such an algorithm could be used for unsupervised classification of phonemes or graphemes in corpora, and to investigate whether this purported universal constraint really holds for several classes of phonological distinctive features. The algorithm achieves very high accuracies in an unsupervised setting of inferring a consonant-vowel distinction, and also has a strong tendency to detect coronal phonemes in an unsupervised fashion. Remaining classes, however, do not correspond as neatly to phonological distinctive feature splits. While the results offer only mixed support for a universal Obligatory Contour Principle, the algorithm can be very useful for many NLP tasks due to its high accuracy in revealing consonant/vowel/coronal distinctions.
[ 2858601, 10986188, 18613906, 5406324, 2438936, 18714586, 11703771, 2870367 ]
A phoneme clustering algorithm based on the obligatory contour principle
Mans Hulden (Department of Linguistics, University of Colorado)
In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), Vancouver, Canada, August 3-4, 2017. Association for Computational Linguistics.

1 Introduction

It has long been noted in phonology that there seems to be a universal cross-linguistic tendency to avoid redundancy or repetition of similar speech features within a word or morpheme, especially if the phonemes are adjacent to one another. Many different names are given to variants of this general phenomenon in the linguistic literature: "identity avoidance" (Yip, 1998), "similar place avoidance" (Pozdniakov and Segerer, 2007), "obligatory contour principle" (OCP) (Leben, 1973), and "dissimilation" (Hempl, 1893). Some special cases such as haplology (avoidance of adjacent identical syllables) also fall into this general category of avoiding repetition along some dimension. The general phenomenon itself is supported by robust, although inconsistent, evidence across a number of languages. An early example is the observation of Spitta-Bey (1880) that the Arabic language tends to favor combinations of consonant segments (phonemes) in morphemes that have different places of articulation; this was later also pointed out by Greenberg (1950), and those Semitic root outliers that deviate from this pattern were analyzed in depth in Frajzyngier (1979). In Proto-Indo-European (PIE) roots, which are mostly structured CVC, stop-V-stop combinations have been found to be statistically underrepresented (Iverson and Salmons, 1992). That is, PIE seems to obey a cross-linguistic constraint that disfavors two similar consonants in a root. Another specific example comes from Japanese, where the phenomenon called Lyman's law, which effectively says that a morpheme may consist of maximally one voiced obstruent, can also be interpreted as avoidance (Itô and Mester, 1986). In light of such evidence, proposals have been put forth to define the concept of the phoneme by distributional properties alone, as opposed to the prevalent distinctive feature systems, which are largely based on articulatory features (Fischer-Jørgensen, 1952).
Elsewhere, after finding a statistical tendency to avoid similar places of articulation in word-initial and word-medial consonants, Pozdniakov and Segerer (2007) offer the argument that this phenomenon of "Similar Place Avoidance" is a statistical universal. This phenomenon is often filed under the generic heading "obligatory contour principle" (Leben, 1973; McCarthy, 1986; Yip, 1988; Odden, 1988; Meyers, 1997; Pierrehumbert, 1993; Rose, 2000; Frisch, 2004). Originally, the OCP was applied as a theoretical constraint only to tone languages, with the argument that adjacent identical tones in underlying forms were rare, and this reflected an obligatory contour principle. The usage has since spread, and it is assumed to account for segmental features other than tone. It is unclear why the phenomenon is so widespread and why it manifests itself in the diverse ways it does. Accounts range from information compression to a diachronically visible hypercorrection by listeners who misperceive the signal and make the assumption that repetition is unlikely (Ohala, 1981).

This paper explores the simplest incarnation of the idea of similarity avoidance; namely, that two adjacent segments are preferably different in some way and that this difference reveals itself globally. That is, it is not assumed that the constraint is absolute; rather, an algorithm is developed that induces a grouping of unknown phoneme symbols so as to maximize potential alternation of clusters in a sequence of symbols, i.e., a corpus. If the OCP holds for phonological or phonetic features, primarily places of articulation, such a clustering algorithm could group phonemes along the lines of distinctive features. While, as we shall see, the observations do not support the presence of a strong universal OCP effect, the top-level clusters discovered by the algorithm correspond nearly 100% to the distinction between consonants and vowels, or syllabic and non-syllabic elements if expressed in terms of features. Furthermore, a tier-based variant of the algorithm additionally groups consonants somewhat reliably into coronal/non-coronal places of articulation, and also often distinguishes front vowels from back vowels. This is true even if the algorithm is run on alphabetic representations. An evaluation of the ability to detect the C/V distinction against a data set of 503 Bible translations (Kim and Snyder, 2013) is included, improving upon earlier work that attempts to distinguish between consonants and vowels in an unsupervised fashion (Kim and Snyder, 2013; Goldsmith and Xanthos, 2009; Moler and Morrison, 1983; Sukhotin, 1962). The algorithm is also more robust than earlier algorithms that perform consonant-vowel separation and works with less data, something that is also briefly evaluated.

This paper is structured as follows: an overview of previous work is given in section 2, mostly related to the simpler task of grouping consonants and vowels without labeled data, rather than identifying distinctive features. Following that, the general algorithm is developed in section 3, after which the experiments on both phonemic and graphemic representations are reported in section 4. Four experiments are evaluated. The first uses phonemic data from 9 languages for clustering and evaluates clustering along distinctive feature lines. The second is a graphemic experiment that uses a data set of Bible translations in 503 languages where the task is to distinguish the vowels from the consonants; here, results are compared to Kim and Snyder (2013) on the same data set.
That data is slightly noisy, motivating the third experiment, which is also graphemic and evaluates consonant-vowel distinctions on vetted word lists from data taken from the ACL SIGMORPHON shared task on morphological reinflection (Cotterell et al., 2016). The ability of a tier-based variant of the algorithm to separate coronals from non-coronals is evaluated in a fourth experiment where Universal Dependencies corpora (Nivre et al., 2017) are used. The main results are presented in section 5. Given the high accuracy of the algorithm in C/V distinction with very little data, and its consequent potential applicability to decipherment tasks, a small practical example application is evaluated which analyzes a fragment of text, a manuscript of only 54 characters.

Related Work

The statistical experiments of Andrey Markov (1913) on Alexander Pushkin's poem Eugene Onegin constitute what is probably one of the earliest discoveries of the fact that significant latent structure can be found by examining the immediate co-occurrence of graphemes in text. Examining a 20,000-letter sample of the poem, Markov found a strong statistical bias that favored the alternation of consonants and vowels. A number of computational approaches have since been investigated that attempt to reveal phonological structure in corpora. Often, orthography is used as a proxy for phonology, since textual data is easier to come by. A spectral method was introduced by Moler and Morrison (1983) with the explicit purpose of distinguishing consonants from vowels by a dimensionality reduction on a segment co-occurrence matrix through singular value decomposition (SVD). An almost identical SVD-based approach was later applied to phonological data by Goldsmith and Xanthos (2009). Hidden Markov Models coupled with the EM algorithm have also been used to learn consonant-vowel distinctions (Knight et al., 2006) as well as other latent structure, such as vowel harmony (Goldsmith and Xanthos, 2009). Kim and Snyder (2013) use Bayesian inference supported by simultaneous language clustering to infer C/V distinctions in a large number of scripts simultaneously. We compare our results against a data set published in conjunction with that work. More directly related to the current work are Mayer et al. (2010) and Mayer and Rohrdantz (2013), who work with models for visualizing consonant co-occurrence in a corpus.

Sukhotin's algorithm

Sukhotin's algorithm (Sukhotin, 1962, 1973) is a well-known algorithm for separating consonants from vowels in orthographic data; good descriptions of the algorithm are given in Guy (1991) and Sassoon (1992). The idea is to start with the assumption that all segments in a corpus are consonants, then repeatedly and greedily find the segment that co-occurs most with other segments, and declare that a vowel. This is performed until a stopping condition is reached. The algorithm is known to perform surprisingly well (Foster, 1992; Goldsmith and Xanthos, 2009), although it is limited to the task it was designed for: inferring a C/V distinction (with applications to decipherment) without attempting to reveal any further structure in the segments. All the syllabic/non-syllabic distinction results in the current work are compared with the performance of Sukhotin's algorithm.
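Since Sukhotin's algorithm serves as a baseline throughout, a compact sketch may help. It follows the descriptions in Guy (1991) and Sassoon (1992), including the step of subtracting twice the co-occurrence count with each newly found vowel; word-boundary handling is simplified and all names are our own.

```python
from collections import Counter

def sukhotin(words):
    """Separate a symbol inventory into vowels and consonants."""
    pair = Counter()                      # symmetric adjacency counts
    for w in words:
        for a, b in zip(w, w[1:]):
            if a != b:                    # the matrix diagonal is zeroed
                pair[a, b] += 1
                pair[b, a] += 1
    symbols = {s for w in words for s in w}
    row_sum = {s: sum(pair[s, t] for t in symbols) for s in symbols}
    vowels = set()
    while len(vowels) < len(symbols):
        # greedily promote the symbol co-occurring most with the rest
        c = max(symbols - vowels, key=lambda s: row_sum[s])
        if row_sum[c] <= 0:               # stopping condition
            break
        vowels.add(c)
        for s in symbols - vowels:        # discount counts with new vowel
            row_sum[s] -= 2 * pair[s, c]
    return vowels, symbols - vowels
```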
General OCP-based algorithm

At the core of the new clustering algorithm is the OCP observation alluded to above, already empirically established in (Markov, 1913, 2006), that there is a systematic bias toward alternating adjacent segments along some dimension. To reveal this alternation, one can assume that there is a natural grouping of all segments into two initial sets, called 0 and 1, in such a way that the total number of 0-1 or 1-0 alternations between adjacent segments in a corpus is maximized. For example, consider a corpus of a single string abc. This can be split into two nonempty subsets in six different ways: 0 = {ab} and 1 = {c}; 0 = {a} and 1 = {bc}; 0 = {ac} and 1 = {b}; and their symmetric variants, which are produced by swapping 0 and 1. Out of these, the best assignment is 0 = {ac} and 1 = {b}, since it reflects an alternation of sets where abc → 010. The 'score' of this assignment is based on the number of adjacent alternations, in this case 2 (01 and 10). Outside of such small examples, which split perfectly into alternating sets, once this optimal division of all segments into 0 and 1 is found, there may remain some residue of adjacent segments in the same class (0-0 and 1-1). The sets 0 and 1 can then be partitioned anew into subsets 00, 01 (from 0) and 10, 11 (from 1). Again, there may be some residue, and the partitioning procedure can be applied recursively until no further splitting is possible, i.e. until all of the adjacent segments fall into different clusters in the hierarchy.

More formally, given a corpus of words w_1, ..., w_n, where each word is a sequence of symbols s_1, ..., s_m, the top-level objective function that we want to maximize can be expressed as

  Σ_w Σ_i 1[Group(s_i) ≠ Group(s_{i+1})]    (1)

where Group(s) is the set that segment s is in. Given a suggested split of all the segments in a corpus into, say, the top-level disjoint sets 0 and 1, we obviously do not need to examine the whole corpus to establish the score, but can do so by simply examining the bigram counts of the corpus. Still, finding just the top-level split of segments into 0 and 1 is computationally expensive if done by brute force, i.e., by trying all the possible assignments of segments into 0 and 1 and evaluating the score for each assignment. Since there are 2^n ways of partitioning a set of segments into two subsets (ignoring the symmetry of 0 and 1), such an approach is feasible in reasonable time only for small alphabets (< 25, roughly). To address the computational search space problem, the algorithm is implemented as a type of simulated annealing (Kirkpatrick et al., 1983; Černý, 1985) to quickly find the optimum. The algorithm for the top-level split proceeds as follows:

(1) Randomly divide the set S into S′ and S″.
(2) Draw an integer p from Uniform(1...K), where K depends on the cooling schedule.
(3) Swap p random segments between S′ and S″.
(4) If the score is higher after the swap, keep the swap; otherwise discard it. Go to (2).

The idea is to begin with an arbitrary partition of S into S′ and S″, then randomly try successively smaller and smaller random swaps of segments between the two sets according to a cooling schedule, always keeping a swap if the score improves. The cooling schedule was tested against corpora that use smaller alphabets, where the answer is known beforehand by a brute-force calculation. The cooling was made slow enough to give the correct answer in 100/100 tries on such development corpora.
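A minimal sketch of the annealing search for one binary split appears below. The shrinking swap sizes follow the cooling schedule described above; the iteration counts, and the reading of 'swap p segments' as flipping the set membership of p randomly chosen segments, are our assumptions.

```python
import random

def score(bigrams, group):
    """Objective (1): adjacent pairs whose segments lie in different
    sets, computed from corpus bigram counts."""
    return sum(n for (a, b), n in bigrams.items() if group[a] != group[b])

def anneal_split(bigrams, segments, iters_per_size=200):
    """Split a list of segments into two sets by simulated annealing."""
    group = {s: random.random() < 0.5 for s in segments}
    best = score(bigrams, group)
    for k in range(len(segments), 0, -1):      # cooling: big swaps first
        for _ in range(iters_per_size):
            flipped = random.sample(segments, random.randint(1, k))
            for s in flipped:
                group[s] = not group[s]
            new = score(bigrams, group)
            if new > best:
                best = new                     # keep an improving swap
            else:
                for s in flipped:              # otherwise revert it
                    group[s] = not group[s]
    one = [s for s in segments if group[s]]
    return one, [s for s in segments if not group[s]], best
```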
In practice, this yields an annealing schedule where early swaps (the size of K) are sometimes as large as |S|, ending with K equaling 1 for several iterations before termination. This splitting is repeated recursively to produce new sub-splits until no splitting is possible, i.e. until the score cannot improve by splitting a set into two subsets.

A tier-based variant

Many identity avoidance effects have been documented that seem to operate not under strict adjacency, but over intervening material, such as consonants and vowels, as discussed in the introduction. For example, Rose (2000) argues that OCP effects apply to adjacent consonants across intervening vowels in Semitic languages. This motivates a tier-based variant of the algorithm. In this modification, instead of repeatedly splitting sets based on a residue of adjacent segments that belong to the same set, we instead modify the corpus, removing segments after each split. Each time we split a set S into S′ and S″ based on a corpus C, we also create new corpora C′ and C″, where C′ is C with the segments in S″ removed, and C″ is C with the segments in S′ removed. Splitting then resumes recursively for S′ and S″, where S′ uses the corpus C′ and S″ the corpus C″. Figure 1 shows an example of this. Here, the initial corpus C = telaka, and the initial segment set S = {a, e, k, l, t} is split into S′ = {a, e} and S″ = {k, l, t} on a first iteration. Likewise, the corpus is modified by removing the S″ and S′ segments from C′ and C″ respectively, yielding new corpora C′ = eaa and C″ = tlk, and splitting proceeds on these subcorpora. This way, if, say, consonants and vowels operate on different tiers and get split first into top-level sets, the remaining consonants will become adjacent to each other on the next iteration, as will the vowels.

Experiments

Four experiments are evaluated. The first experiment performs a full hierarchical clustering on phonemic data in 9 typologically divergent languages. The clusters are evaluated according to the following simple criterion: counting the number of splits in the tree that correspond to a split that could be expressed through a single phonological ± feature. For example, if the top-level split in the tree produced corresponds exactly to the consonants and vowels, it is counted as a 1, since this corresponds to the partitioning that would be produced by the phonological feature [±syllabic]. If there is no way to express the split through a single distinctive feature, it is counted as a 0. A standard phonological feature set like that given in sources such as Hayes (2011) or PHOIBLE (Moran et al., 2014) is assumed.

Table 1: The data used for the phonemic clustering experiment, with sources indicated and a sample.

Language    Source                           Sample
Hawaiian                                     Po ka Pōlelo hawaiPi ka Pōlelo makuahine a ka poPe maoli . . .
Hungarian   Gervain and Erra (2012)          idZ nintS j6j dE tSEtSE hol 6 montSik6 hol v6n 6 montSi itt 6 . . .
Italian     Wikipedia + g2p                  tSitta eterna kon abitanti e il komune piu popoloso ditalia . . .
Polish      Boruta and Jastrzebska (2012)    gdýie jest bartuC gdýie jest ñe ma xodý tu a kuku ţo xovaS . . .
Spanish     Taulé et al. (2008) + g2p        un akueRdo entRe la patRonal i los sindikatos fRanTeses sobRe . . .

Both the non-tier algorithm and the tier-based algorithm are evaluated.
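To make the tier-based recursion concrete, here is a sketch of it on the telaka example above, reusing the anneal_split function from the previous sketch; the count_bigrams helper and the score-of-zero stopping test are our simplifications, not the author's exact criterion.

```python
from collections import Counter

def count_bigrams(corpus):
    """Adjacent-symbol counts over a list of words."""
    bg = Counter()
    for w in corpus:
        for a, b in zip(w, w[1:]):
            bg[a, b] += 1
    return bg

def tier_split(corpus, segments):
    """Recursively split segments; each half recurses on a copy of the
    corpus filtered to its own symbols, so same-tier segments become
    adjacent. Returns a nested-list tree of segment groups."""
    if len(segments) < 2:
        return segments
    s1, s2, gain = anneal_split(count_bigrams(corpus), segments)
    if gain == 0:                  # no alternation left to exploit
        return segments
    c1 = [''.join(ch for ch in w if ch in s1) for w in corpus]
    c2 = [''.join(ch for ch in w if ch in s2) for w in corpus]
    return [tier_split(c1, s1), tier_split(c2, s2)]

# e.g. tier_split(["telaka"], sorted(set("telaka"))) first separates
# {a, e} from {k, l, t}, then recurses on the corpora "eaa" and "tlk"
```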
In the second experiment, the capacity of the algorithm to distinguish between consonants and vowels is evaluated, this time with graphemic data. To separate consonants from vowels, the most significant dimension of alternation between adjacent segments, the algorithm is run only for the top-level split, and it is assumed that the top two subsets will represent the consonants and vowels. Here, the results are compared with those of Kim and Snyder (2013), who train a hierarchical Bayesian model to perform this distinction over all 503 languages at the same time. Sukhotin's algorithm is also used as another baseline. In the third experiment, the capacity to distinguish consonants and vowels in graphemic data in the form of word lists, i.e., where no frequency data is known, is evaluated against Sukhotin's algorithm.

Phonemic splitting

Nine languages from a diverse set of sources were used for this experiment (see Table 1). Some of the language data were already represented as phonemes (English, Hungarian, and Polish), while for the others, which have close-to-phonemic writing systems, a number of grapheme-to-phoneme (g2p) rules were created manually to convert the data into an International Phonetic Alphabet (IPA) representation. The conversion was at the level of the phoneme; actual allophones (such as /n/ being velarized to [N] before /k/ in most languages, or /d/ being pronounced [D] intervocalically in Spanish) were not modeled. Table 1 summarizes the data and gives a sample of each corpus. For this data, the clustering algorithm was run as described above, and each split was annotated with information about whether the split could be defined in terms of a single distinctive feature. Figure 2 shows the output of such a tree produced by the algorithm, with manual feature annotations. The percentage of correctly identified top-level splits (which are syllabic/non-syllabic segments) is also given, together with the corresponding results from Sukhotin's C/V-inference algorithm and Moler & Morrison's SVD-based algorithm.

C/V distinction in Bible translations

This experiment relies on word lists and frequency counts from Bible translations covering 503 distinct languages. Of these, 476 use a Latin alphabet, 26 a Cyrillic alphabet, and one uses Greek. The data covers a large number of language groups, and has been used before by Kim and Snyder (2013) to evaluate accuracy in unsupervised C/V distinction. The algorithms were evaluated in two different ways: first, on a task where each C and V set is inferred separately for each language, and second, on a task where all languages' consonants and vowels are learned at once, as if the corpus were one language, for clearer comparison with earlier work. Both token-level accuracy and type-level accuracy are given, again for comparability reasons. For this data set, Sukhotin's C/V algorithm and Moler & Morrison's algorithm were used as baselines, in addition to the results of Kim and Snyder (2013).

C/V distinction with word lists

An additional experiment evaluates the algorithm's capacity to perform C/V distinction against Sukhotin's algorithm on a data set of 10 morphologically complex languages, where lists of inflected forms were taken from the ACL SIGMORPHON shared task data (Cotterell et al., 2016). In this case, we have no knowledge of the frequency of the forms given, but need to rely only on type information. The Arabic data was transliterated into a latinate alphabet (by DIN 31635), with vowels marked. For the other languages, the native alphabet was used. Per-type accuracy is reported.
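Both type-level and token-level accuracy figures appear in the results below; the simple computation assumed is sketched here (the dictionary-based interface is our own).

```python
def cv_accuracy(pred, gold, freq=None):
    """pred/gold map each symbol to a label such as 'C' or 'V';
    freq optionally maps symbols to corpus token counts. Type accuracy
    treats every symbol equally; token accuracy weights by frequency."""
    correct = [s for s in gold if pred.get(s) == gold[s]]
    type_acc = len(correct) / len(gold)
    if freq is None:
        return type_acc
    token_acc = sum(freq.get(s, 0) for s in correct) / sum(freq.values())
    return type_acc, token_acc
```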
Results

On the first task, which uses phonemic data, consonant/vowel distinction accuracy is 100% throughout (see Table 2). Sukhotin's algorithm also performs very well in all except two languages. English, in particular, is a surprising outlier, with Sukhotin's algorithm classifying only 21.62% correctly. This is probably due to there being a proportionately large number of syllabic phonemes in English (13/37). Moler & Morrison's algorithm has less than perfect accuracy in three languages. There is great variation in the OCP algorithm's capacity to produce splits that coincide with phonological features, in both the tier-based and non-tier variants. Roughly speaking, the larger the phoneme inventory, the less likely it is for the splits to align themselves in accordance with phonological features. Also, since the tier-based variant naturally leads to more splits, its figures appear higher, since splits in lower levels of the tree, which contain few phonemes, can almost always be made along distinctive feature lines. The depth of the induced tree also correlates with the variety of syllable types permitted in the language. An extreme example of this is Hawaiian (Figure 3), which only permits V and CV syllables, yielding a very shallow tree where no consonants are split beyond the first level. English and Polish lie at the other extreme, with 37 splits each. This circumstance may perhaps be further leveraged to infer syllable types from unknown scripts.

On the C/V inference task for 503 languages, the OCP algorithm outperforms Sukhotin's algorithm and Kim and Snyder (2013) (K&S) when each language is inspected individually (see Table 3). However, for the case where we learn all distinctions at once, the OCP algorithm produces a result identical to Sukhotin's. Here the token-level accuracy also exceeds K&S, with 99.89 vs. 98.55. The already high accuracy rate of the OCP algorithm on the Bible translation data is probably in reality even higher, especially when all languages are inspected at the same time. Out of the 343 grapheme types, OCP and Sukhotin misclassify only 7, and upon closer manual inspection, it is found that only two of these are bona fide errors. Five are errors in the gold standard, all in the Cyrillic-based data (see Table 5 for an overview of the errors in the gold standard or the classifications). The first actual error, Cyrillic s, occurs in only five word types in the entire corpus, and is always surrounded by other consonants. The other error, ǒ, is more difficult to interpret: it occurs in three typologically different languages: Akoose (bss), Northern Grebo (gbo), and Peñoles Mixtec (mil).

Table 3: Results on the 503-language Bible translations on consonant-vowel distinction. Both type and token accuracy are included. The Individual column shows the macro-averaged results of running all languages individually, and the All column shows the results of running all data at once. Here, 'OCP' is the current algorithm; 'Sukhotin' is Sukhotin's algorithm; 'M&M' is the SVD method of Moler & Morrison (1983); and 'K&S' is the method given in Kim & Snyder (2013).

On the third task, where only word lists are available for grapheme classification into C/V, the OCP algorithm performs equally to Sukhotin's algorithm, except for one language (Navajo), where the OCP algorithm misclassifies one symbol less (see Table 4).
Application to text fragments: the arrow of the gods

Given that the algorithm performs very well on consonant-vowel distinctions and groups segments along distinctive features better with small alphabets, an additional experiment was performed on a small manuscript to get a glimpse of potential applications to cryptography and the decipherment of substitution ciphers. In this experiment, the writing system is known to be alphabetic (in fact Cyrillic), and the purpose is to examine the clustering induced by so little available data. The birch bark letter number 292, found in 1957 in excavations in Novgorod, Russia, is the oldest known document in a Finnic language (Karelian), stemming most likely from the early 13th century (Haavio, 1964). The document consists of only 54 symbols, written in Cyrillic. The clustering method (see Figure 4) identifies the vowels and consonants, except for the grapheme y (/u/). This is probably because the short manuscript renders the word nuoli (Latinized form) 'arrow' inconsistently in three different ways, with Cyrillic y = /u/ occurring in different places, making the segment difficult for the algorithm. The high vowels /i/ and /u/ (left) are also separated from the non-high vowels (right) /a/, /o/, and /e/ (the Cyrillic soft sign also falls in this group). Sukhotin's algorithm, which only infers the consonants and vowels, makes one more mistake than the current algorithm.

Identifying coronal segments with the tier-based variant

Although the only really robust pattern reliably discovered by the algorithm is the distinction between consonants and vowels, there are strong patterns within some of the clusters that appear to be cross-linguistically constant, specifically with the tier-based variant. The first is that, whenever a five-vowel system is present (such as in Basque, Spanish, and Italian), after the topmost split which divides up the vowels and the consonants, the first split within the vowel group is almost always {a, o, u} vs. {e, i}. A second pattern concerns coronal segments. The first split within the consonant group tends to divide the segments into coronal and non-coronal segments. This is not an absolute trend, but it happens far above chance. It also holds when running the algorithm on graphemic data, where coronals can be identified. Table 6 gives an overview of how cross-linguistically coherent the resulting first consonant splits are. The data set is a selection of 14 languages from the Universal Dependencies 2.0 data (Nivre et al., 2017).

Language      Second Consonant Group           #C
Basque        (c) l n (ñ) r s x z              21
Catalan       l n r s x z                      22
Irish         d l n r s                        13
Dutch         h l n r x z                      19
Estonian      h l n r s                        16
Finnish       h l n r s (š) (x) (z)            21
German        j l n r s x z                    21
Indonesian    l n r s z                        20
Italian       h l n r s (y)                    21
Latin         d h l n r s                      16
Latvian       č j ķ l ļ n ņ r s z ž            24
Lithuanian    j l n r s š z ž                  19
Portuguese    ç j l n (ñ) r s x                24
Slovak        c ď j l ľ n ň r s š z ž          26

Table 6: The second consonant grouping found using the tier-based OCP algorithm. This is the split below the top-level consonant/vowel split. The characters in this set largely correspond to coronal sounds. The data comes from 14 languages in the Universal Dependencies 2.0 data set. Shown in parentheses are symbols outside the native orthography of the language (most likely from named entities and borrowings found in the corpora). The rightmost column shows the total number of identified consonants in the language. In particular, l, n, and r are always in this set, while s is nearly always present.

Conclusion & future work

This paper has reported on a simple algorithm that rests on the assumption that languages tend to exhibit hierarchical alternation in adjacent phonemes. While such alternation does not always occur for any individual adjacent segment pair, on
the corpus level this alternation largely holds and serves to reveal interesting structure in phonological organization. The top cluster discovered by the algorithm is also a highly reliable indicator of syllabic vs. non-syllabic segments, i.e. consonants and vowels, and improves upon the state-of-the-art in this unsupervised task. Interestingly, Sukhotin's C/V algorithm, which has similar performance (Sukhotin, 1962), can be interpreted as a greedy approximation of the first iteration of the current algorithm. A tier-based variant of the algorithm tends to detect front/back vowel contrasts and coronal/non-coronal contrasts as well, although this is a robust trend rather than an absolute rule. Lower levels in the clustering approach are less reliable indicators of classical feature alternation, but can serve effectively to reveal aspects of syllable structure. For example, it is obvious from the Hawaiian clustering that the predominant syllable in the language is CV. One is led to conclude that the obligatory contour principle may be manifest in larger classes of segments (such as [±syllabic]), but not necessarily on the fine-grained level. Some resulting cluster splits, such as {m,p} vs. {b,f,t} (an example from Basque), are often not only inseparable by a single feature split, but are not separable by any combination of features. This lack of evidence for a strong OCP may be in line with the vigorous debate in the phonological literature on the universal role of the OCP (see e.g. McCarthy (1986); Odden (1988)). Some languages (such as Finnish and Hawaiian) yield splits that almost always coincide with a single phonological feature, whereas other languages do not. Smaller inventories typically yield more robust results, although this may be partly due to chance factors: there are more ways to split a small set according to distinctive features than a large set.

Of interest is the utility of the extracted clusters in various supervised and semi-supervised NLP applications. For example, in algorithms that learn to inflect words from annotated examples (Ahlberg et al., 2015; Cotterell et al., 2016), it is often useful to have a subdivision of the segments that alternate, since this allows one to generalize the behavior of classes of segments or graphemes, similar to the way e.g. Brown clusters (Brown et al., 1992) generalize over classes of words. Labeling segments with their position in a clustering tree and using that as a feature, for instance, is a cheap and straightforward way to inject this kind of knowledge into supervised systems designed to operate over many languages.

Figure 1: Illustration of the tier-based variant of the clustering algorithm. The left-hand side (a) shows the original corpus (the single word telaka), ...

Figure 2: Resulting Finnish clusters with manual annotation of the distinctive feature splits.

Figure 3: Hawaiian clusters reveal a predominantly CV/V syllable type since the non-syllabic branch of the tree is shallow.

Figure 4: Clustering the graphemes in the 54-symbol birch bark letter 292 manuscript (a), with transcription given in (b), and the results of OCP clustering (c). Also given are the C/V classifications produced by the Moler and Morrison (1983) algorithm (d), Sukhotin's algorithm (e), and the OCP algorithm (f), with errors marked with red boxes.
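To make the clustering-tree feature suggestion from the conclusion concrete, here is a small sketch; the nested-pair tree encoding and the function name are our own assumptions rather than anything released with the paper.

    def tree_positions(tree, prefix=""):
        # Map each segment to its path in a binary clustering tree,
        # where trees are nested 2-tuples with string leaves. The
        # resulting bit strings can be used as features much like
        # Brown-cluster prefixes are used for words.
        if isinstance(tree, str):
            return {tree: prefix}
        left, right = tree
        positions = tree_positions(left, prefix + "0")
        positions.update(tree_positions(right, prefix + "1"))
        return positions

    # Example with a toy tree (vowels on one branch, consonants on the other):
    # tree_positions((("a", "e"), ("t", ("k", "s"))))
    # -> {'a': '00', 'e': '01', 't': '10', 'k': '110', 's': '111'}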
As mentioned above, the hypothesis under examination is that if the OCP is a strong universal principle, a non-negligible number of subclusters coinciding with single phonological distinctive features should be found.

Table 1: Language data and corpus samples used in the phonemic splitting experiment.

Language    Source                           Sample
Arapaho     (Cowell and Moss Sr, 2008)       towohei hiiTetiP tohnookeP tootheiPeihoo ...
Basque      Wikipedia + g2p                  meSikoko iriburuko espetSe batean sartu zuten eta meSiko ...
English     (Brent and Cartwright, 1996)     ju want tu si D@ bUk lUk DErz @ bOI wID hIz haet ...
Finnish     (Aho, 1884) + g2p                vai oli eilen kolmekymmentae kotoapaeinkø se matti ajelee ...
Hawaiian    Wikipedia + g2p                  Po ka Pōlelo hawaiPi ka Pōlelo makuahine a ka poPe maoli ...
Hungarian   ...                              ...

Table 4: Per-type accuracy on C/V-distinction on word lists. Listed are the number of misclassifications and the accuracy per type.

Table 5: The only misclassified segments in the 503-Bible test. The column Class gives the 'incorrect' classification of the OCP algorithm. Most of these are errors in the data/gold standard. Only the Cyrillic s, which occurs four times in the data (always adjacent to other consonants), and the ǒ symbol are actually incorrect.

All code and data sets used are available at https://github.com/cvocp/cvocp

"Nun hat, wie schon längst bemerkt ist, die arabische Sprache die Neigung, solche Buchstaben in einem Worte zu vereinigen, deren Organe weit von einander entfernt liegen, wie Kehllaute und Dentale." Translation: Now, the Arabic language, as has long been noted, has the tendency to combine such letters in a word where the places of articulation are distant, such as gutturals and dentals (Spitta-Bey, 1880, p. 15).

The exact translation of the contents is a matter of dispute; the first translation, given by Yuri Yeliseyev in 1959, reads as follows (Haavio, 1964): God's arrow ten [is] your name // This arrow is God's own // [The] God directs judgment.

Acknowledgements

Thanks to Andy Cowell for help with the Arapaho and Hawaiian datasets, Mike Hammond and Miikka Silfverberg for comments on an earlier version of this paper, and Francis Tyers for sharing his knowledge of Cyrillic writing systems and comments regarding the error analysis. Thanks also to Zygmunt Frajzyngier and Sharon Rose for general OCP-related discussion and comments. Several anonymous reviewers raised helpful points. This work has been partly sponsored by DARPA I20 in the program Low Resource Languages for Emergent Incidents (LORELEI) issued by DARPA/I20 under Contract No. HR0011-15-C-0113.

References

Malin Ahlberg, Markus Forsberg, and Mans Hulden. 2015. Paradigm classification in supervised learning of morphology. In Proceedings of NAACL-HLT, pages 1024-1029, Denver, Colorado. Association for Computational Linguistics. http://www.aclweb.org/anthology/N15-1107.

Juhani Aho. 1884. Rautatie [The Railroad]. Werner-Söderström, Porvoo, Finland.
Luc Boruta and Justyna Jastrzebska. 2012. A phonemic corpus of Polish child-directed speech. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC).

Michael R. Brent and Timothy A. Cartwright. 1996. Distributional regularity and phonotactic constraints are useful for segmentation. Cognition 61(1):93-125.

Peter F. Brown, Peter V. deSouza, Robert L. Mercer, Vincent J. Della Pietra, and Jenifer C. Lai. 1992. Class-based n-gram models of natural language. Computational Linguistics 18(4):467-479.

Vladimír Černý. 1985. Thermodynamical approach to the traveling salesman problem: An efficient simulation algorithm. Journal of Optimization Theory and Applications 45(1):41-51.

Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 shared task: morphological reinflection. In Proceedings of the 2016 Meeting of SIGMORPHON, Berlin, Germany. Association for Computational Linguistics.

Andrew Cowell and Alonzo Moss Sr. 2008. The Arapaho Language. University Press of Colorado.

Eli Fischer-Jørgensen. 1952. On the definition of phoneme categories on a distributional basis. Acta Linguistica 7(1-2):8-39.

Caxton C. Foster. 1992. A comparison of vowel identification methods. Cryptologia 16(3):282-286.

Zygmunt Frajzyngier. 1979. Notes on the R1R2R2 stems in Semitic. Journal of Semitic Studies 24(1):1-12.

Stefan A. Frisch. 2004. Language processing and segmental OCP effects. In Bruce Hayes, Robert Martin Kirchner, and Donca Steriade, editors, Phonetically Based Phonology, pages 346-371. Cambridge University Press.

Judit Gervain and Ramón Guevara Erra. 2012. The statistical signature of morphosyntax: A study of Hungarian and Italian infant-directed speech. Cognition 125(2):263-287.

John Goldsmith and Aris Xanthos. 2009. Learning phonological categories. Language 85(1):4-38.

Joseph H. Greenberg. 1950. The patterning of root morphemes in Semitic. Word 6(2):162-181.

Jacques B. M. Guy. 1991. Vowel identification: an old (but good) algorithm. Cryptologia 15(3):258-262.

Martti Haavio. 1964. The oldest source of Finnish mythology: Birchbark letter no. 292. Journal of the Folklore Institute 1(1/2):45-66.

Bruce Hayes. 2011. Introductory Phonology. John Wiley & Sons.

George Hempl. 1893. Loss of r in English through dissimilation. Dialect Notes (1):279-281.

Junko Itô and Ralf-Armin Mester. 1986. The phonology of voicing in Japanese: Theoretical consequences for morphological accessibility. Linguistic Inquiry, pages 49-73.

Gregory K. Iverson and Joseph C. Salmons. 1992. The phonology of the Proto-Indo-European root structure constraints. Lingua 87(4):293-320.

Young-Bum Kim and Benjamin Snyder. 2013. Unsupervised consonant-vowel prediction over hundreds of languages. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1527-1536, Sofia, Bulgaria. Association for Computational Linguistics.

S. Kirkpatrick, C. D. Gelatt, Jr., and M. P. Vecchi. 1983. Optimization by simulated annealing. Science 220(4598):671-680.

Kevin Knight, Anish Nair, Nishit Rathod, and Kenji Yamada. 2006. Unsupervised analysis for decipherment problems. In Proceedings of the COLING/ACL, pages 499-506. Association for Computational Linguistics.

William Ronald Leben. 1973. Suprasegmental Phonology. Ph.D. thesis, Massachusetts Institute of Technology.

A. A. Markov. 1913. Primer statisticheskogo issledovaniya nad tekstom "Evgeniya Onegina", illyustriruyuschij svyaz ispytanij v cep. Izvestiya Akademii Nauk, Ser. 6(3):153-162.

A. A. Markov. 2006. An example of statistical investigation of the text "Eugene Onegin" concerning the connection of samples in chains. Science in Context 19(4):591-600.

Thomas Mayer and Christian Rohrdantz. 2013. PhonMatrix: Visualizing co-occurrence constraints of sounds. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 73-78, Sofia, Bulgaria. Association for Computational Linguistics.

Thomas Mayer, Christian Rohrdantz, Frans Plank, Peter Bak, Miriam Butt, and Daniel A. Keim. 2010. Consonant co-occurrence in stems across languages: Automatic analysis and visualization of a phonotactic constraint. In Proceedings of the 2010 Workshop on NLP and Linguistics: Finding the Common Ground, pages 70-78. Association for Computational Linguistics.

John J. McCarthy. 1986. OCP effects: Gemination and antigemination. Linguistic Inquiry 17(2):207-263.

Scott Meyers. 1997. OCP effects in optimality theory. Natural Language & Linguistic Theory 15(4):847-892.

Cleve Moler and Donald Morrison. 1983. Singular value analysis of cryptograms. American Mathematical Monthly, pages 78-87.

Steven Moran, Daniel McCloy, and Richard Wright, editors. 2014. PHOIBLE Online. Max Planck Institute for Evolutionary Anthropology, Leipzig. http://phoible.org/.

Joakim Nivre, Željko Agić, Lars Ahrenberg, Maria Jesus Aranzabe, Masayuki Asahara, Aitziber Atutxa, et al. 2017. Universal Dependencies 2.0. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics, Charles University in Prague.

David Odden. 1988. Anti antigemination and the OCP. Linguistic Inquiry 19(3):451-475.

John Ohala. 1981. The listener as a source of sound change. In Carrie S. Masek, Roberta A. Hendrick, and Mary Frances Miller, editors, Papers from the Parasession on Language and Behavior, pages 178-203. Chicago Linguistic Society.

Janet Pierrehumbert. 1993. Dissimilarity in the Arabic verbal roots. In Proceedings of NELS, volume 23, pages 367-381.

Konstantin Pozdniakov and Guillaume Segerer. 2007. Similar place avoidance: A statistical universal. Linguistic Typology 11(2):307-348.

Sharon Rose. 2000. Rethinking geminates, long-distance geminates, and the OCP. Linguistic Inquiry 31(1):85-122.

George T. Sassoon. 1992. The application of Sukhotin's algorithm to certain non-English languages. Cryptologia 16(2):165-173.

Wilhelm Spitta-Bey. 1880. Grammatik des arabischen Vulgärdialectes von Aegypten. Hinrichs, Leipzig.

Boris V. Sukhotin. 1962. Eksperimental'noe vydelenie klassov bukv s pomoshch'ju EVM. Problemy strukturnoj lingvistiki, pages 198-206.

Boris V. Sukhotin. 1973. Méthode de déchiffrage, outil de recherche en linguistique. T. A. Informations, pages 1-43.

Mariona Taulé, Maria Antònia Martí, and Marta Recasens. 2008. AnCora: Multilevel annotated corpora for Catalan and Spanish. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC).

Moira Yip. 1988. The obligatory contour principle and phonological rules: A loss of identity. Linguistic Inquiry 19(1):65-100.

Moira Yip. 1998. Identity avoidance in phonology and morphology. In Steven Lapointe, Diane Brentari, and Patrick Farrell, editors, Morphology and its Relation to Phonology and Syntax, pages 216-246. CSLI, Stanford.
6,205,777
Tokenizing, POS Tagging, Lemmatizing and Parsing UD 2.0 with UDPipe
We present an update to UDPipe 1.0 (Straka et al., 2016), a trainable pipeline which performs sentence segmentation, tokenization, POS tagging, lemmatization and dependency parsing. We provide models for all 50 languages of UD 2.0, and furthermore, the pipeline can be trained easily using data in the CoNLL-U format. For the purpose of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, the updated UDPipe 1.1 was used as one of the baseline systems, finishing as the 13th system of 33 participants. A further improved UDPipe 1.2 participated in the shared task, placing as the 8th best system, while achieving low running times and moderately sized models. The tool is available under the open-source Mozilla Public Licence (MPL) and provides bindings for C++, Python (through the ufal.udpipe PyPI package), Perl (through the UFAL::UDPipe CPAN package), Java and C#.
[ 2512012, 219307246, 246647, 14579508, 11616343, 7417943, 17954486, 13156058, 10901371 ]
Tokenizing, POS Tagging, Lemmatizing and Parsing UD 2.0 with UDPipe

Milan Straka (straka@ufal.mff.cuni.cz) and Jana Straková (strakova@ufal.mff.cuni.cz)
Institute of Formal and Applied Linguistics, Faculty of Mathematics and Physics, Charles University

Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 88-99, Vancouver, Canada, August 3-4, 2017.

Introduction

The Universal Dependencies project (Nivre et al., 2016) seeks to develop cross-linguistically consistent treebank annotation of morphology and syntax for many languages. The latest version of UD (Nivre et al., 2017a) consists of 70 dependency treebanks in 50 languages. As such, the UD project represents an excellent data source for developing multi-lingual NLP tools which perform sentence segmentation, tokenization, POS tagging, lemmatization and dependency tree parsing.

The goal of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies (CoNLL 2017 UD Shared Task) is to stimulate research in multi-lingual dependency parsers which process raw text only. The overview of the task and the results are presented in Zeman et al. (2017).

This paper describes UDPipe (Straka et al., 2016; http://ufal.mff.cuni.cz/udpipe), an open-source tool which automatically generates sentence segmentation, tokenization, POS tagging, lemmatization and dependency trees, using UD version 2 treebanks as training data. The contributions of this paper are:

• Description of the UDPipe 1.1 Baseline System, which was used to provide baseline models and preprocessed test sets for the CoNLL 2017 UD Shared Task participants. UDPipe 1.1 provided a strong baseline for the task, placing as the 13th best system (out of 33) in the official ranking. The UDPipe 1.1 Baseline System is described in Section 3.

• Description of the UDPipe 1.2 Participant System, an improved variant of UDPipe 1.1, which was used as a contestant system in the CoNLL 2017 UD Shared Task, finishing 8th in the official ranking while keeping very low software requirements. The UDPipe 1.2 Participant System is described in Section 4.

• Evaluation of a search-based oracle and several transition-based systems on UD 2.0 dependency trees (Section 5).

Related Work

There are a number of NLP pipelines available, e.g., the Natural Language Processing Toolkit (NLTK, http://nltk.org; Bird et al., 2009) or OpenNLP (https://opennlp.apache.org), to name a few.
We designed yet another one, UDPipe, with the aim of providing an extremely simple tool which can be trained easily using only a CoNLL-U file, without additional resources or feature engineering.

Deep neural networks have recently achieved remarkable results in many areas of machine learning. In NLP, end-to-end approaches were initially explored by Collobert et al. (2011). With a practical method for precomputing word embeddings (Mikolov et al., 2013) and the routine utilization of recurrent neural networks (Hochreiter and Schmidhuber, 1997; Cho et al., 2014), deep neural networks have achieved state-of-the-art results in many NLP areas like POS tagging (Ling et al., 2015), named entity recognition (Yang et al., 2016) or machine translation (Vaswani et al., 2017). The wave of neural network parsers was started recently by Chen and Manning (2014), who presented a fast and accurate transition-based parser. Many other parser models followed, employing various techniques like stack LSTMs (Dyer et al., 2015), global normalization (Andor et al., 2016), biaffine attention (Dozat and Manning, 2016) or recurrent neural network grammars (Kuncoro et al., 2016), improving the LAS score in English and Chinese dependency parsing by more than 2 points in 2016.

UDPipe 1.1 Baseline System

UDPipe 1.0 (Straka et al., 2016) is a trainable pipeline performing sentence segmentation, tokenization, POS tagging, lemmatization and dependency parsing. It is fully trainable using CoNLL-U version 1 files, and pretrained models for the UD 1.2 treebanks are provided.

For the purpose of the CoNLL 2017 UD Shared Task, we implemented a new version, UDPipe 1.1, which processes CoNLL-U version 2 files. UDPipe 1.1 was used as one of the baseline systems in the shared task. The UDPipe 1.1 Baseline System was trained and tuned in the training phase of the CoNLL 2017 UD Shared Task on the UD 2.0 training data, and the trained models and outputs were available to the participants. In this section, we describe the UDPipe 1.1 Baseline System, focusing on the differences to the previous version described in Straka et al. (2016): the tokenizer (Section 3.1), the tagger (Section 3.2), the parser (Section 3.3), the hyperparameter search support (Section 3.4), the training details (Section 3.5) and evaluation (Section 3.6).

Tokenizer

In UD and in CoNLL-U files, the text is structured on several levels: a document consists of paragraphs composed of (possibly partial) sentences, which are sequences of tokens. A token is also usually a word (the unit used in further morphological and syntactic processing), but a single token may be composed of several syntactic words (for example, the token zum consists of the words zu and dem in German). The original text can therefore be reconstructed as a concatenation of tokens with adequate spaces, but not as a concatenation of words.

Sentence Segmentation and Tokenization

Sentence segmentation and tokenization is performed jointly (as it was in UDPipe 1.0) using a single-layer bidirectional GRU network which predicts for each character whether it is the last one in a sentence, the last one in a token, or not the last one in a token. Spaces are usually not allowed in tokens, and therefore the network does not need to predict end-of-token before a space (it only learns to separate adjacent tokens, as for example in Hi! or cannot).

Multi-Word Token Splitting

In UDPipe 1.0, a case-insensitive dictionary was used to split tokens into words. This approach is beneficial if there is a fixed number of multi-word tokens in the language (which is the case for example in German).
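As an illustration of the segmentation architecture described above, the following is a minimal sketch of a character-level bidirectional GRU classifier in PyTorch. Only the dimension 24 and the three-way output come from the text; the class name, the framework and the training snippet are our own illustrative choices, not UDPipe's actual (C++) implementation.

    import torch
    import torch.nn as nn

    class SegmenterGRU(nn.Module):
        # Per-character classifier with three classes:
        # 0 = not the last character of a token,
        # 1 = last character of a token,
        # 2 = last character of a sentence.
        def __init__(self, n_chars, dim=24):
            super().__init__()
            self.embed = nn.Embedding(n_chars, dim)
            self.gru = nn.GRU(dim, dim, batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * dim, 3)

        def forward(self, char_ids):            # (batch, seq_len)
            states, _ = self.gru(self.embed(char_ids))
            return self.out(states)             # (batch, seq_len, 3) logits

    # Training minimizes per-character cross-entropy:
    #   logits = model(char_ids)
    #   loss = nn.functional.cross_entropy(logits.view(-1, 3), labels.view(-1))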
In the UDPipe 1.1 Baseline System we also employ automatically generated suffix rules: a token with a specific suffix is split, using the non-matching part of the token as the prefix of the first word, together with a fixed sequence consisting of the first word's suffix and the remaining words (e.g., in Polish we create a rule łem → ł + em). The rules are generated automatically by keeping all such rules present in the training data which do not trigger incorrectly too often. The contribution of the suffix rules is evaluated in Section 5.

Documents and Paragraphs

We use an improved sentence segmenter in the UDPipe 1.1 Baseline System. The segmenter learns sentence boundaries in the text in the standard way of UDPipe 1.0, but it omits the sentence breaks at the end of a paragraph or a document. The reason for excluding these boundaries from the training data is that the ends of paragraphs and documents are frequently recognized by layout (e.g. newspaper headlines), and if the recognizer is trained to recognize these sentence breaks, it tends to erroneously split regular sentences. Additionally, we now also mark paragraph boundaries (recognized by empty lines) and document boundaries (corresponding to the files being processed, storing file names as document ids) when running the segmenter.

Spaces in Tokens

An additional feature allowed in CoNLL-U version 2 files is the presence of spaces in tokens. If spaces in tokens are allowed, the GRU tokenizer network must be modified to predict token breaks in front of spaces. On the other hand, many UD 2.0 languages do not allow spaces in tokens (and in such languages a space in a token might confuse the following systems in the pipeline); therefore, it is configurable whether spaces in tokens are allowed, with the default being to allow spaces in tokens if there is any token with spaces in the training data.

Precise Reconstruction of Spaces

Unfortunately, neither CoNLL-U version 1 nor version 2 provides a standardized way of storing inter-token spaces which would allow reconstructing the original plain text. Therefore, the UDPipe 1.1 Baseline System supports several UDPipe-specific MISC fields that are used for this purpose.

CoNLL-U defines the SpaceAfter=No MISC feature, which denotes that a given token is not followed by a space. We extend this scheme in a compatible way by introducing SpacesAfter=spaces and SpacesBefore=spaces fields. These fields store the spaces following and preceding this token, with SpacesBefore by default empty and SpacesAfter by default empty or one space depending on SpaceAfter=No presence. Therefore, these fields are not needed if tokens are separated by no space or a single space. The spaces are encoded by means of a C-like escaping mechanism, with escape sequences \s, \t, \r, \n, \p, \\ for the space, tab, CR, LF, | and \ characters, respectively.

If spaces in tokens are allowed, these spaces cannot be represented faithfully in the FORM field, which disallows tabs and newline characters. Therefore, UDPipe utilizes an additional MISC field SpacesInToken=token with spaces representing the token with original spaces. Once again, with the default value being the value of the FORM field, the field is needed only if the token spaces cannot be represented in the FORM field.

All described MISC fields are generated automatically by the UDPipe 1.1 Baseline System tokenizer, with SpacesBefore used only at the beginning of a sentence. Furthermore, we also provide an optional way of storing the document-level character offsets of all tokens, using the TokenOffset MISC field.
The values of this field employ a Python-like start:end format.

Detokenization

To train the tokenizer, the original plain texts of the CoNLL-U files are required. These plain texts can be reconstructed using the SpaceAfter=No feature. However, very few UD version 1 corpora contain this information. Therefore, UDPipe 1.0 offers a way of generating these features using different raw text in the concerned language (Straka et al., 2016). Fortunately, most UD 2.0 treebanks do include the SpaceAfter=No feature. We perform detokenization only for Danish, Finnish-FTB and Slovenian-SST.

Inference

When employing the segmenter and tokenizer GRU network during inference, it is important to normalize the spaces in the given text. The reason is that during training, tokens were either adjacent or separated by a single space, so we need to modify the network input during inference accordingly. During inference, we precompute as many network operations on character embeddings as possible, similarly to Devlin et al. (2014) (to be specific, we cache 6 matrix products for every character embedding in each GRU). Consequently, the inference is almost twice as fast.

Tagger

The tagger utilized by the UDPipe 1.1 Baseline System is nearly identical to the previous version in UDPipe 1.0. A guesser generates several (UPOS, XPOS, FEATS) triplets for each word according to its last four characters, and an averaged perceptron tagger with a fixed set of features disambiguates the generated tags (Straka et al., 2016; Straková et al., 2014).

The lemmatizer is analogous. A guesser produces (lemma rule, UPOS) pairs, where the lemma rule generates a lemma from a word by stripping some prefix and suffix and prepending and appending a new prefix and suffix. To generate correct lemma rules, the guesser generates the results not only according to the last four characters of a word, but also using the word prefix. Again, the disambiguation is performed by an averaged perceptron tagger. We prefer to perform lemmatization and POS tagging separately (not as a joint task), because we found that the utilization of two different guessers and two different feature sets improves the performance of our system (Straka et al., 2016).

The only change in the UDPipe 1.1 Baseline System is the possibility to store lemmas not only as lemma rules, i.e., relatively, but also as "absolute" lemmas. This change was required by the fact that some languages such as Persian contain a lot of empty lemmas, which are difficult to encode using relative lemma rules, and because the Latin-PROIEL treebank uses the greek.expression lemma for all Greek forms.

Dependency Parsing

UDPipe 1.0 utilizes a fast transition-based neural dependency parser. The parser is based on a simple neural network with just one hidden layer and without any recurrent connections, using locally-normalized scores. The parser offers several transition systems: a projective arc-standard system (Nivre, 2008), a partially non-projective link2 system (Gómez-Rodríguez et al., 2014) and a fully non-projective swap system (Nivre, 2009). Several transition oracles are implemented: static oracles, a dynamic oracle for the arc-standard system (Goldberg et al., 2014) and a search-based oracle (Straka et al., 2015). A detailed description of the parser architecture, transition systems and oracles can be found in Straka et al. (2016) and Straka et al. (2015). The parser makes use of FORM, UPOS, FEATS and DEPREL embeddings.
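As a reminder of how the projective arc-standard system operates, here is a minimal sketch; the callback interface stands in for the neural network's action scoring and is an assumed abstraction, not UDPipe's actual API.

    def arc_standard(n_words, choose):
        # Parse words 1..n_words with the arc-standard transition system.
        # choose(stack, buffer) returns "shift", "left" or "right"; in
        # UDPipe this decision would come from the neural network scores.
        stack, buffer, arcs = [], list(range(1, n_words + 1)), []
        while buffer or len(stack) > 1:
            action = choose(stack, buffer)
            if action == "shift" and buffer:
                stack.append(buffer.pop(0))
            elif action == "left" and len(stack) >= 2:
                dependent = stack.pop(-2)           # top becomes the head
                arcs.append((stack[-1], dependent))
            elif action == "right" and len(stack) >= 2:
                dependent = stack.pop()             # second-from-top becomes the head
                arcs.append((stack[-1], dependent))
            else:
                break                               # invalid action: stop
        if stack:
            arcs.append((0, stack[-1]))             # attach the last word to the artificial root 0
        return arcs                                 # list of (head, dependent) pairs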
The form embeddings are precomputed with word2vec using the training data; the other embeddings are initialized randomly, and all embeddings are updated during training. We again precompute as many network operations as possible for the input embeddings. However, to keep memory requirements and loading times reasonable, we do so only for the 1000 most frequent embeddings of every type.

Because the CoNLL 2017 UD Shared Task did not allow sentences with multiple roots, we modified all the transition systems in UDPipe 1.1 to generate only one root node and to use the root dependency relation only for this node.

Hyperparameter Search Support

All three described components employ several hyperparameters which can improve performance if tuned correctly. To ease the process, UDPipe offers random hyperparameter search for all the components: the run=number option during training generates pseudorandom but deterministic values for predefined hyperparameters. The hyperparameters are supposed to be tuned for every component individually and then merged.

Training the UDPipe 1.1 Baseline System

When developing the UDPipe 1.1 Baseline System in the training phase of the CoNLL 2017 UD Shared Task, the testing data were not yet available to the participants. Therefore, a new data split was created from the available training and development data: the performance of the models was evaluated on the development data, and part of the training data was put aside and used to tune the hyperparameters. This baseline-model split of the UD 2.0 data is provided together with the baseline models by Straka (2017). The following subsections describe the details of training the UDPipe 1.1 Baseline System.

Tokenizer

The segmenter and tokenizer network employs character embeddings and GRU cells of dimension 24. The network was trained using dropout both before and after the recurrent units, using the Adam optimization algorithm (Kingma and Ba, 2014). Suitable batch size, dropout probability, learning rate and number of training epochs were tuned on the tune set.

Tagger

The tagger and the lemmatizer do not use any hyperparameters which require tuning. The guesser hyperparameters were tuned on the tune set.

Parser

The parser network employs form embeddings of dimension 50, and UPOS, FEATS and DEPREL embeddings of dimension 20. The hidden layer has dimension 200, a batch consists of 10 words, and the network was trained for 10 iterations. The suitable transition system, oracle, learning rate and L2 regularization were chosen to maximize the accuracy on the tune set.

Evaluation of the UDPipe 1.1 Baseline System

There are three testing collections in the CoNLL 2017 UD Shared Task: the UD 2.0 test data, the new parallel treebank (PUD) sets, and four surprise languages. The UDPipe 1.1 Baseline System models were completely trained, released and "frozen" on the UD 2.0 training and development data with a new split (see the previous Section 3.5) already in the training phase of the CoNLL 2017 UD Shared Task, unlike the participant systems, which could use the full training data for training and the development data for tuning. We used the UDPipe 1.1 Baseline System models for the evaluation of the completely new parallel treebank (PUD) sets and the completely new surprise languages in the following way: for the new parallel treebank sets, we utilized the "main" treebank for each language (e.g., for Finnish, fi instead of fi_ftb).
This arbitrary decision was a lucky one: after the shared task evaluation, the performance on the parallel treebanks was shown to be significantly worse if treebanks other than the "main" one were used (even if they were larger or provided higher LAS on their own test sets). The reason seems to be the inconsistencies among the treebanks of the same language; the Universal Dependencies are not yet as universal as everyone would like.

To parse the surprise languages, we employed the baseline model which resulted in the highest LAS F1-score on the surprise language sample data, resulting in Finnish FTB, Polish, Finnish FTB and Slovak models for the surprise languages Buryat, Kurmanji, North Sámi and Upper Sorbian, respectively. Naturally, most words of a surprise language are not recognized by a baseline model for a different language. Conveniently, the UPOS tags and FEATS are shared across languages, allowing the baseline model to operate similarly to a delexicalized parser.

UDPipe 1.2 Participant System

We further updated the UDPipe 1.1 Baseline System to participate in the CoNLL 2017 UD Shared Task with an improved UDPipe 1.2 Participant System. As participants of the shared task, we trained the system using the whole training data and searched for hyperparameters using the development data (instead of using the baseline-model split described in Section 3.5). Although the data size increase is not exactly a change in the system itself, it improves performance, especially for smaller treebanks.

Hyperparameter Changes

While tokenization and segmentation are straightforward in some languages, they are quite complex in others (notably in Japanese and Chinese, which do not use spaces for word separation, or in Vietnamese, in which many tokens contain spaces). In order to improve the performance on these languages, we increased the embedding dimension and the GRU cell dimension in the tokenizer from 24 to 64.

We increased the form embedding dimension in the parser from 50 to 64 (larger dimensions showed no further improvements on the development set) and also trained the parser for 20 iterations over the training data instead of 10. Furthermore, instead of using a beam of size 5 during parsing as in the UDPipe 1.1 Baseline System, we tuned the beam size individually for each treebank, choosing 5, 10, 15 or 20 according to the resulting LAS on the development set.

Merging Treebanks of the Same Language

For several languages, there are multiple treebanks available in the UD 2.0 collection. Ideally, one would merge all training data of all treebanks of a given language. However, according to our preliminary experiments, the annotation is not perfectly consistent even across treebanks of the same language. Still, additional training data, albeit imperfect, could benefit small treebanks. We therefore attempt to exploit these multiple treebanks by enriching each treebank's training data with training data from other treebanks of the same language. Given a treebank for which other treebanks of the same language exist, we evaluate the performance of several such expansions and choose the best according to the LAS score on the development data of the treebank in question. We extend the original training data by adding random sentences from the additional treebanks of the same language, considering subsets of 1/4, 1/2, 1 and 2 times the size of the original treebank.

Joint Sentence Segmentation and Parsing

Some treebanks are very difficult to segment into sentences due to missing punctuation, which harms the parser performance.
We segment the three smallest treebanks of this kind (namely Gothic, Latin-PROIEL and Slovenian-SST) jointly with the parser, by choosing the sentence segmentation which maximizes the likelihood of the resulting parse trees.

In order to determine the segmentation with maximum parsing likelihood, we evaluate every possible segmentation with sentences up to a given maximum length L. Because the likelihoods of parse trees are independent, we can utilize dynamic programming and find the best segmentation in polynomial time by parsing sentences of lengths 1 to L at every location in the original text. The procedure therefore has the same complexity as parsing a text which is circa L^2/2 times longer than the original one.

Additionally, we incorporate the segmentation suggested by the tokenizer into the likelihood of the parse trees: we multiply the tree likelihood by a fixed probability for each sentence boundary different from the one returned by the tokenizer.

However, if a transition-based parser is used, the optimum solution of the algorithm described so far would probably be to segment the text into one-token sentences, due to the fact that for a single word there is only one possible sequence of transitions (making the word a root node), which therefore has probability one. Consequently, we introduce a third hyperparameter, which is an additional "cost" for every sentence. We tuned the three described hyperparameters for every treebank independently to maximize the LAS score on the development set. The chosen hyperparameter values are shown in Table 1.

We expect graphical parsing models to benefit even more from this kind of joint segmentation: for every word, one can compute the probability distribution of attaching it as a dependent to all words within a distance of L (including the word itself, which represents the word being a root node). Then, the likelihood of a single-word sentence would not be one, but would take into account the possibility of attaching the word as a dependent to every nearby word.

Experiments and Results

The official CoNLL 2017 UD Shared Task evaluation was performed using the TIRA platform (Potthast et al., 2014), which provided virtual machines for every participant's systems. During test data evaluation, the machines were disconnected from the internet and reset after the evaluation finished; this way, the entire test sets were kept private even during the evaluation.

In addition to the official results, we also report results of supplementary experiments. These were evaluated after the shared task, using the released test data (Nivre et al., 2017b). All results are produced using the official evaluation script. Because only plain text (and not gold tokenization) is used as input, all results are in fact F1-scores and always take tokenization performance into account.

The complete UDPipe 1.2 Participant System scores are shown in Table 2. We also include the LAS F1-score of the UDPipe 1.1 Baseline System for reference. Note that due to time constraints, some submitted UDPipe 1.2 Participant System models did not generate any XPOS and lemmas. In these cases, we show XPOS and lemmatization results using post-competition models and typeset them in italic. In order to make the extensive results more visual, we show the relative difference to the baseline LAS score using grey bars (on a scale that ignores 3 outliers).

Treebank        GRU-based segmentation, then parsing    Joint segmentation and parsing
                Sents   UAS     LAS                     Sents   UAS     LAS
Gothic          32.46   69.04   62.23                   24.12   69.26   62.80
Latin-PROIEL    30.37   66.11   60.63                   19.56   66.45   61.55
Slovenian-SST   17.76   57.93   51.95                   13.13   59.26   53.94

Table 5: Joint segmentation and parsing in the UDPipe 1.2 Participant System, optimized to maximize parsing likelihood, in comparison with sequential segmentation and parsing.
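The joint segmentation described above can be sketched as the following dynamic program. The function and its scoring interface (parse_logprob, boundary_logprob, sentence_cost) are our own abstraction of the three hyperparameters, not UDPipe's actual API.

    import math

    def best_segmentation(n_words, parse_logprob, boundary_logprob,
                          sentence_cost, L):
        # parse_logprob(i, j): parser log-likelihood of words[i:j] as one
        #   sentence (with j - i <= L).
        # boundary_logprob(j): log bonus/penalty for a sentence break at
        #   position j, depending on the tokenizer's suggestion.
        # sentence_cost: the per-sentence penalty discussed above.
        best = [-math.inf] * (n_words + 1)
        back = [0] * (n_words + 1)
        best[0] = 0.0
        for j in range(1, n_words + 1):
            for i in range(max(0, j - L), j):
                score = (best[i] + parse_logprob(i, j)
                         + boundary_logprob(j) - sentence_cost)
                if score > best[j]:
                    best[j], back[j] = score, i
        # Recover the sentence boundaries from the back pointers.
        bounds, j = [], n_words
        while j > 0:
            bounds.append(j)
            j = back[j]
        return list(reversed(bounds))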
We use this visualization also in later tables, always showing the relative difference to the first occurrence of the metric in question.

The effect of enlarging the training data using other treebanks of the same language (Section 4.2) is evaluated in Table 3. We include only those treebanks in which the enlarged training data result in a better LAS score, and compare the performance to cases in which only the original training data are used.

The impact of tokenizer dimension 64 compared to dimension 24 can be found in Table 4. We also include the effect of not using the suffix rules for multi-word token splitting, and of not using multi-word token splitting at all. As expected, for many languages the dimension 64 does not change the results, but it yields superior performance for languages with either difficult tokenization or difficult sentence segmentation.

The improvement resulting from joint sentence segmentation and parsing is evaluated in Table 5. While the LAS and UAS F1-scores of the joint approach improve, the sentence segmentation F1-score deteriorates significantly.

The overall effect of the search-based oracle with various transition systems on parsing accuracy is summarized in Table 6. The search-based oracle improves results in all cases, but the increase is only slight if a dynamic oracle is also used. Note however that dynamic oracles for non-projective systems are usually either very inefficient (for link2, only an O(n^8) dynamic oracle is proposed in Gómez-Rodríguez et al. (2014)) or not known (as is the case for the swap system). Furthermore, if only a static oracle is used, partially or fully non-projective systems yield better overall performance than a projective one. Yet, a dynamic oracle improves the performance of the projective system to the extent that it yields better results (which are further improved by also utilizing a search-based oracle).

The influence of beam size on UAS and LAS scores is analyzed in Table 7. According to the results, tuning the beam size for every treebank independently is worse than always using a large beam size.

Finally, the model size and runtime performance of the individual UDPipe components are outlined in Table 8. The median complete model size is circa 13MB, and the speed of full processing (tokenization, tagging and parsing with beam size 5) is approximately 1700 words per second on a single core of an Intel Xeon E5-2630 2.4GHz processor.

Table 2: Full results of the UDPipe 1.2 Participant System and the LAS F1-score of the UDPipe 1.1 Baseline System for reference. The results in italic are not part of the official results and were generated using post-competition models due to time constraints.

Table 3: The effect of additional training data from other treebanks of the same language in the UDPipe 1.2 Participant System.

Table 7: UDPipe 1.2 Participant System parsing scores with various beam sizes.
Table 4: Impact of tokenizer dimension 64 versus 24, of no suffix rules for multi-word token splitting, and of no multi-word token splitting at all in the UDPipe 1.2 Participant System.

Transition system and oracle                 No search-based oracle    Search-based oracle
                                             UAS     LAS               UAS     LAS
Arc standard system with static oracle       74.29   68.27             74.80   68.87
Arc standard system with dynamic oracle      75.31   69.36             75.40   69.51
Swap system with static lazy oracle          74.73   68.76             75.16   69.27
Link2 system with static oracle              74.79   68.76             75.21   69.29
Any system, static oracle                    74.72   68.71             75.21   69.31
Any system, any oracle                       75.27   69.31             75.38   69.52

Table 6: The overall effect of the search-based oracle on various transition systems.

Model configuration     Model size [MB]      Model speed [kwords/s]
Tokenizer dim 24        0.04 (0.03-0.15)     27.7 (20-37)
Tokenizer dim 64        0.20 (0.19-0.31)     6.0 (4.9-8.6)
Tagger&lemmatizer       9.4 (2.3-24.8)       6.5 (2.1-14)
Parser beam size 1      3.2 (1.9-6.9)        14.9 (12-19)
Parser beam size 5                           2.7 (2.2-3.6)
Complete model          13.2 (4.4-31.9)      1.7 (1.2-2.3)

Table 8: UDPipe 1.2 Participant System model size and runtime performance, displayed as a median over all the treebanks, together with the 5th and 95th percentiles. The complete model consists of a tokenizer with character embedding and GRU cell dimension 64, a tagger, a lemmatizer and a parser with beam size 5.

Conclusions and Future Work

We described our contributions to the CoNLL 2017 UD Shared Task: the UDPipe 1.1 Baseline System and the UDPipe 1.2 Participant System. Both these systems and the pretrained models are available at http://ufal.mff.cuni.cz/udpipe under the open-source Mozilla Public Licence (MPL). Binary tools as well as bindings for C++, Python, Perl, Java and C# are provided. As our future work, we consider using deeper models in UDPipe for tokenizers, POS taggers and especially for the parser.

Acknowledgments

This work has been partially supported and has been using language resources and tools developed ...

References

Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In Association for Computational Linguistics. http://arxiv.org/abs/1603.06042.

Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python. O'Reilly Media, Inc., 1st edition.

Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 740-750, Doha, Qatar. Association for Computational Linguistics. http://www.aclweb.org/anthology/D14-1082.

KyungHyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches.
CoRR abs/1409.1259. http://arxiv.org/abs/1409.1259. Natural language processing (almost) from scratch. Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, Pavel Kuksa, The Journal of Machine Learning Research. 12Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research 12:2493-2537. Fast and robust neural network joint models for statistical machine translation. Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard M Schwartz, John Makhoul, Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014. the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014Baltimore, MD, USALong Papers1Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard M. Schwartz, and John Makhoul. 2014. Fast and robust neural net- work joint models for statistical machine transla- tion. In Proceedings of the 52nd Annual Meet- ing of the Association for Computational Linguis- tics, ACL 2014, June 22-27, 2014, Baltimore, MD, USA, Volume 1: Long Papers. pages 1370-1380. http://aclweb.org/anthology/P/P14/P14-1129.pdf. Deep biaffine attention for neural dependency parsing. Timothy Dozat, Christopher D Manning, CoRR abs/1611.01734Timothy Dozat and Christopher D. Manning. 2016. Deep biaffine attention for neural de- pendency parsing. CoRR abs/1611.01734. Transitionbased dependency parsing with stack long shortterm memory. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, Noah A Smith, Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language ProcessingBeijing, ChinaLong Papers). Association for Computational LinguisticsChris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition- based dependency parsing with stack long short- term memory. In Proceedings of the 53rd An- nual Meeting of the Association for Computa- tional Linguistics and the 7th International Joint Conference on Natural Language Processing (Vol- ume 1: Long Papers). Association for Computa- tional Linguistics, Beijing, China, pages 334-343. http://www.aclweb.org/anthology/P15-1033. A tabular method for dynamic oracles in transition-based parsing. Yoav Goldberg, Francesco Sartorio, Giorgio Satta, TACL. 2Yoav Goldberg, Francesco Sartorio, and Giorgio Satta. 2014. A tabular method for dynamic oracles in transition-based parsing. TACL 2:119-130. A polynomial-time dynamic oracle for non-projective dependency parsing. Carlos Gómez-Rodríguez, Francesco Sartorio, Giorgio Satta, Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics. the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational LinguisticsDoha, QatarCarlos Gómez-Rodríguez, Francesco Sartorio, and Giorgio Satta. 2014. A polynomial-time dy- namic oracle for non-projective dependency pars- ing. In Proceedings of the 2014 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP). Association for Computa- tional Linguistics, Doha, Qatar, pages 917-927. http://www.aclweb.org/anthology/D14-1099. Long short-term memory. 
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8):1735-1780. https://doi.org/10.1162/neco.1997.9.8.1735.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR abs/1412.6980. http://arxiv.org/abs/1412.6980.
Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, Graham Neubig, and Noah A. Smith. 2016. What do recurrent neural network grammars learn about syntax? CoRR abs/1611.05774. http://arxiv.org/abs/1611.05774.
Wang Ling, Tiago Luís, Luís Marujo, Ramón Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan W. Black, and Isabel Trancoso. 2015. Finding function in form: Compositional character models for open vocabulary word representation. CoRR abs/1508.02096. http://arxiv.org/abs/1508.02096.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013, December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 3111-3119.
Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Computational Linguistics 34(4):513-553. https://doi.org/10.1162/coli.07-056-R1-07-027.
Joakim Nivre. 2009. Non-projective dependency parsing in expected linear time. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1.
Joakim Nivre, Željko Agić, Lars Ahrenberg, et al. 2017a. Universal Dependencies 2.0. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics, Charles University, Prague.
http://hdl.handle.net/11234/1-1983.
Joakim Nivre, Željko Agić, Lars Ahrenberg, et al. 2017b. Universal Dependencies 2.0 - CoNLL 2017 shared task development and test data. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics, Charles University. http://hdl.handle.net/11234/1-2184.
Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajič, Christopher Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal Dependencies v1: A multilingual treebank collection. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016), European Language Resources Association, Portorož, Slovenia, pages 1659-1666.
Martin Potthast, Tim Gollub, Francisco Rangel, Paolo Rosso, Efstathios Stamatatos, and Benno Stein. 2014. Improving the reproducibility of PAN's shared tasks: Plagiarism detection, author identification, and author profiling. In Evangelos Kanoulas, Mihai Lupu, Paul Clough, Mark Sanderson, Mark Hall, Allan Hanbury, and Elaine Toms, editors, Information Access Evaluation meets Multilinguality, Multimodality, and Visualization. 5th International Conference of the CLEF Initiative (CLEF 14), Springer, Berlin Heidelberg New York, pages 268-299. https://doi.org/10.1007/978-3-319-11382-1_22.
Milan Straka. 2017. CoNLL 2017 shared task - UDPipe baseline models and supplementary materials. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics, Charles University. http://hdl.handle.net/11234/1-1990.
Milan Straka, Jan Hajič, and Jana Straková. 2016. UDPipe: trainable pipeline for processing CoNLL-U files performing tokenization, morphological analysis, POS tagging and parsing. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016), European Language Resources Association, Portorož, Slovenia.
Milan Straka, Jan Hajič, Jana Straková, and Jan Hajič jr. 2015. Parsing universal dependency treebanks using neural networks and search-based oracle. In Proceedings of the Fourteenth International Workshop on Treebanks and Linguistic Theories (TLT 14).
Jana Straková, Milan Straka, and Jan Hajič. 2014. Open-source tools for morphology, lemmatization, POS tagging and named entity recognition. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, Baltimore, Maryland, pages 13-18. http://www.aclweb.org/anthology/P14-5003.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. CoRR abs/1706.03762.
Zhilin Yang, Ruslan Salakhutdinov, and William W. Cohen. 2016. Multi-task cross-lingual sequence tagging from scratch. CoRR abs/1603.06270.
Daniel Zeman, Martin Popel, Milan Straka, Jan Hajič, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, Francis Tyers, Elena Badmaeva, Memduh Gökırmak, Anna Nedoluzhko, Silvie Cinková, Jan Hajič jr., Jaroslava Hlaváčová, Václava Kettnerová, Zdeňka Urešová, Jenna Kanerva, Stina Ojala, Anna Missilä, Christopher Manning, Sebastian Schuster, Siva Reddy, Dima Taji, Nizar Habash, Herman Leung, Marie-Catherine de Marneffe, Manuela Sanguinetti, Maria Simi, Hiroshi Kanayama, Valeria de Paiva, Kira Droganova, Hěctor Martínez Alonso, Hans Uszkoreit, Vivien Macketanz, Aljoscha Burchardt, Kim Harris, Katrin Marheinecke, Georg Rehm, Tolga Kayadelen, Mohammed Attia, Ali Elkahky, Zhuoran Yu, Emily Pitler, Saran Lertpradit, Michael Mandl, Jesse Kirchner, Hector Fernandez Alcalde, Jana Strnadova, Esha Banerjee, Ruli Manurung, Antonio Stella, Atsuko Shimada, Sookyoung Kwak, Gustavo Mendonça, Tatiana Lando, Rattima Nitisaroj, and Josie Li. 2017. CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. Association for Computational Linguistics.
43,481,313
La longueur des tours de parole comme critère de sélection de conversations dans un centre d'appels (Turn-taking length as a criterion to select call center conversations)
ABSTRACT: Turn-taking length as a criterion to select call center conversations. This article focuses on telephone conversations collected in an EDF call center, automatically segmented into "speech turns" and automatically transcribed. It shows a relationship between the length of the turns and their content, regarding both the vocabulary they contain and the feelings they convey. After showing that there is an interest in studying these long turns, the article analyzes their content and lists some examples around the notions of argumentation and complaint. It shows that the length of turns can be a useful criterion for selecting conversations. KEYWORDS: Call Center, Conversation, Turn Taking, Automatic Speech Recognition.
[ 5806206 ]
La longueur des tours de parole comme critère de sélection de conversations dans un centre d'appels. Actes de la conférence conjointe JEP-TALN-RECITAL 2012, volume 2: TALN, Grenoble. © TALN 2012. Keywords: Call Center, Conversation, Turn Taking, Automatic Speech Recognition.

Introduction

With more than 30 million customers and several thousand advisors on the phone, call centers are an important link in Customer Relationship Management for EDF and are continuously monitored, with a focus on the "professionalization of advisors", which consists in improving their professional practice so as to answer customers ever better. Since this improvement relies on qualitative analyses, only a small percentage of calls can be selected for listening, hence the importance of the selection criteria. It is in this spirit that EDF R&D took part in the Infom@gic/Callsurf and Voxfactory projects. The Callsurf project (Garnier et al., 2008; Bozzi et al., 2009) consisted in recording and automatically transcribing the conversations between customers and advisors, and then analyzing them. The Voxfactory project adds the automatic detection of the emotion conveyed, from the text (Cailliau and Cavet, 2010) and from the signal (Devillers et al., 2010). This article focuses on the notion of "speech turn" (Sacks et al., 1974) and on turn length measured in seconds, a feature that does not yet seem to have been studied, unlike sentence length in number of words in more conventional textual data, as in the study of 17th-century theater by (Labbé and Labbé, 2010). The remainder of the article examines the relationship between the length of speech turns and the vocabulary that composes them, as well as the relationship between the length of these turns and the feelings they convey.
It shows that as turn length increases, the information found seems more emotionally loaded and the vocabulary used tends to become more relevant from a business point of view. Section 2 presents the notion of speech turn and Section 3 presents the corpus. Section 4 examines the relationship between turn length and vocabulary, while Section 5 describes the relationship between length and sentiment. The last section analyzes the content of long turns and lists a few examples around the notions of argumentation and complaint.

2 The notion of speech turn

The notion of "speech turn" (tour de parole, TDP) corresponds to a speaker taking the floor, and refers to the time during which he or she keeps it. The ordered sequence of turns ultimately constitutes a conversation (Vincent, 2002). The speech turn therefore seems to be a fairly simple notion, but although widely used in "conversation analysis", it remains subject to interpretation and many questions about it persist. Various studies still ask "What really is a speech turn?" (Laforest, 2011). In Figure 1, example (1) seems to contain three turns, but the agent's intervention is merely a "back-channel" signaling attention to the customer. In conversation analysis, these three turns could be grouped into one and regarded as a single unit. In example (2), the customer struggles to find the name of a business term, which the agent supplies. Here too, this can be considered a single speech turn, co-constructed by the two speakers.

(1) Client: « … oui, je vous appelle… » ["… yes, I'm calling you…"]
Agent: « oui » ["yes"]
Client: « … pour un problème… » ["… about a problem…"]
(2) Client: « … et quand je vais la recevoir, la … » ["… and when will I receive it, the…"]
Agent: « la facture rectificative ? » ["the corrected invoice?"]

FIGURE 1 - Two examples of speech turns

Without wishing to evade the difficulty of precisely defining the "speech turn", we set this question aside somewhat in this study, insofar as we work with conversations automatically split into turns by a segmenter and transcribed by an automatic speech recognition process (Adda et al., 2011). In the end, the segmentation into turns is imperfect but reflects the reality of the conversations (repetitions, disfluencies, overlapping speech, etc.) as well as the recording conditions (both voices on a single channel, background noise, mobile phones, etc.). Several reasons can explain the variation in turn length: a customer may monopolize the floor to express a problem, a complaint, dissatisfaction, etc., but an advisor may also monopolize the floor and take time to answer the customer (because a problem may be complicated to solve, for example). Finally, a discussion can also contain "tight" or "tense" passages (irritation, emotions, etc.), which can prevent the segmenter from detecting the change of speaker. From one point of view this can be considered a weakness of the system, but from another it is also a marker that something particular is happening, and therefore an interesting phenomenon to study.

3 Corpus description

For this study, a corpus of telephone conversations was recorded in the EDF Bleu Ciel call center in Aix-en-Provence. The recordings took place between January and February 2010, with about fifteen volunteer advisors.
Like most recorders on the market, the conversation recorder used is single-channel, which means that the two speech signals of the customer and the advisor overlap when they speak at the same time. Once recorded, the calls undergo a series of processing steps. First comes the "segmenter", whose purpose is to split the signal into segments that ideally correspond to the speech turns of the customer and the advisor. The turns are then transcribed by an automatic transcription process (with a word error rate of about 30%). In total, the corpus consists of 8,551 conversations, made up of 800,596 speech turns. The average duration of a turn is 3.7 s; the longest lasts more than 2 min. The distribution of the number of turns as a function of their duration is shown in Figure 2 (linear scale on the left, logarithmic on the right).

FIGURE 2 - Distribution of speech turns as a function of their duration

These curves show that the vast majority of turns last less than 10 s and that their number drops sharply with duration. Nevertheless, there are still long, even very long, turns. Turns longer than 20 s represent "only" 1.2% in number but 9% in duration; for a threshold of 40 s, we find 0.12% in number and 1.7% in duration. At the conversation level, 3,904 conversations (45%) contain at least one turn longer than 20 s, and 674 conversations (7.9%) contain at least one turn longer than 40 s. In short, long turns are infrequent compared to the full set of turns, but not negligible when their total duration is taken into account.

For unigrams, we build the list of all the words in the corpus except those belonging to a "stop list", such as "le", "la", "les", etc. For extended bigrams, we build the list of all two-word sequences in which neither word belongs to the stop list, and of all three-word sequences with a preposition in the middle and whose two end words are absent from the stop list. We thus detect bigrams such as "heures pleines" (peak hours), "relevé de compteur" (meter reading), "pompe à chaleur" (heat pump), etc. This graph shows that there is indeed a relationship between turn length and vocabulary. A kind of equilibrium is reached around 15 s: it is at a threshold of 15 s that the corpus of turns shorter than this duration most resembles the corpus of turns longer than it. This also justifies studying turns longer than 20 s to look for particularities or specificities of vocabulary.

From speech turn to passage

As shown independently by (Cailliau and Cavet, 2010) and (Danesi and Clavel, 2010), the roughly 30% word error rate of the automatic transcription directly impacts the extractions. To minimize this impact, we favor the passage over the individual turn, on the assumption that an emotionally marked turn rarely appears in isolation. To do this, we applied smoothing with a sliding window of 5 speech turns. A zoning algorithm then turns the curve into neutral, positive, negative, and very negative zones. The thresholds were defined empirically, and the colored bar is obtained by projecting the zones onto the time scale.
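As an illustration of this smoothing and zoning step, here is a minimal sketch. Only the 5-turn sliding window comes from the paper; the per-turn scores, the zone thresholds and the label set below are hypothetical stand-ins for the empirically tuned values the paper does not publish.

```python
# Minimal sketch of the passage-level smoothing and zoning step.
# The threshold values and the example scores are hypothetical; the paper
# only states that a 5-turn sliding window is used and that the zone
# thresholds were set empirically.

def smooth(scores, window=5):
    """Smooth per-turn scores with a centered sliding window."""
    half = window // 2
    out = []
    for i in range(len(scores)):
        chunk = scores[max(0, i - half):i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def zone(pos, neg, t_pos=2.0, t_neg=2.0, t_very_neg=5.0):
    """Map smoothed positive/negative curves to one zone label per turn."""
    labels = []
    for p, n in zip(smooth(pos), smooth(neg)):
        if n >= t_very_neg:
            labels.append("very negative")
        elif n >= t_neg and n > p:
            labels.append("negative")
        elif p >= t_pos and p > n:
            labels.append("positive")
        else:
            labels.append("neutral")
    return labels

# Toy example: one (positive, negative) score pair per speech turn.
pos = [0, 1, 0, 6, 4, 0, 0]
neg = [0, 0, 5, 7, 6, 2, 0]
print(zone(pos, neg))
```

Projecting these per-turn labels onto the time axis (each turn weighted by its duration) then yields the colored bar described above.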
Figure 5 shows the positive and negative curves obtained on a conversation after smoothing, on the turn scale and on the time scale, with the corresponding colored bar. The negative peak appearing at the beginning of the conversation occupies relatively few speech turns (17 out of 55, i.e., 30.9%), but a large share of the conversation time (about 300 s out of 590 s, i.e., 50.8%).

Turn length and associated color

Keeping the 20 s and 40 s thresholds used in Section 3, Figure 6 shows that short turns appear overwhelmingly in unmarked zones (95%). The number of turns in non-neutral zones increases sharply with turn length, at the expense of unmarked ones. Long turns therefore generally contain more emotional expressions than short turns. These results are not so surprising: our computation method favors longer turns, since we use the sum of the weights of the extracted entities to compute the positive and negative weights of the turns. We also note that a long turn does not guarantee sentiment: 45% of them are neutral, and fewer than 8% are marked as "very negative".

The two previous sections showed that long turns are specific compared to the others: they are more emotionally "loaded" and have a specific vocabulary. In this section, we apply a word-frequency-based method to identify which words are behind this specificity. We first discard very short turns (under 10 s, because they are very numerous and carry very little information) and very long ones (over 80 s, because they are very few and could therefore strongly disturb the results) to build 7 sub-corpora: T10-20, T20-30, T30-40, T40-50, T50-60, T60-70, T70-80, where T10-20 is the corpus of turns whose duration is between 10 s and 20 s. For each sub-corpus, we compute word frequencies, then use linear regression to find the words whose frequency increases the most (a sketch is given after the examples below). An analysis of the words with the strongest increase reveals two spheres: the sphere of argumentation and the sphere of complaint.

Client: « … donc premièrement on a on a on a insisté d'avoir un rendez de rendez-vous téléphonique premièrement… mais quand même bon alors on a attendu 2 semaines pour ce rendez-vous téléphonique le monsieur il m'a appelé… » ["… so first we insisted on having a phone appointment… but still, we waited 2 weeks for this phone appointment, and the gentleman called me…"]
Client: « … ça suffit je vais faire une réclamation concernant le le rendez-vous qui n'a pas écoutez je n'habite pas sur place … » ["… that's enough, I'm going to file a complaint about the appointment that didn't… listen, I don't live on site…"]
Agent: « … en attendant la réponse de notre service national consommateurs… » ["… while waiting for the answer from our national consumer service…"]
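The bucketed frequency-regression method described above is easy to make concrete. The sketch below uses the bucket boundaries stated in the text; the toy data and the use of an ordinary least-squares slope as the "increase in frequency" are illustrative assumptions, since the paper does not publish its exact regression setup.

```python
# Minimal sketch of the frequency-trend analysis: build duration buckets
# T10-20 ... T70-80, compute per-bucket relative word frequencies, and rank
# words by the slope of a least-squares fit of frequency against bucket
# index. Toy data; the OLS slope is an assumption, not the paper's code.
from collections import Counter

BUCKETS = [(10, 20), (20, 30), (30, 40), (40, 50), (50, 60), (60, 70), (70, 80)]

def slope(ys):
    """Ordinary least-squares slope of ys against 0..len(ys)-1."""
    n = len(ys)
    mx, my = (n - 1) / 2, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(ys))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

def rising_words(turns):
    """Rank words by the increase of their frequency across buckets."""
    freqs = []
    for lo, hi in BUCKETS:
        counts = Counter(t for dur, toks in turns if lo <= dur < hi for t in toks)
        total = sum(counts.values()) or 1
        freqs.append({w: c / total for w, c in counts.items()})
    vocab = set().union(*freqs)
    return sorted(vocab,
                  key=lambda w: slope([f.get(w, 0.0) for f in freqs]),
                  reverse=True)

# Toy corpus: (duration in seconds, tokens of the turn).
turns = [(15, ["oui", "bonjour"]), (25, ["probleme"]), (45, ["reclamation"]),
         (55, ["reclamation", "probleme"]), (75, ["reclamation", "effectivement"])]
print(rising_words(turns)[:3])
```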
FIGURE 3 - Distance between the corpora below and above a given duration

Figure 3 shows that the two distance curves (for unigrams and for bigrams) have broadly the same shape. When the separation threshold between the two corpora is very small, the distance between them is high, which seems normal, since the shortest turns contain the "oui", "non", "bonjour", etc. This distance then decreases to reach a minimum located between 10 s and 20 s, and then grows again.

FIGURE 5 - Positive and negative curves of a conversation and the corresponding zones

[Figure 2 axis labels: turn duration in seconds; distribution of turns; logarithmic scale]

4 Turn length and vocabulary: a relationship?

Quite intuitively, very short turns seem to carry little information: in them we find many "oui", "non", "EDF bleu ciel bonjour", etc. By contrast, in longer turns a conversation can settle in: the customer can present the reason for the call, the advisor answers the problem, makes proposals, etc. The question asked here can be summarized as follows: "Do long turns talk about the same things as the others?" To answer it, the method is the following. For a given duration d, we build two corpora: T_inf(d), the corpus made of the turns whose duration is below d, and T_sup(d), the corpus made of the turns whose duration is above d. For the words m, two alternatives are retained: unigrams and extended bigrams. We then compute the distance between these two corpora as described in (Labbé and Labbé, 2003):

$$D(d) = \sum_{m \in T_{\mathrm{inf}}(d) \cup T_{\mathrm{sup}}(d)} \left| f_{m,\,T_{\mathrm{inf}}(d)} - f_{m,\,T_{\mathrm{sup}}(d)} \right|, \qquad f_{m,T} = \frac{c(m,T)}{N(T)}$$

where $f_{m,T(d)}$ is the frequency of the word m in corpus T(d), $c(m,T(d))$ the number of occurrences of m in T(d), and $N(T(d))$ the total number of words in T(d).
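The distance D(d) just defined is straightforward to compute from raw counts. Here is a minimal sketch under the definition above; the input format (a list of (duration, tokens) pairs) and the toy stop list are illustrative assumptions.

```python
# Minimal sketch of the intertextual distance D(d) defined above, following
# (Labbé and Labbé, 2003) as used in this paper. The input format and the
# stop list are illustrative assumptions.
from collections import Counter

STOP = {"le", "la", "les", "de", "un", "une"}  # toy stop list

def frequencies(turns):
    """Relative unigram frequencies over a list of tokenized turns."""
    counts = Counter(t for tokens in turns for t in tokens if t not in STOP)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()} if total else {}

def distance(turns, d):
    """D(d): L1 distance between the corpora below and above threshold d."""
    t_inf = [toks for dur, toks in turns if dur < d]
    t_sup = [toks for dur, toks in turns if dur >= d]
    f_inf, f_sup = frequencies(t_inf), frequencies(t_sup)
    vocab = set(f_inf) | set(f_sup)
    return sum(abs(f_inf.get(w, 0.0) - f_sup.get(w, 0.0)) for w in vocab)

# Toy corpus: (duration in seconds, tokens of the turn).
corpus = [(1.2, ["oui"]), (2.0, ["non", "bonjour"]),
          (25.0, ["probleme", "de", "facture", "facture"]),
          (42.0, ["reclamation", "rendez-vous", "facture"])]
print(distance(corpus, 15.0))
```

Sweeping d over a range of values and plotting D(d) reproduces the kind of curve shown in Figure 3, whose minimum locates the 10-20 s equilibrium discussed above.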
5 Turn length and sentiment: a relationship?

5.1 Finding sentiment in a speech turn

Our sentiment analysis is carried out in three phases: detection and normalization of expressions, computation of a positive and a negative weight for each speech turn, and computation of the positive and negative zones. The first two phases are detailed in (Cailliau and Cavet, 2010) and are briefly recalled here. We first apply a set of hand-written grammars built from an in-depth exploration of the corpus. They detect the words and expressions that carry sentiment in the typical setting of a telephone conversation, for a total of 30 entity types. Each type was assigned an empirically determined weight, as well as a positive, negative or neutral orientation. Each speech turn then receives a positive polarity score and a negative polarity score corresponding to the sum of the weights of the entities spotted in the turn; the weights of the neutral extractions are added to the higher of the two scores. The following speech turn contains several entities; the computation of its positive and negative weights is illustrated in Figure 4:

« Oui oui mais il non mais c'est ça OK tant mieux ailleurs parce que sinon ça serait dur à sortir » ["Yes yes but… no but that's it, OK, so much the better… because otherwise it would be hard to get out"]

Extraction    Class and sub-class                     Positive weight   Negative weight
OK            Acceptance - Refusal: acceptance        2                 -
tant mieux    Appreciation: favorable (emotive)       4                 -
sinon         Agreement - Disagreement: corrective    -                 1
dur           Appreciation: unfavorable (emotive)     -                 4
Total weight of the turn                              6                 5

FIGURE 4 - Weights of the entities spotted in the speech turn

The sphere of argumentation is characterized by a strong presence of adverbs such as "effectivement", "exactement", "normalement", "maintenant", "directement" and "justement". This presence of adverbs is explained by the fact that long turns tend to be devoted to the most complex problems, which give rise to argued exchanges, both on the customer's side to explain the problem and on the advisor's side to justify an answer. Here are a few extracts:

Agent: « Avant, vous étiez dans une période où effectivement le client pouvait choisir… effectivement ses heures creuses… » ["Before, you were in a period when the customer could indeed choose… his off-peak hours…"]
Agent: « L'intervention coûtera xx €, somme que vous n'aurez pas à payer, elle sera prélevée effectivement sur la facture… » ["The intervention will cost xx €, a sum you will not have to pay; it will indeed be deducted from the invoice…"]
Agent: « Une somme de x €, effectivement ce n'est pas négligeable… » ["A sum of x €, indeed that is not negligible…"]
Client: « … je suis propriétaire mais la la locataire a dû elle avait téléphoné justement à edf pour résilier le contrat et moi en fait c'est quelqu'un qui a signé de le jour que j'ai emménagé et justement pour couper edf j'avais dit que justement j'avais fait l'ouverture sur internet… » ["… I am the owner but the tenant had actually called edf to cancel the contract, and in fact someone signed the day I moved in, and precisely to cut off edf I had said that I had actually opened the account on the internet…"]

The second sphere is that of complaint, characterized by the presence of words such as "problème" (problem), "réponse" (answer), or directly "réclamation" (complaint), as in:

Agent: « … d'accord avec tous les points où il y a eu un problème d'accord d'accord donc j'ai j'ai bien noté que vous souhaitiez une réponse par écrit. » ["… all right, with all the points where there was a problem, all right, so I have duly noted that you wished for a written answer."]

These complaints concern both technical problems and relational problems. On the technical side we find words such as "technicien", "technique", "raccordement" (connection), "disjoncteur" (circuit breaker), "énergie", "chauffage" (heating), "hiver" (winter), "index" (meter reading), "chantier" (worksite), "câble", etc. Such extracts, concerning the installation or connection of a new meter, can give rise to rather long passages:

Agent: « … elle est divisée en 2 parties l'installation donc le réseau edf qui va jusqu'à la partie haute du disjoncteur donc tout ce qui est compteur et disjoncteur c'est edf par contre… » ["… it is divided into 2 parts, the installation, so the edf network that goes up to the upper part of the circuit breaker, so everything that is meter and circuit breaker is edf, however…"]

On the relational side we find words such as "courrier" (letter), "fournisseur" (supplier), "mail", "réponse", "client", "rendez-vous", "montant" (amount), etc.

Conclusion

Since call centers are an important link in Customer Relationship Management for EDF, the continuous improvement of their performance is a major challenge. Since this improvement relies on qualitative analyses, only a small percentage of calls can be selected for listening, hence the importance of the selection criteria. Based on a corpus of conversations recorded in an EDF call center and automatically transcribed, we have shown a relationship between the length of speech turns and their content, through the vocabulary that composes them and the feelings they convey. Consequently, the length of speech turns is a useful criterion for selecting conversations. It can complement the other selection strategies, such as keywords (offer names, competitor names, etc.), topics, sentiments, etc.

References

ADDA, G., CAILLIAU, F., DAQUO, A.-L., GARNIER-RIZET, M., GUILLEMIN-LANNE, S., SUIGNARD, P. and WAAST-RICHARD, C. (2011). La transcription automatique et la fouille de données conversationnelles pour l'analyse de la relation client. In M. Campedel and P. Hoogstoël (eds.), Sémantique et multimodalité en analyse de l'information. Hermes Lavoisier, Paris.
BOZZI, L., SUIGNARD, P. and WAAST-RICHARD, C. (2009). Segmentation et classification non supervisée de conversations téléphoniques automatiquement retranscrites. In Actes de TALN, Senlis.
CAILLIAU, F. and CAVET, A. (2010). Analyse des sentiments et transcription automatique : modélisation du déroulement de conversations téléphoniques. TAL, 51-3, ATALA.
DANESI, C. and CLAVEL, C. (2010). Impact of spontaneous speech features on business concept detection: a study of call-centre data. In Proc. of SSCS '10, ACM, New York, NY, USA.
DEVILLERS, L., VAUDABLE, C. and CHASTAGNOL, C. (2010). Real-life emotion-related states detection in call centers: a cross-corpora study. In INTERSPEECH-2010, pages 2350-2353.
GARNIER-RIZET, M., ADDA, G., CAILLIAU, F., GUILLEMIN-LANNE, S. and WAAST-RICHARD, C. (2008). CallSurf - Automatic transcription, indexing and structuration of call center conversational speech for knowledge extraction and query by content. In Actes de LREC 2008, Marrakech.
LABBE, C. and LABBE, D. (2003). La distance intertextuelle. Corpus, décembre 2003. http://corpus.revues.org/index31.html [consulté le 20/12/2011].
LABBE, C. and LABBE, D. (2010). Ce que disent leurs phrases. In Proceedings of the 10th International Conference on Statistical Analysis of Textual Data, Rome, Italie.
LAFOREST, M. (2011). Trois petits tours et puis s'en vont ou qu'est-ce qu'un tour de parole ? Langues et linguistique, numéro spécial Journées de linguistique, pages 34-42.
SACKS, H., SCHEGLOFF, E.A. and JEFFERSON, G. (1974). A simplest systematics for the organization of turn-taking for conversation. Language, 50(4), pages 696-735.
VINCENT, D. (2002). Les enjeux de l'analyse conversationnelle et les enjeux de la conversation. Revue québécoise de linguistique, 30-1, pages 177-198.
3,470,796
Unsupervised learning of agglutinated morphology using nested Pitman-Yor process based morpheme induction algorithm
In this paper we describe a method to morphologically segment highly agglutinating and inflectional languages from the Dravidian family. We use a nested Pitman-Yor process to segment long agglutinated words into their basic components, and a corpus-based morpheme induction algorithm to perform morpheme segmentation. We test our method on two languages, Malayalam and Kannada, and compare the results with Morfessor Categories-MAP.
[ 14863802, 8188244, 1541597 ]
Unsupervised learning of agglutinated morphology using nested Pitman-Yor process based morpheme induction algorithm. Arun Kumar, Universitat Oberta Catalonia, UPC, arunsocs@gmail.com. In Proceedings of the Student Research Workshop associated with RANLP 2015, Hissar, Bulgaria, September 2015.
Once the algorithm achieves this segmentation on the corpus created from Wikipedia, we use a heuristic search based algorithm to achieve the final morphological segmentation. We test our algorithm pipeline on two highly agglutinated and inflected languages from the Dravidian family, Malayalam and Kannada. As gold standard segmentations are not available for evaluation, we created a gold standard segmentation file for both languages and evaluate the results on it. We manually analyze the errors in the morphological segmentation, both to understand the errors produced by the system and to improve its performance in further studies. In Section 2 we describe previous work on Bayesian non-parametric models and on the morphological processing of agglutinating languages. Section 3 describes Pitman-Yor models, and Section 4 describes the algorithm used for morphological segmentation. Sections 5 and 6 present the results and an error analysis, and finally Section 7 presents the conclusions and future work of our research.

2 Related Work

In this section we describe related work on Bayesian non-parametric models for learning the morphology of languages. Idicula & David (2007) present a morphological analyzer for Malayalam based on finite state transducers and inflectional rules. Research on unsupervised learning of morphology is also relevant; Hammarström and Borin (2011) provide a detailed survey of the topic. Morfessor (Creutz and Lagus, 2002; Creutz et al., 2006; Creutz et al., 2007), based on the Minimum Description Length principle, is the reference model for highly inflecting languages such as Finnish. Goldwater et al. (2009) introduce a word segmentation model based on a Dirichlet process mixture to model words and their contextual dependencies, and test their method on phonetic transcripts of child speech. Following this line of research, Naradowsky & Goldwater (2009) incorporated English spelling rules into the morphological model to achieve better results for English phonetic script segmentation. Following these studies, Teh (2006) introduced a Bayesian language model based on the Pitman-Yor process and a new sampling procedure for the model. Lee et al. (2011) modeled syntactic context to achieve better morphological segmentation. Dreyer & Eisner (2011) identified morphological paradigms using Dirichlet process mixture models and seed paradigms. Can and Manandhar (2012) clustered morphological paradigms using hierarchical Dirichlet process models, and Sirts & Goldwater (2013) used adaptor grammars to achieve morphological segmentation. The nested Pitman-Yor process is an extension of the above Dirichlet process, used for word segmentation of languages such as Japanese (Mochihashi et al., 2009) and for building language models for speech recognition (Mousa et al., 2013). These works are also relevant to Bayesian non-parametric models for learning morphology. In the case of the Dravidian languages, unsupervised techniques have rarely been applied; for the larger languages of the family (Telugu, Tamil, Kannada and Malayalam) there are studies using supervised techniques. For Malayalam, Vasudevan & Bhattacharya (N and Bhattacharyya, 2013) propose a stemmer for Indian languages, such as Hindi, Marathi and Malayalam, based on suffix lists.

3 Pitman-Yor Process language model

The Pitman-Yor process (Pitman, 2002) is a stochastic process that generalizes the Dirichlet process. Goldwater et al. (2009) and Teh (2006) use it for language modeling. It is represented as

$$G \sim \mathrm{PY}(G_0, d, \theta)$$

The stochastic process generates a discrete probability distribution $G$ similar to another given distribution $G_0$. $G_0$ is called the base measure, $d$ is a discount factor and $\theta$ is a variable that controls the similarity between the two distributions $G_0$ and $G$. A unigram language model can be expressed as a Pitman-Yor process as

$$G_1 = p(w) \quad \forall w \in L$$

where $w$ ranges over all words in the lexicon $L$. In the case of a bigram distribution, we have

$$G_2 = p(w \mid v) \quad \forall v, w \in L$$

For frequent words $G_1$ will be similar to $G_2$, so we can compute $G_2$ using $G_1$ as a base measure:

$$G_2 \sim \mathrm{PY}(G_1, d, \theta)$$

Trigram models can be computed similarly. As this model has no analytic form, it is represented in the form of a Chinese Restaurant Process (CRP) (Aldous, 1985). A Chinese Restaurant Process is an infinitely large restaurant with infinitely many tables, each with capacity for many customers. At first the restaurant is empty; the first customer enters and sits at an empty table. Each subsequent customer either sits at a new table, with probability governed by a concentration parameter, or at an already occupied table with probability proportional to the number of customers sitting there.

The n-gram probability is computed in this CRP representation: words are customers seated at various tables, and the tables correspond to the contexts of the words, the context of a word being the length of the suffix shared with its earlier occurrences. In this representation, each n-gram context $h$ has tables $1 \cdots t_{hw}$, over which the customers, i.e. the n-gram counts $c(w \mid h)$, are seated. The seating is constructed by choosing a table $k$ for each count with probability proportional to

$$p(k) \propto \begin{cases} c_{hwk} - d, & k = 1, \dots, t_{hw} \\ \theta + d \cdot t_h, & k = \text{new} \end{cases}$$

where $c_{hwk}$ is the number of customers seated at table $k$ and $t_h$ is the total number of tables in $h$. When $k = \text{new}$, $t_h$ is incremented.
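To make the seating rule concrete, the following minimal sketch samples one seating decision under it. The table counts and the hyperparameter values (d = 0.5, θ = 1.0) are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch of one seating decision in a Pitman-Yor CRP for a
# context h and word w. `tables` holds the customer count of each table
# already serving w in context h; t_h is the total number of tables in the
# restaurant for h. Hyperparameter values are arbitrary examples.
import random

def seat_customer(tables, t_h, d=0.5, theta=1.0):
    """Return the index of the chosen table, or len(tables) for a new one."""
    weights = [c - d for c in tables]   # existing tables: c_hwk - d
    weights.append(theta + d * t_h)     # new table: theta + d * t_h
    total = sum(weights)
    r = random.uniform(0.0, total)
    for k, wgt in enumerate(weights):
        r -= wgt
        if r <= 0.0:
            return k
    return len(weights) - 1

tables = [3, 1]   # two tables already serving w in context h
print(seat_customer(tables, t_h=5))
```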
Summing over these seating choices, the n-gram probability can be computed as

$$p(w \mid h) = \frac{c(w \mid h) - d \cdot t_{hw}}{\theta + c(h)} + \frac{\theta + d \cdot t_h}{\theta + c(h)}\, p(w \mid h')$$

where $h'$ is the shortened back-off context and $\theta$ and $d$ are the hyperparameters to be learned from the data. These parameters are inferred from the data (the unsegmented corpus), assuming that the posterior probabilities of the variables follow Beta or Gamma distributions. Inference on the model is done by adding and removing customers from the tables $t_{hw}$, in the course of which $d$ and $\theta$ are optimized using MCMC. For more details, refer to (Teh, 2006).

3.1 Nested Pitman-Yor process

The nested Pitman-Yor process is a hierarchical process in which the base measure $G_0$ is replaced with another Pitman-Yor process. In our model, the base measure $G_0$ is replaced by a Pitman-Yor process over syllable n-grams. The base measure then becomes

$$G_0(w) = p(s_1 \cdots s_k) = \prod_{i=1}^{k} p(s_i \mid s_{i-n+1} \cdots s_{i-1})$$

The above process can be considered a hierarchical model with two levels, a word model and a syllable model; we take the syllable model to be a unigram language model. For inference, it is represented in the form of a nested CRP in which the word model is connected to the syllable model. In this set-up, a word $w$ is generated from a base measure, and that base measure is a Pitman-Yor process over syllables. For inference on this model we use a parallel blocked Gibbs sampler, treating syllables as the basic units that are joined to form words and sentences. More details of the sampling procedure can be found in (Neubig, 2014).

4 Morpheme identification and verification algorithm

After inference on the defined model, we apply a morpheme identification and verification algorithm to the acquired root words and morphemes. Our method is similar to that of Dasgupta & Ng (2007) and proceeds in three steps (a simplified sketch is given below).

The first step of the algorithm identifies a list of possible affixes for morpheme induction. The list of possible affixes is extracted from the segmented corpus in the following way. We assume that a word αβ is the concatenation of α and β. If we find both α and αβ in the counter (we keep a counter of the words of the segmented corpus with their frequencies), we add β to the list of suffixes. Similarly, if we find both the character sequences αβ and β in the counter, we add α to the list of prefixes. The problem with this technique is that it can create a large number of invalid suffixes and prefixes. To reduce this problem, we rank the affixes by their frequency across different character sequences, and only the top-ranked affixes are selected for induction purposes.

The second step identifies composite suffixes. As the Dravidian language family is highly inflectional, a large number of composite affixes are present in the vocabulary. For example, in Malayalam, (ആ ക െട, āḷukaḷuṭe, belongs to men) has a composite suffix (ക െട, kaluṭe) formed from the suffixes (ക , kal) and (ഉെട, uṭe). We remove these composite suffixes from the list of suffixes, as they would otherwise lead to under-segmentation.

The third step of our morpheme identification algorithm identifies possible roots. We take a word w from the counter and compose it with the affixes in the counter table. If x + w (where x is an induced prefix) or w + y (where y is an induced suffix) is present in the corpus, we consider w a root and add it to the root list. This procedure is continued until we obtain the root, prefix and suffix lists.
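A compact sketch of this counter-based affix and root induction follows. The toy English corpus, the top-N cutoff and the composite-suffix check are simplifying assumptions rather than the authors' exact implementation.

```python
# Simplified sketch of the counter-based affix/root induction described
# above. The toy corpus, the top-N cutoff and the composite-suffix check
# are illustrative assumptions; the paper ranks affixes by frequency and
# keeps only the highest-ranked ones.
from collections import Counter

words = Counter(["talk", "talks", "talked", "walk", "walks", "walked"])

# Step 1: propose suffixes (and prefixes) by splitting attested words.
suffixes, prefixes = Counter(), Counter()
for word in words:
    for i in range(1, len(word)):
        alpha, beta = word[:i], word[i:]
        if alpha in words:          # alpha and alpha+beta both attested
            suffixes[beta] += 1
        if beta in words:           # beta and alpha+beta both attested
            prefixes[alpha] += 1

top_suffixes = {s for s, _ in suffixes.most_common(5)}

# Step 2: drop composite suffixes (a suffix that decomposes into two
# induced suffixes would cause under-segmentation).
top_suffixes = {s for s in top_suffixes
                if not any(s != t and s.endswith(t) and s[:-len(t)] in top_suffixes
                           for t in top_suffixes)}

# Step 3: a word is a root if it combines with an induced affix to form
# another attested word.
roots = {w for w in words
         if any(w + s in words for s in top_suffixes)
         or any(p + w in words for p in prefixes)}
print(sorted(top_suffixes), sorted(roots))
```

On this toy corpus the sketch induces the suffixes "s" and "ed" and the roots "talk" and "walk"; the segmented Wikipedia corpus is then re-segmented with the induced root, prefix and suffix lists, as described next.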
Using the proposed lists of roots, prefixes and suffixes, the overall corpus is segmented into morphemes.

5 Data and Experiments

To validate our model and algorithm, we tested our pipeline on Malayalam and Kannada corpora. As Malayalam and Kannada are under-resourced languages, we used corpora crawled from Wikipedia containing 10 million words for both languages, which were manually processed. As a first step of our experiments, we converted the Unicode-encoded files to the corresponding ISO romanized form for internal processing. We create a word list of 10 million words and add a space between characters; for example, the Kannada word ( ಾ , Vidyārthi, student) is represented as V i d y ā r t h i and is thus converted into its constituent syllables. The second step of the experiment consists in applying our nested Pitman-Yor model and inference algorithm to the data: the data is fed to the sampling algorithm for 100 iterations. Depending on the number of tokens, the time taken for convergence varies; our algorithm took 3 hours to converge on a machine with a 4-core processor with four threads in execution. The next step is to apply our morpheme identification and verification algorithm to induce morphemes. Once the process is completed, the system produces the morphological segmentation of the input words.

For evaluation, we manually segmented 10,000 words of Malayalam and Kannada. The segmentation in the gold standard is as follows: (മ ഷ െറ, manuṣyanṟe, of human) is segmented as (മ ഷ , manuṣyan) + (ഇ െറ, inṟe, genitive case marker). We measured precision (P), recall (R) and F-measure (F) of the predicted morpheme boundaries, using the evaluation programs provided by the Morpho Challenge team (Virpioja et al., 2011). To obtain a comparison, we trained Morfessor Categories-MAP 0.9.2 (footnote 1) on the same 10 million words for 10 epochs and created a model; using this model we segmented the gold standard file and applied the evaluation algorithm. The results of the experiments are shown in Table 1.

6 Error Analysis

We analyzed the results of the experiments to gain insight into the errors that need to be addressed in future research. We list the errors produced by our algorithms and by Morfessor-MAP; recall that our algorithm has two major steps, one identifying accurate word boundaries and the other finding accurate morpheme boundaries.

• Both systems must handle character combinations that need to be treated as single characters: Morfessor-MAP segmented digraphs and ligatures, whereas our system, which uses an internal notation, did not split them.
• In the case of loaned root words, both systems fail to identify the morphemes.
• Our system is able to identify morpheme boundaries where a morpho-phonemic change occurs. Morfessor-MAP fails to identify morpheme boundaries when there is a morpho-phonemic change, and considers the Unicode zero-width joiner a morpheme boundary.
• Our algorithm is able to identify the orthographic changes that happen at morpheme boundaries during sandhi changes, but Morfessor-MAP fails. For example, for the Malayalam word (മര ൾ, maraṅṅaḷ, trees) our system produces the segments (മരം, maram) and ( ., ṅṅaḷ).

7 Conclusions and future research

We presented a method to segment words into morphemes using a nested Pitman-Yor process for highly agglutinating and under-resourced languages such as Malayalam and Kannada. Our morphology learning system segmented complex morpheme sequences and produced results that outperform state-of-the-art systems.

(Footnote 1: http://www.cis.hut.fi/projects/morpho/morfessorcatmapdownloadfo)
In future research we will focus on the morphological processing of other languages of the Dravidian family, and on richer models.

Table 1: Results compared to Morfessor-MAP

Method          Kannada              Malayalam
                P     R     F        P     R     F
Morfessor-MAP   48.1  60.4  53.5     47.3  60.0  52.9
NPY             66.8  58.0  62.1     60.3  59.6  59.9

References

David J. Aldous. 1985. Exchangeability and related topics. In École d'Été de Probabilités de Saint-Flour XIII-1983, pages 1-198. Springer Berlin Heidelberg.
Burcu Can and Suresh Manandhar. 2012. Probabilistic hierarchical clustering of morphological paradigms. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 654-663. Association for Computational Linguistics.
Mathias Creutz and Krista Lagus. 2002. Unsupervised discovery of morphemes. In Proceedings of the ACL-02 Workshop on Morphological and Phonological Learning - Volume 6, pages 21-30. Association for Computational Linguistics.
Mathias Creutz et al. 2006. Induction of the morphology of natural language: Unsupervised morpheme segmentation with application to automatic speech recognition. Helsinki University of Technology.
Mathias Creutz, Teemu Hirsimäki, Mikko Kurimo, Antti Puurula, Janne Pylkkönen, Vesa Siivola, Matti Varjokallio, Ebru Arisoy, Murat Saraçlar, and Andreas Stolcke. 2007.
Morph-based speech recognition and modeling of out-of-vocabulary words across languages. ACM Transactions on Speech and Language Processing (TSLP), 5(1):3.
Sajib Dasgupta and Vincent Ng. 2007. High-performance, language-independent morphological segmentation. In HLT-NAACL, pages 155-163. Citeseer.
Markus Dreyer and Jason Eisner. 2011. Discovering morphological paradigms from plain text using a dirichlet process mixture model. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 616-627. Association for Computational Linguistics.
Sharon Goldwater, Thomas L. Griffiths, and Mark Johnson. 2009. A bayesian framework for word segmentation: Exploring the effects of context. Cognition, 112(1):21-54.
Harald Hammarström and Lars Borin. 2011. Unsupervised learning of morphology. Computational Linguistics, 37(2):309-350.
Sumam Mary Idicula and Peter S. David. 2007. A morphological processor for malayalam language. South Asia Research, 27(2):173-186.
Yoong Keok Lee, Aria Haghighi, and Regina Barzilay. 2011. Modeling syntactic context improves morphological segmentation. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning, pages 1-9. Association for Computational Linguistics.
Daichi Mochihashi, Takeshi Yamada, and Naonori Ueda. 2009. Bayesian unsupervised word segmentation with nested pitman-yor language modeling. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1, pages 100-108. Association for Computational Linguistics.
Amr El-Desoky Mousa, M. Ali Basha Shaik, Ralf Schlüter, and Hermann Ney. 2013. Morpheme level hierarchical pitman-yor class-based language models for lvcsr of morphologically rich languages. In INTERSPEECH, pages 3409-3413. Citeseer.
Vasudevan N and Pushpak Bhattacharyya. 2013. Little by little: Semi supervised stemming through stem set minimization. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 774-780, Nagoya, Japan, October. Asian Federation of Natural Language Processing.
Jason Naradowsky and Sharon Goldwater. 2009. Improving morphology induction by learning spelling rules. In IJCAI, pages 1531-1536.
Graham Neubig. 2014. Simple, correct parallelization for blocked gibbs sampling. Technical Report, November.
Jim Pitman. 2002. Combinatorial stochastic processes. Technical Report 621, Dept. Statistics, UC Berkeley. Lecture notes for St. Flour course.
Kairit Sirts and Sharon Goldwater. 2013. Minimally-supervised morphological segmentation using adaptor grammars. TACL, 1:255-266.
Kairit Sirts and Sharon Goldwater. 2013. Minimally-supervised morphological segmentation using adaptor grammars. Transactions of the Association for Computational Linguistics, 1:255-266.
Yee Whye Teh. 2006. A hierarchical Bayesian language model based on Pitman-Yor processes. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics (ACL-44), pages 985-992, Stroudsburg, PA, USA.
Vasudevan N and Pushpak Bhattacharyya. 2013. Little by little: Semi-supervised stemming through stem set minimization. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 774-780, Nagoya, Japan.
Sami Virpioja, Ville T. Turunen, Sebastian Spiegler, Oskar Kohonen, and Mikko Kurimo. 2011. Empirical comparison of evaluation methods for unsupervised learning of morphology. Traitement Automatique des Langues, 52(2):45-90.
Obituary: Martin Kay

Ronald M. Kaplan and Hans Uszkoreit
German Research Center for Artificial Intelligence (DFKI)

It is with great sadness that we report the passing of Martin Kay in August 2021. Martin was a pioneer and intellectual trailblazer in computational linguistics. He was also a close friend and colleague of many years.

Martin was a polyglot undergraduate student of modern and medieval languages at Cambridge University, with a particular interest in translation. He was not (yet) a mathematician or engineer, but idle speculation in 1958 about the possibilities of automating the translation process led him to Margaret Masterman at the Cambridge Language Research Unit, and a shift to a long and productive career.
In 1960 he was offered an internship with Dave Hays and the Linguistics Project at The RAND Corporation in California, another early center of research in our emerging discipline. He stayed at RAND for more than a decade, working on basic technologies that are needed for machine processing of natural language. Among his contributions during that period was the development of the first so-called chart parser (Kay 1967), a computationally effective mechanism for dealing systematically with linguistic dependencies that cannot be expressed in context-free grammars. The chart architecture could be deployed for language generation as well as parsing, an important property for Martin's continuing interest in translation.

It was during the years at RAND that Martin found his second calling, as a teacher of computational linguistics, initially at UCLA and then in many other settings. He was a gifted and entertaining speaker and lecturer, able to present complex material with clarity and precision. He took great pleasure in the interactions with his students and the role that he played in helping to advance their careers. He left RAND in 1972 to become a full-time professor and chair of the Computer Science Department at the University of California at Irvine.

His time at Irvine was short-lived, as he was attracted back to an open-ended research environment. In 1974 he joined with Danny Bobrow, Ron Kaplan, and Terry Winograd to form the Language Understander project at the recently created Palo Alto Research Center (PARC) of the Xerox Corporation. The group took as a first goal the construction of a mixed-initiative dialog system using state-of-the-art components for knowledge representation and reasoning, language understanding, language production, and dialog management (Bobrow et al. 1977). Martin took responsibility for the language production module, which was initially based on the quite rudimentary technology of the time. That was the beginning of his focus on "reversible grammars," grammatical rules and representations that could be applied to parse strings into their underlying syntactic representations but also convert underlying representations back to the strings that express them. He and his colleagues at PARC developed the idea of hierarchical attribute-value structures (feature/functional structures) as underlying representations that could be characterized by the primitive predicates of equality and unification. This insight took form in his Functional Unification Grammar (Kay 1979) and in Lexical Functional Grammar (Kaplan and Bresnan 1982), and it also surfaced in the design of Head-Driven Phrase Structure Grammar (Pollard and Sag 1987). Reversibility, for translation as well as dialog, was also the motivation at PARC for developing the mathematical, linguistic, and computational concepts that led to the use of bi-directional finite-state transducers for phonological and morphological description (Kaplan and Kay 1994). This technology is still being applied to a wide variety of language processing problems. But for Martin translation was always a central theme, bracketed by his early article "The Proper Place of Men and Machines in Language Translation" (which circulated in the research community for quite some time before it was finally published [Kay 1997]) and his most recent book (Kay 2017).
In 1985 Martin struck a new balance between his commitment to research and his love of teaching by officially dividing his time between his prestigious role as a Research Fellow at PARC and a professorship in the Linguistics Department at Stanford. In addition to his Stanford professorship, he also taught (1998-2014) as an Honorary Professor at Saarland University, offering one or two courses every year. During his stays in Germany, he also advised on ongoing research, and his lectures and discussions helped in the gradual integration of programs in linguistics, computational linguistics, and translation studies.

Martin contributed in many other ways to international progress in computational linguistics. In the 1970s and later he was a mainstay lecturer in the International Summer Schools in Computational Linguistics in Italy, and the Nordic summer schools in Scandinavia (in fact, he and his wife Iris hosted one Nordic summer school at their home in Menlo Park). He advised research organizations and projects in several countries. He was a specialist advisor to the German Ministry of Education and Research, a reviewer for the two largest European projects in automatic translation, Eurotra and Verbmobil, and a valued advisor for projects at the German Research Center for Artificial Intelligence (DFKI). He also served for many years as chairman of the International Committee for Computational Linguistics (ICCL).

Martin received many honors during his lifetime. He was a past President of the Association for Computational Linguistics (ACL). In 2005 he received the ACL Lifetime Achievement Award (Kay 2006). He was awarded honorary doctorate degrees from the University of Gothenburg (1982) and the University of Geneva (2008). He was the recipient of the Okawa Prize in 2019.

Martin's quiet and modest style of personal interaction stood only in apparent contrast to his widely recognized fame as an intellectual leader. His impressive expertise in several disciplines and his diverse intellectual interests made him a wonderful conversation partner for colleagues and friends who were lucky enough to be able to spend time with him. All students and colleagues remember him as a gifted speaker who was able to captivate and convince his audience with excellent didactics, rhetorical sharpness, and his very own sense of humor.

We also remember Martin's wife, Iris Kay, who predeceased him by a few months. Iris was a warm and psychologically insightful figure who played a prominent role in the early social history of computational linguistics, when personal relationships were more immediate and so important. They will both be sorely missed.

References

Daniel G. Bobrow, Ronald M. Kaplan, Martin Kay, Donald A. Norman, Henry Thompson, and Terry Winograd. 1977. GUS, a frame-driven dialog system. Artificial Intelligence, 8:155-173. Reprinted in Grosz et al. (1986), pages 595-604. https://doi.org/10.1016/0004-3702(77)90018-2
Mary Dalrymple, Ronald M. Kaplan, John T. Maxwell, III, and Annie Zaenen, editors. 1995. Formal Issues in Lexical-Functional Grammar. CSLI Publications, Stanford, CA.
Dan Flickinger and Stephan Oepen, editors. 2010. Collected Papers of Martin Kay. CSLI Publications, Stanford University.
Barbara Grosz, Karen Spärck Jones, and Bonnie Webber, editors. 1986. Readings in Natural Language Processing. Morgan Kaufman, Los Altos, CA.
Ronald M. Kaplan and Joan Bresnan. 1982. Lexical-Functional Grammar: A formal system for grammatical representation. In Joan Bresnan, editor, The Mental Representation of Grammatical Relations, pages 173-281. MIT Press, Cambridge, MA. Reprinted in Dalrymple et al. (1995).
Ronald M. Kaplan and Martin Kay. 1994. Regular models of phonological rule systems. Computational Linguistics, 20(3):331-378.
Martin Kay. 1967. Experiments with a powerful parser. In Proceedings of the Second International Conference sur le Traitement Automatique des Langues, pages 1-20. Reprinted in Flickinger and Oepen (2010), pages 98-111.
Martin Kay. 1979. Functional grammar. In Proceedings of the Fifth Annual Meeting of the Berkeley Linguistics Society, pages 142-158, Berkeley Linguistic Society, Berkeley, CA. Reprinted in Flickinger and Oepen (2010), pages 247-264. https://doi.org/10.3765/bls.v5i0.3262
Martin Kay. 1997. The proper place of men and machines in language translation. Machine Translation, 12(1):3-23. https://doi.org/10.1023/A:1007911416676
Martin Kay. 2006. A life of language. Computational Linguistics, 31(4):425-438. https://doi.org/10.1162/089120105775299159
Martin Kay. 2017. Translation. CSLI Publications, Stanford, CA.
Carl Pollard and Ivan A. Sag. 1987. Information-based Syntax and Semantics: Vol. 1: Fundamentals. Center for the Study of Language and Information, Stanford University.
Proceedings of the Third International Workshop on Spatial Language Understanding (SpLU 2020), November 19, 2020. Association for Computational Linguistics.

BERT-based Spatial Information Extraction

Hyeong Jin Shin*, Jeong Yeon Park**, Dae Bum Yuk**, Jae Sung Lee**
Computer Science Major, School of Electrical Engineering and Computer Science, Graduate School of Chungbuk National University, Korea
hjshin1985@gmail.com*, {parkjeongyeon, daebum1994, jasonlee}@cbnu.ac.kr**

Spatial information extraction is essential to understand geographical information in text. This task is largely divided into two subtasks: spatial element extraction and spatial relation extraction. In this paper, we utilize BERT (Devlin et al., 2018), which is very effective for many natural language processing applications. We propose a BERT-based spatial information extraction model, which uses BERT for spatial element extraction and R-BERT (Wu and He, 2019) for spatial relation extraction. The model was evaluated with the SemEval 2015 dataset. The result showed a 15.4% point increase in spatial element extraction and an 8.2% point increase in spatial relation extraction in comparison to the baseline model (Nichols and Botros, 2015).

Introduction

Extracting spatial relations from text is a type of relation extraction, focusing on the static and dynamic spatial relations in the text. It is essential for natural language understanding systems, such as robot navigation systems and question-answering systems, to understand geographical relations or to track moving objects. For example, in the sentence, "Tom is on the box," we find a static relation in which Tom is the trajector, box is the landmark, and on denotes their static spatial relation. In the following sentence, "He steps down from the box to the ground," we also find a dynamic spatial relation, in which He (Tom) is the mover, steps down is the trigger, box is the source, and ground is the destination. Using simple inference based on the extracted relations, we can infer a new relation: "Tom is on the ground now."

The task is largely divided into two subtasks [1]: spatial element extraction and spatial relation extraction. Finding candidate elements for spatial relation roles, such as the trajector, landmark, and trigger, defines spatial element extraction. Finding or verifying relations among the role candidates defines spatial relation extraction.

Many natural language processing techniques and machine learning methods have been applied to spatial information extraction. For example, a conditional random field (CRF) model (Lafferty et al., 2001) is used for spatial element extraction, and support vector machine (SVM) (Suykens and Vandewalle, 1999; Roberts and Haragagiu, 2012) and convolutional neural net (CNN) (Mazalov et al., 2015) models are used for spatial relation extraction. Various language resources, such as GloVe (Pennington et al., 2014), WordNet (Salaberri et al., 2015), and PropBank (Salaberri et al., 2015), are also used for spatial information extraction. In this paper, we propose a BERT-based spatial information extraction model that utilizes BERT (Devlin et al., 2018) extensively for both spatial element extraction and spatial relation extraction.

[1] The subtasks are defined in more detail for evaluation in SemEval-2015 Task 8 (Pustejovsky et al., 2015). Spatial element extraction and spatial relation extraction correspond to the 1.b and 1.d tasks, respectively, in that definition. However, in this paper, the spatial element extraction task is extended to extract not only spatial elements such as paths, places, motions, and spatial entities, but also spatial signals and motion signals.
Recently, many context-aware language models have been developed, including not only BERT, but also ELMo (Peters et al., 2018), XLNet (Yang et al., 2019), and GPT (Radford et al., 2018), among others. We chose BERT simply because many downstream applications of the BERT system have been developed for named entity recognition and semantic role labeling, which can be easily applied to spatial information extraction.

In Section 2, we briefly summarize related works. In Section 3, we describe our proposed model, which consists of three modules: a spatial element extraction model, a triple candidate generator, and a spatial relation extraction model. Section 4 presents the experimental results of our model. Finally, Section 5 concludes the paper.

Related Works

An early method of spatial information extraction was introduced as spatial role labeling (SpRL) by Kordjamshidi et al. (2011). SemEval-2012 introduced a spatial role labeling task mainly focusing on static spatial relations. SemEval-2013 expanded static spatial relations to capture fine-grained semantics and to include dynamic spatial relations. SemEval-2015 was the first shared task conference to evaluate implementation systems for the SpaceEval annotation scheme, which is the current spatial information annotation scheme (Pustejovsky et al., 2015).

Many spatial information extraction systems have been developed based on the SpaceEval annotation scheme. Nichols and Botros (2015) proposed the SpRL-CWW model, which uses a CRF model (Lafferty et al., 2001) for spatial element extraction and an SVM model (Suykens and Vandewalle, 1999) for spatial relation extraction. It uses many input features for element extraction, such as word embeddings from GloVe (Pennington et al., 2014), named entities, part-of-speech tags, and dependency parse labels. The SVM is used to filter out correct triples from all possible combinations of triples. D'Souza and Ng (2015) proposed the UTD-SpRL model based on SVM, which includes more than 100 different features generated by a greedy feature selection technique and uses joint detection of a relation's arguments. The X-Space model proposed by Salaberri et al. (2015) uses node information, such as place, position, and location, included in WordNet for spatial element extraction; it also uses argument information in PropBank for spatial relation classification. A multimodal approach that uses image and text information simultaneously in a multimodal spatial role labeling (mSpRL) shared task was also presented at CLEF 2017 (Kordjamshidi et al., 2017), but the result was not satisfactory (Zablocki et al., 2017). Mazalov et al. (2015) extracted spatial roles and their relations by adapting a convolutional neural network based system developed for semantic role labeling. The pre-existing system was successfully adapted to spatial information extraction.
Dan et al. (2020) proposed spatial BERT to predict the spatial relation between two entities given an image involving them. Spatial BERT is composed of a spatial model, implemented with a feed-forward network, and a language model, implemented with BERT. The language model provides complementary features to predict unseen (untrained) relations in images. Although BERT is used as the language model in this approach, spatial relation extraction is limited to relation detection for the given subject and object entities in the image. Our approach also uses BERT, but deals instead with the entire process of relation extraction from raw text: we extract spatial elements from raw text, determine their corresponding spatial roles, and find spatial relations from the spatial roles.

Spatial Information Extraction Model

We divide the spatial information extraction task into two subtasks, spatial element extraction and spatial relation extraction, according to the ISOspace annotation scheme (ISO, 2014; Pustejovsky et al., 2015). For the integrated system, we pipeline the two subtasks via a triple candidate generator. Figure 1 shows the overall architecture of our system. A sentence is input to the element extractor, which is trained jointly with one of the link role modules. The element extractor outputs the spatial elements and spatial roles jointly. The spatial roles are combined into triples as spatial relation candidates by the triple candidate generator, and the triple candidates are classified as either valid or invalid relations by the relation extractor. Each module is described in the following sections.

For the general architecture of spatial relation extraction, two restrictions are imposed in this work. First, only three arguments are allowed. According to ISOspace, each relation has a fixed set of arguments, as shown in Table 1. To maintain the static architecture of relation extraction, we set the number of arguments of each relation to three. Therefore, we keep all arguments for QSLink (Qualitative Spatial Link), OLink (Orientation information Link), and MeLink (Measurement Link), but for MvLink (Movement Link) we choose three arguments out of seven: mover, goal, and motion.

Table 1: Spatial roles of each link.
Spatial relation   Roles
QSLink             trajector, landmark, trigger
OLink              trajector, landmark, trigger
MvLink             mover, goal, motion, midpoint, landmark, source, path
MeLink             trajector, landmark, measure

Second, only one prime spatial role is determined in the element extraction stage. When a sentence contains multiple relations, an entity may participate in multiple relations in multiple roles, and it is typically necessary to choose only one role. For example, in the sentence in Figure 2, 'vase' has two roles, one for each relation: trajector and landmark. Because sharing these two particular roles occurs most frequently, we decided to encode them as a single role label, traLand. The triple candidate generator interprets this label as two separate roles, trajector and landmark, for triple candidate generation.

Spatial Element Extraction

Spatial element extraction is a sequence labeling problem, which can be easily solved with BERT. The structure of the model is shown in Figure 3. A sentence is segmented into word pieces, which are input to BERT to extract the spatial elements and spatial roles jointly. In previous methods, many features are extracted through preprocessing for learning by CRFs (Nichols and Botros, 2015). The BERT-based spatial element extraction module, by contrast, does not require any preprocessing for feature extraction; it requires only raw text as input for fine-tuning.
A multi-layer perceptron (MLP), used as the classifier on top of BERT, performs a fully connected layer computation and produces IOB-based tags for annotation. Because BERT is based on word pieces (Wu et al., 2016), the outputs are also word pieces. For sequence labeling, we label only the first word piece. For example, if 'flower' is divided into two word pieces, 'flow' and '##er', only 'flow' is annotated with a normal tag, such as a Spatial Entity tag, whereas '##er' is annotated with an Other tag.

We use a joint model for spatial element extraction and spatial role extraction. Two classifiers are located on top of the BERT system and share the same parameters for BERT fine-tuning. We observed an improvement of the joint model over the single model during a preliminary test on Korean data (Kim and Lee, 2016).

Triple Candidate Generator

Because the spatial role extractor produces only entity tags, we do not know which entity is related to which entity, especially when there are multiple relations; moreover, we do not know the relation type to which they belong. The triple candidate generator produces all possible combinations of the given spatial roles, and the spatial relation extractor then determines which combination and type should be chosen. For example, in the sentence shown in Figure 4 ("A bike is by the warehouse and a puppy is in front of the gate"), we have two trajectors, 'bike' and 'puppy'; two landmarks, 'warehouse' and 'gate'; and two triggers, 'by' and 'in front of'. The triple candidate generator produces all combinations of trajector, landmark, and trigger; in this case, it produces 8 (2*2*2) triple candidates. Generally, for a set of trajectors T, a set of landmarks L, and a set of triggers G, we have |T|*|L|*|G| Cartesian-product triple candidates, as in the sketch below.
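For illustration, the enumeration itself is a plain Cartesian product; the minimal Python sketch below (our own, not the authors' code) reproduces the example above, with validity left to the relation classifier.

from itertools import product

def triple_candidates(trajectors, landmarks, triggers):
    # |T| * |L| * |G| candidate triples; whether each triple is a valid
    # relation is decided later by the relation classifier, not here.
    return list(product(trajectors, landmarks, triggers))

cands = triple_candidates(["bike", "puppy"],
                          ["warehouse", "gate"],
                          ["by", "in front of"])
assert len(cands) == 8  # 2 * 2 * 2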
Spatial Relation Extraction

Spatial relation extraction is the task of identifying the relation between given entities, in our case triples of entities. A similar task has been addressed with BERT in semantic role labeling (SRL), in which a relationship is classified for two given semantic role arguments (Wu and He, 2019). This model showed the best performance in SRL, and we refer to it as R-BERT in this paper. We adopted R-BERT for spatial relation extraction, but modified two aspects of the model: we extended two arguments to three arguments, and we include a null argument for the case of a movement link. Figure 5 shows the structure of the modified model. A sentence, marked with a triple candidate, is input to BERT. The BERT outputs of each of the roles in the triple and of the CLS token are averaged and passed through a fully connected network. The four outputs are concatenated and passed through a fully connected network again, and the softmax of the output is the final result that determines the validity of the triple relation.

For each argument span, the input format is extended to include the start index and end index along with the words: [words, start index, end index]. In the case of two arguments, we utilize formulae (1) to (5) for the spans (i, j) and (k, m):

H'_0 = W_0 [\tanh(H_0)] + b_0                                        (1)
H'_1 = W_1 [\tanh(\frac{1}{j-i+1} \sum_{t=i}^{j} H_t)] + b_1         (2)
H'_2 = W_2 [\tanh(\frac{1}{m-k+1} \sum_{t=k}^{m} H_t)] + b_2         (3)
h'' = W_3 [\mathrm{concat}(H'_0, H'_1, H'_2)] + b_3                  (4)
p = \mathrm{softmax}(h'')                                            (5)

We extended the model to operate on the three arguments of spatial information extraction. We added one additional tanh output for a trigger with span (q, r), as shown in formula (6), and modified formula (4) to formula (7) to include the trigger:

H'_3 = W_4 [\tanh(\frac{1}{r-q+1} \sum_{t=q}^{r} H_t)] + b_4         (6)
h'' = W_3 [\mathrm{concat}(H'_0, H'_1, H'_2, H'_3)] + b_3            (7)

For the null argument in the case of a movement link, we utilize the last character in the sentence. For example, the sentence "John leaves from school." contains a mover, "John," and a motion, "leaves," but it does not have a goal. In this case, we represent the goal as the null argument, giving the three argument spans ['John', 0, 0], ['.', 4, 4], and ['leaves', 1, 1]. A sketch of this three-argument classification head is given below.
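As an illustration of formulae (1)-(7), the following PyTorch sketch implements the three-argument classification head over precomputed BERT outputs. It is a minimal reconstruction for exposition, not the authors' released code; the class and layer names are ours, and span handling is simplified to one shared span triple per batch.

import torch
import torch.nn as nn

class TripleRelationHead(nn.Module):
    def __init__(self, hidden=768, n_labels=2, dropout=0.1):
        super().__init__()
        self.drop = nn.Dropout(dropout)
        self.cls_fc = nn.Linear(hidden, hidden)                  # W_0, b_0 in (1)
        self.arg_fc = nn.ModuleList([nn.Linear(hidden, hidden)   # W_1, W_2, W_4
                                     for _ in range(3)])         # in (2), (3), (6)
        self.out_fc = nn.Linear(4 * hidden, n_labels)            # W_3, b_3 in (7)

    @staticmethod
    def span_avg(H, start, end):
        # average the hidden states over the inclusive span [start, end]
        return H[:, start:end + 1, :].mean(dim=1)

    def forward(self, H, spans):
        # H: (batch, seq_len, hidden) BERT outputs;
        # spans: [(i, j), (k, m), (q, r)] for the three arguments
        h0 = self.cls_fc(torch.tanh(H[:, 0, :]))                 # eq. (1)
        hs = [fc(torch.tanh(self.span_avg(H, s, e)))             # eqs. (2), (3), (6)
              for fc, (s, e) in zip(self.arg_fc, spans)]
        logits = self.out_fc(self.drop(torch.cat([h0] + hs, dim=-1)))  # eq. (7)
        return torch.softmax(logits, dim=-1)                     # eq. (5)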
[Figure 1: Pipeline model for spatial information extraction]
[Figure 2: Example of sharing trajector and landmark roles: 'vase' has two roles, trajector and landmark]
[Figure 3: Joint model for spatial element and spatial role extraction]
[Figure 4: Example of multiple triplets in a sentence, represented as triples [trigger, trajector, landmark]. Sentence: "A bike is by the warehouse and a puppy is in front of the gate"; triples: ['bike', 'warehouse', 'by'], ['puppy', 'gate', 'in front of']]
[Figure 5: Spatial relation extraction model using R-BERT]

Experiments

Settings and dataset

For both the spatial element model and the relation role extraction model, the hyper-parameters in Table 2 are used. We use the pre-trained cased BERT-BASE model for fine-tuning, available from the GitHub site. An experiment was conducted with the dataset of SemEval-2015 Task 8: SpaceEval; Table 3 shows the statistics of the dataset. Because non-motion events and MeLink are usually not evaluated in spatial information extraction tasks, they were excluded from our experiment. We also added invalid triplets generated by the triple candidate generator for training; these triplets accounted for approximately 40% of all data and were used as negative data in the training data.

Results

Because the SpRL-CWW model (Nichols and Botros, 2015) was best in SpaceEval, it was used as the baseline model in this evaluation. In our model, spatial elements and spatial roles are jointly trained and extracted. Because the spatial roles depend on the link relation type, we have four types of joint models: QSLink, OLink, MvLink, and MeLink. The evaluation results for these models are shown in Table 4. Overall, the performance for element extraction was better than that for role extraction. Moreover, the Joint-with-QSLink model showed the worst performance, whereas the Joint-with-MeLink model showed the best performance.

Table 5 shows a performance comparison for all elements of the baseline model and our model. The performance was compared with our Joint-with-QSLink model, which showed the worst performance. The results show that the performance on every spatial element increased significantly in comparison with the baseline model, and the micro-average of our model was 15.4% points better than that of the baseline. This demonstrates that a BERT-based deep-learning model for element extraction improves performance more effectively than traditional machine-learning models, such as CRF models, which use features extracted through various preprocessors.

Table 5: Performance comparison of spatial element extraction (F1 score).
Spatial Elements   baseline   ours   ∆ (ours - base)
Place              74.7       86.8   +12.1
Path               61.7       94.9   +33.2
Spatial entity     80.8       89.9   +9.1
Motion             76.9       94.3   +17.4
Motion signal      78.6       90.7   +12.1
Spatial signal     70.9       85.9   +15.0
Measure            79.1       98.3   +19.2
Non-motion         56.4       89.4   +33.0
Micro-average      74.6       90.0   +15.4

Table 6 shows a comparison of the spatial relation extraction performance of selected models. We measured their average performance over only three link types (QSLink, OLink, and MvLink), because previous studies chose these for evaluation. Even though the relation type was limited to static relations in SemEval-2012 and SemEval-2013, we included the two best research results, those of Roberts and Haragagiu (2012) and Mazalov et al. (2015), for comparison. Our model outperformed all of the other models compared here. This shows that our relation classification model based on R-BERT is very effective for spatial relation extraction.

[Table 6: Spatial relation extraction performance for comparison. †: SemEval-2012 dataset, relation with general type; ‡: SemEval-2013 dataset, average of two datasets]

Ablation study

In order to observe the effects of the proposed features, in this case the traLand tag and the joint model of the spatial elements and roles, we conducted an ablation test. Table 7 shows the result. The simple spatial role extraction model (w/o joint in the table) performs very poorly, at 25.2% F1. Our analysis shows that this occurs because the data for some roles are very sparse in the SemEval dataset, degrading the performance. This problem is mitigated in the joint model through the use of the element data. The model without the traLand tag is tested in two ways: replacing the tag with a trajector tag, and replacing it with a landmark tag.
The two models are degraded by 22.5% points F1 and 14.8% points F1, respectively. These results show that the two proposed features had a positive impact on the performance; in particular, the joint model feature greatly improved the performance.

Table 7: Ablation test of models without the joint training feature and the dual-role tag (traLand).
models                            prec   recall   F1     ∆ (F1)
proposed                          62.7   59.8     61.2   0
w/o joint                         25.7   24.6     25.2   -36.0
w/o traLand (repl w/ trajector)   26.5   71.6     38.7   -22.5
w/o traLand (repl w/ landmark)    43.5   49.7     46.4   -14.8

Conclusion

Spatial information extraction is necessary for many applications, such as robot navigation and question-answering systems, to understand geographical information in text. The task is processed largely via two subtasks: spatial element extraction and spatial relation extraction. In this paper, we proposed a BERT-based spatial information extraction model that uses BERT (Devlin et al., 2018) for spatial element extraction and R-BERT (Wu and He, 2019) for spatial relation extraction. The two modules are connected in a pipeline through a triple candidate generator. Spatial elements are extracted jointly with spatial roles, which are input for spatial relation extraction. The joint model helps increase the performance of spatial role extraction in some cases, which is more useful for relation extraction. R-BERT, originally used for semantic role labeling, was modified here to handle three arguments and a null argument for spatial relation extraction. Our model was evaluated with the SemEval 2015 dataset. The results showed a 15.4% point improvement in spatial element extraction and an 8.2% point improvement in spatial relation extraction in comparison to the baseline model (Nichols and Botros, 2015). This demonstrates that our BERT-based model is very effective for spatial information extraction.

References

Soham Dan, Hangfeng He, and Dan Roth. 2020. Understanding spatial relations through multiple modalities. In Proceedings of the 12th Language Resources and Evaluation Conference (LREC 2020), pages 2368-2372.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Jennifer D'Souza and Vincent Ng. 2015. UTD: Ensemble-based spatial relation extraction. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015).
ISO. 2014. ISO 24617-7:2014(E). Language resource management - Semantic annotation framework - Part 7: Spatial information (ISOspace).
Bogyum Kim and Jae Sung Lee. 2016. Extracting spatial entities and relations in Korean text. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2389-2396.
Parisa Kordjamshidi, Martijn Van Otterlo, and Marie-Francine Moens. 2011. Spatial role labeling: Towards extraction of spatial relations from natural language. ACM Transactions on Speech and Language Processing (TSLP), 8(3):1-36.
Parisa Kordjamshidi, Taher Rahgooy, Marie-Francine Moens, James Pustejovsky, Umar Manzoor, and Kirk Roberts. 2017. CLEF 2017: Multimodal spatial role labeling task working notes. In CLEF (Working Notes).
John Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning (ICML 2001), pages 282-289.
Alexey Mazalov, Bruno Martins, and David Matos. 2015. Spatial role labeling with convolutional neural networks. In Proceedings of the 9th Workshop on Geographic Information Retrieval.
Eric Nichols and Fadi Botros. 2015. SpRL-CWW: Spatial relation classification with independent multi-class models. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015).
Jeffery Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365.
James Pustejovsky, Parisa Kordjamshidi, Marie-Francine Moens, Aaron Levine, Seth Dworman, and Zachary Yocum. 2015. SemEval-2015 Task 8: SpaceEval. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 884-894.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf
Kirk Roberts and Sanda M. Haragagiu. 2012. UTD-SpRL: A joint approach to spatial role labeling. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics and the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 419-424.
Haritz Salaberri, Olatz Arregi, and Beñat Zapirain. 2015. IXAGroupEHUSpaceEval (X-Space): A WordNet-based approach towards the automatic recognition of spatial information following the ISO-Space annotation scheme. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015).
J. A. K. Suykens and J. Vandewalle. 1999. Least squares support vector machine classifiers. Neural Processing Letters, 9(3):293-300.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.
Shanchan Wu and Yifan He. 2019. Enriching pre-trained language model with entity information for relation classification. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 2361-2364.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, pages 5753-5763.
Éloi Zablocki, Patrick Bordes, Laure Soulier, Benjamin Piwowarski, and Patrick Gallinari. 2017. LIP6@CLEF2017: Multi-modal spatial role labeling using word embeddings. In CLEF (Working Notes).
Evaluating the Examiner: The Perils of Pearson Correlation for Validating Text Similarity Metrics
In recent years, researchers have developed question-answering based approaches to automatically evaluate system summaries, reporting improved validity compared to word overlap-based metrics like ROUGE, in terms of correlation with human ratings of criteria including fluency and hallucination. In this paper, we take a closer look at one particular metric, QuestEval, and ask whether: (1) it can serve as a more general metric for long document similarity assessment; and (2) a single correlation score between metric scores and human ratings, as is currently standard, is sufficient for metric validation. We find that correlation scores can be misleading, and that score distributions and outliers should be taken into account. With these caveats in mind, QuestEval can be a promising candidate for long document similarity assessment.
Gisela Vallejo (The University of Melbourne) gvallejo@student.unimelb.edu.au
Timothy Baldwin (The University of Melbourne / MBZUAI)
Lea Frermann (The University of Melbourne) lea.frermann@unimelb.edu.au

Introduction

Methods which can provide accurate estimates of document content similarity are critical to tasks such as news analysis and fact-checking (Shaar et al., 2020). Researchers have proposed a broad range of metrics to estimate document similarity (Sai et al., 2020), from n-gram overlap metrics such as BLEU (Papineni et al., 2002) and Meteor (Lavie and Agarwal, 2007) for machine translation, and ROUGE (Lin, 2004) for automatic summarisation, to embedding-based metrics such as BERTScore (Zhang et al., 2020) and MoverScore (Zhao et al., 2019). However, these metrics have been shown to rely heavily on superficial features, correlate poorly with human annotations, and perform poorly over longer document pairs (Hanna and Bojar, 2021; Balasubramanian et al., 2020; Kryscinski et al., 2019; Koto et al., 2022).

A more radical recent proposal has been to use question-answering (QA) based models (Wang et al., 2020; Scialom et al., 2021) to automatically generate question-answer pairs from a source document, and estimate similarity by the proportion of questions that can be successfully answered on the basis of the target document. While such approaches were designed to evaluate automatic summarisation in a reference-free manner, i.e., compare a full (long) document with its (short) summary, they can in principle be applied to arbitrary document pairs. In this paper we ask whether the QuestEval method (Scialom et al., 2021) scales to varying-length document pairs, and in particular whether it can be used to calculate the similarity between documents of the same length reliably. In other words, we compare two evaluation settings: long-short document pairs vs. documents of the same length. In Table 1, we present the different document length scenarios in terms of average length.

Consistent with other work on the evaluation of similarity metrics (including the original QuestEval paper), we explore this question by measuring the Pearson correlation between the estimated similarity scores and a gold standard. Pearson correlation is notoriously susceptible to outliers (Sai et al., 2020; Mathur et al., 2020), so in addition to the raw correlation values, we perform a detailed analysis of the distribution of the gold and predicted similarity scores (via inspection of scatter plots).
We find that reported correlations can be inflated by a small number of outliers, caused by a skewed distribution in the gold standard, and are thus not fully reflective of the quality of QuestEval. Our contributions are as follows: (1) we evaluate QuestEval on three different datasets, and demonstrate that it is robust to increasing document lengths; (2) we showcase the perils of presenting Pearson correlation coefficients for metric evaluation in isolation, without examining the raw data distribution; and (3) we suggest visualization strategies which expose possible data biases to aid the interpretation of raw correlation values.

Background

Evaluating text similarity evaluation

The most common automatic metrics for evaluating summarisation, such as BLEU, ROUGE, and BERTScore, measure lexical overlap. In the case of BLEU and ROUGE, this is based on n-gram overlap, interpolated over different values of n, with an additional brevity penalty in the case of BLEU. BERTScore, on the other hand, abstracts away from the tokens, calculating similarity based on contextualized embeddings of each token in the respective documents. While these metrics are computationally inexpensive, they do not penalize critical content divergences (e.g., due to "hallucination" under summarisation: Wang et al. (2020)) or repetitions, and are poor at capturing meaning-critical differences in polarity. Such shortcomings were a large part of the motivation behind QA-based metrics such as QuestEval, which were shown by the authors to be more adept at evaluating factual consistency. We note that subsequent work by Koto et al. (2022) showed that with appropriate model and layer selection, BERTScore is actually superior in evaluating all aspects of summary quality, including factuality. Additionally, unlike the metrics above, QuestEval does not require a reference summary, as it is exclusively based on the consistency between document and generated summary (although varieties of the metric can leverage human annotations).

QuestEval

QuestEval is a QA-based pipeline that generates question-answer pairs from a source document, and measures similarity by the proportion of those questions which can be successfully answered based on the target document. While in the context of summarisation evaluation this is based on the source document and summary, respectively (to test how faithfully the summary captures the content of the source document), it can be applied to document similarity by performing the calculation in both directions and averaging. That is, for a document pair (d_i, d_j), separate scores can be calculated taking each of d_i and d_j as the source document and the remaining document as the target document; a sketch of this symmetric scoring is given below.
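As an illustration, the symmetric scoring can be sketched with the questeval package. The call below follows the corpus_questeval interface documented in the project's README, which may differ from the v0.1.1 release used for our experiments; the wrapper function is ours.

from questeval.questeval_metric import QuestEval

questeval = QuestEval()  # default summarization configuration

def pair_score(doc_a: str, doc_b: str) -> float:
    # Run QuestEval in both directions (each document once as source,
    # once as hypothesis) and average the two corpus-level scores.
    a_to_b = questeval.corpus_questeval(hypothesis=[doc_b], sources=[doc_a])
    b_to_a = questeval.corpus_questeval(hypothesis=[doc_a], sources=[doc_b])
    return (a_to_b["corpus_score"] + b_to_a["corpus_score"]) / 2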
QuestEval consists of a question generation (QG) and a question answering (QA) model. In question generation, QuestEval selects nouns and named entities as gold-standard answers, and generates questions for them. The model generates questions for each of the nouns and named entities, and discards the ones that the QA module is not able to answer correctly. The QuestEval metric comprises two evaluations, which measure whether the summary contains only true information (precision) and, conversely, whether it contains all important information (recall). Both the QG and QA components are fine-tuned versions of T5 (Raffel et al., 2020) using SQuAD-v2 (Rajpurkar et al., 2018). Even though SQuAD (where answers are generated based on Wikipedia paragraphs) is not comparable to typical summarization datasets, which consist of news articles, the original QuestEval paper showed that the method is robust to the domain shift between component pre-training data and final application. This paper further asks whether QuestEval extends to document similarity assessment more generally, between arbitrary document pairs. It is worth mentioning that the typical 512-token input limit of pre-trained language models does not affect QuestEval, because the model generates and answers questions based on pre-identified nouns in their local context of five sentences. Thus, there is no limit on the document length that QuestEval can be applied to.

Experimental Setup

Here, we describe the datasets and evaluation methods we use to test QuestEval's applicability to long documents, as well as its reliability across datasets and reference annotations.

Data

We experiment with three datasets: (1) SummEval, made up of article-summary pairs (long-short); (2) ABC News, consisting of article-article pairs (long-long); and (3) SemEval, also made up of article-article pairs (long-long). In each case, a given document pair is associated with one or more gold-standard labels.

SummEval (Fabbri et al., 2021) consists of 1600 summaries generated by 16 different models for a random sample of 100 articles from the CNN/DailyMail dataset (Hermann et al., 2015), and was used in the original QuestEval publication (Scialom et al., 2021). The average length of each generated summary and source document is 63 and 359 words, respectively. Each summary was rated by three experts and five non-experts (crowdworkers) regarding coherence, consistency, fluency, and relevance. In our experiments, we only use the expert ratings for all four dimensions. Note that coherence and fluency are intrinsically intra-document properties, independent of the source document. As such, QuestEval is a slightly odd choice of method, given that it compares the source document with the summary. In line with the original QuestEval paper, however, we include these results based on the hypothesis that there should be some influence on the ability to correctly answer questions if the summary lacks coherence or fluency.

ABC News (Lee et al., 2005) consists of 1225 document pairs, created by exhaustively pairing 50 news articles taken from the Australian Broadcasting Corporation (ABC) news service. The average article length is 86 words. Each article pair was rated by 8-10 annotators for similarity on a five-point scale from 1 (highly unrelated) to 5 (highly related). In our experiments, we compare QuestEval scores against the average annotated similarity per article pair.

SemEval (Chen et al., 2022) was published as part of SemEval-2022 Task 8: Multilingual news article similarity. The full dataset contains 10K pairs of documents from 10 languages, including both monolingual (two documents in the same language, e.g., English) and cross-lingual (documents in different languages, e.g., English vs. Arabic) pairs. Here we only use the 1348 pairs of the training set. [1] [2]

Validating QuestEval Scores

We obtained QuestEval scores for all three datasets using QuestEval version 0.1.1 [3] and calculated the Pearson and Spearman correlation coefficients of the respective gold labels with our QuestEval scores. We report the results in Table 2. It is widely known that correlation scores are susceptible to outliers (Sai et al., 2020; Mathur et al., 2020), rendering the findings less robust. To assess the robustness of observed correlations, we additionally inspect the full distributions of gold ratings and QuestEval scores in Figure 1, in the form of kernel density estimation (KDE) plots onto which we superimpose the regression line of best fit based on Pearson correlation; a plotting sketch is given below. We also include the raw scatter plots in Appendix C for comparison.

[1] Noting that the script for reproducing the dataset occasionally failed, so that we evaluate on 74% of the data described in Chen et al. (2022).
[2] The original annotations were collected on the reverse scale (4: most dissimilar), but we flip the scores for consistency with the other results.
[3] The authors provide this link with the source code to reproduce the scores reported in the paper: https://github.com/recitalAI/QuestEval/releases/tag/v0.1.1
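A minimal sketch of this visual check, assuming gold and scores are aligned 1-D arrays; the helper function is ours, built on standard seaborn/matplotlib calls.

import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns

def kde_with_fit(gold, scores, xlabel="human rating", ylabel="QuestEval score"):
    # 2-D density of the raw points, plus the least-squares line whose
    # slope corresponds to the Pearson fit superimposed in Figure 1.
    ax = sns.kdeplot(x=gold, y=scores, fill=True, cmap="viridis")
    slope, intercept = np.polyfit(gold, scores, deg=1)
    xs = np.linspace(min(gold), max(gold), 100)
    ax.plot(xs, slope * xs + intercept, color="red")
    ax.set_xlabel(xlabel)
    ax.set_ylabel(ylabel)
    plt.show()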
It is widely known that correlation scores are susceptible to outliers (Sai et al., 2020; Mathur et al., 2020), rendering the findings less robust. To assess the robustness of observed correlations, we additionally inspect the full distributions of gold ratings and QuestEval scores in Figure 1 in the form of kernel density estimation (KDE) plots, onto which we superimpose the regression line of best fit based on Pearson correlation. We also include the raw scatter plots in Appendix C for comparison.
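One way to produce plots in the spirit of Figure 1 is to combine a bivariate KDE with the least-squares line that underlies Pearson correlation; a minimal sketch (library choices and axis labels are ours):

```python
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

def kde_with_fit(gold, scores, xlabel="human rating", ylabel="QuestEval"):
    """Bivariate KDE of the raw data with the regression line superimposed."""
    gold, scores = np.asarray(gold, float), np.asarray(scores, float)
    sns.kdeplot(x=gold, y=scores, fill=True)
    slope, intercept = np.polyfit(gold, scores, deg=1)  # line of best fit
    xs = np.linspace(gold.min(), gold.max(), 100)
    plt.plot(xs, slope * xs + intercept)
    plt.xlabel(xlabel)
    plt.ylabel(ylabel)
    plt.show()
```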
Results

In analysing the results, we investigate: (1) whether QuestEval is document-length agnostic, i.e., scales from the original scenario of article-summary (long-short) similarity to estimating article-article (long-long) similarity in terms of raw Pearson correlation scores; (2) whether QuestEval correlates with ratings of document similarity, departing from the dimensions of coherence, consistency, fluency, and relevance as originally assessed; and (3) how robust the observed Pearson and Spearman correlations are across all data conditions and ground-truth labels.

Figure 1: Visualised correlation (heat map of raw data + correlation line) for QuestEval with several human-annotated metrics for SummEval, ABC News, and SemEval.

QuestEval as a measure of long document similarity

The correlation coefficients reported in Table 2 address questions (1) and (2). The top block in the table shows our reproduction of the original QuestEval evaluation setup (Scialom et al., 2021); compared to QUESTEVAL W uniform, our coherence, consistency, and relevance scores are 1-2 points lower and our fluency scores are 1.3 points higher than those reported in the paper, and we also include Spearman, which is not reported in the original paper. Our numbers are comparable to the original reported scores, and confirm that QuestEval best captures consistency (i.e., content similarity) and to a lesser extent accounts for the other three axes of summary quality. The bottom block of Table 2 shows the correlation of QuestEval with the respective manual document similarity scores in the ABC News and SemEval datasets. Both either approach or exceed the best evaluation score obtained for summary evaluation, suggesting that the metric can indeed be employed to estimate long document similarity. However, given the coefficient's high sensitivity to outliers (and consequently to the distribution of reference and QuestEval scores), we next assess the robustness of the reported scores.

Robustness of QuestEval validation

Validating automatic evaluation metrics in terms of their correlation to human labels seems intuitive; however, correlation scores like Pearson are susceptible to outliers. This is particularly pertinent in cases where rank (or label) distributions are skewed, as is often the case when collecting human similarity ratings. Consider the data densities implied for the human quality/similarity ratings in Figure 1, i.e., densities along the x-axis. For most metrics (with the exception of relevance and coherence in SummEval), human labels are concentrated at one end of the spectrum, suggesting that instances labelled with unusual ratings are outliers and to some degree atypical. We can thus achieve high Pearson correlation scores under these highly atypical data conditions. Conversely, if the outliers were removed, the correlation would drop substantially.

Following Mathur et al. (2020), we removed outliers in all datasets based on the QuestEval scores $x$ by means of the median absolute deviation (MAD): a data point is removed if

$$\frac{|x - \mathrm{median}(x)|}{\mathrm{MAD}(x)} > \mathrm{cutoff}.$$

We selected a different cutoff for each of the datasets, using box plots as reference, and report the cutoffs and the total number of removed outliers in Table 4. Raw scatter plots of the data including the removed outliers are illustrated in Figure 2.

Table 4: Selected cutoff parameter for each dataset for outlier removal, as well as the total number of removed outliers.

Data       Cutoff   # of Outliers
ABC News   5.5      20
SemEval    10       39
SummEval   3.5      16
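The outlier filter is easy to reproduce; a sketch (the cutoff values are those from Table 4):

```python
import numpy as np

def outlier_mask(scores, cutoff):
    """Boolean mask marking QuestEval scores that are MAD outliers,
    following Mathur et al. (2020)."""
    x = np.asarray(scores, dtype=float)
    median = np.median(x)
    mad = np.median(np.abs(x - median))  # median absolute deviation
    return np.abs(x - median) / mad > cutoff

# e.g. for ABC News: keep = ~outlier_mask(questeval_scores, cutoff=5.5)
```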
Table 3: Pearson (r) and Spearman (ρ) correlation coefficients, after removing outliers. We underline the most drastic drops.

We report the obtained results in Table 3 and show how the correlations drop for all datasets. The effect is particularly pertinent in the case of ABC News, with a drop of about 22 absolute points in Pearson correlation. Here, the removal of a small number of outliers (similarity > 4.0) reduces the correlation to close to zero. On the other hand, for the SemEval 2022 documents, we observe a relatively wide spread of human labels, and a correspondingly small impact of removing outliers, and can conclude that the high correlation with QuestEval scores (Table 2) is reliable. We observe a similar trend for the best-correlated SummEval score of Consistency, for which 89.4% of the data points were labeled with a score > 4.0. SummEval Relevance and Coherence scores are more evenly spread, leading to lower, albeit much more robust, estimates of Pearson correlation. Beyond that, we are aware that Pearson correlation is sensitive to outliers and Spearman correlation is less robust when the distribution happens to have clusters. None of these metrics is perfect, and it is therefore crucial to understand the data, plot the distributions in scatter plots, and assess how informative the correlation coefficients are.

Analysis and Discussion

From our results we can observe that summarisation evaluation metrics, and more specifically QuestEval, have utility for tasks beyond summarisation, especially where there is no access to gold human annotations. In our case, we showed that QuestEval scores do correlate with the overall news article similarity scores of SemEval. However, this is not the case for every metric, as we were also able to show with dimensions like document similarity, consistency, and fluency. Moreover, we showed that, in isolation, Pearson correlation coefficients with human ratings are not a reliable signal for the quality of an evaluation metric, due to their sensitivity to outliers. We recommend visualising score distributions in tandem with calculating the correlation, to ensure that it is not affected by a minority of outliers. This is consistent with the observations of Mathur et al. (2020) in their analysis of WMT task results. We observed that QuestEval scores are distributed in the range of 0-1 for almost all datasets/measurements except for ABC News, motivating us to look more closely at this dataset. In the Appendix we present some examples with high document similarities but low QuestEval scores. While we are aware that the QuestEval values are lower than expected for those examples, the similarity rating is also arguable. For both cases, almost none of the entities overlap in the depicted documents; this could be the reason why the QuestEval scores are low. We also propose to take several correlation coefficients into consideration, as we show in Table 2. In addition, it is important to understand the data by plotting it to look for useful patterns.

Conclusion

In this paper we investigated whether automatic QA-based metrics for summarisation evaluation can be adopted to compare long documents. We also conducted a more detailed evaluation of the robustness of Pearson correlation for similarity metric evaluation, and found that correlation-based metrics need to be validated by plotting and understanding label and score distributions. In future work, we plan to extend our work to different languages.

A Limitations

We are aware that our analysis may be biased because we focus only on English data. Additionally, due to time constraints we were not able to comprehensively clean the SemEval data, so there may be remnant noise.

B ABC News Examples

See Table 5 for examples where the gold-standard similarity is high but the QuestEval score is exceedingly low, compared to a sample of documents that are indeed very similar and get high scores from annotations as well as from QuestEval.

C Scatterplots

Figure 2 is a complement to the kernel density plots of Figure 1, and presents the raw scatter plots for the different datasets and removed outliers.

Table 5: ABC News document pairs with averaged human similarity and QuestEval scores.

Averaged Similarity: 3.7 - QuestEval Score: 0.0004

Document 1: The Bush administration has drawn up plans to escalate the war of words against Iraq, with new campaigns to step up pressure on Baghdad and rally world opinion behind the US drive to oust President Saddam Hussein. This week, the State Department will begin mobilising Iraqis from across North America, Europe and the Arab world, training them to appear on talk shows, write opinion articles and give speeches on reasons to end President Saddam's rule.

Document 2: The Iraqi capital is agog after the violent death of one of the world's most notorious terrorists, but the least of the Palestinian diplomat's worries was the disposal of Abu Nidal's body, which lay on a slab in an undisclosed Baghdad morgue. Abu Nidal's Fatah Revolutionary Council is held responsible for the death or injury of almost 1000 people in 20 countries across Europe and the Middle East in the three decades since he fell out with Yasser Arafat over what Abu Nidal saw as Arafat's willingness to accommodate Israel in the Palestinian struggle.
Averaged Similarity: 3.9 - QuestEval Score: 0.0003

Document 1: U.S. intelligence cannot say conclusively that Saddam Hussein has weapons of mass destruction, an information gap that is complicating White House efforts to build support for an attack on Saddam's Iraqi regime. The CIA has advised top administration officials to assume that Iraq has some weapons of mass destruction. But the agency has not given President Bush a "smoking gun," according to U.S. intelligence and administration officials.

Document 2: The Iraqi capital is agog after the violent death of one of the world's most notorious terrorists, but the least of the Palestinian diplomat's worries was the disposal of Abu Nidal's body, which lay on a slab in an undisclosed Baghdad morgue. Abu Nidal's Fatah Revolutionary Council is held responsible for the death or injury of almost 1000 people in 20 countries across Europe and the Middle East in the three decades since he fell out with Yasser Arafat over what Abu Nidal saw as Arafat's willingness to accommodate Israel in the Palestinian struggle.

Averaged Similarity: 5.0 - QuestEval Score: 0.182

Document 1: An Islamic high court in northern Nigeria rejected an appeal today by a single mother sentenced to be stoned to death for having sex out of wedlock. Clutching her baby daughter, Amina Lawal burst into tears as the judge delivered the ruling. Lawal, 30, was first sentenced in March after giving birth to a daughter more than nine months after divorcing.

Document 2: Nigerian President Olusegun Obasanjo said he will weep if a single mother sentenced to death by stoning for having a child out of wedlock is killed, but added he has faith the court system will overturn her sentence. Obasanjo's comments late Saturday appeared to confirm he would not intervene directly in the case, despite an international outcry.

References

Sriram Balasubramanian, Naman Jain, Gaurav Jindal, Abhijeet Awasthi, and Sunita Sarawagi. 2020. What's in a name? Are BERT named entity representations just as good for any other name? In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 205-214, Online. Association for Computational Linguistics.

Xi Chen, Ali Zeynali, Chico Camargo, Fabian Flöck, Devin Gaffney, Przemyslaw Grabowicz, Scott Hale, David Jurgens, and Mattia Samory. 2022. SemEval-2022 task 8: Multilingual news article similarity. In Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022), pages 1094-1106, Seattle, United States. Association for Computational Linguistics.

Alexander R. Fabbri, Wojciech Kryściński, Bryan McCann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. SummEval: Re-evaluating summarization evaluation. Transactions of the Association for Computational Linguistics, 9:391-409.
Michael Hanna and Ondřej Bojar. 2021. A fine-grained analysis of BERTScore. In Proceedings of the Sixth Conference on Machine Translation, pages 507-517, Online. Association for Computational Linguistics.

Karl Moritz Hermann, Tomás Kociský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 1693-1701.

Fajri Koto, Jey Han Lau, and Timothy Baldwin. 2022. FFCI: A framework for interpretable automatic evaluation of summarization. Journal of Artificial Intelligence Research, 73:1553-1607.

Wojciech Kryscinski, Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Neural text summarization: A critical evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 540-551, Hong Kong, China. Association for Computational Linguistics.

Alon Lavie and Abhaya Agarwal. 2007. METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 228-231, Prague, Czech Republic. Association for Computational Linguistics.

Michael D. Lee, Brandon Pincombe, and Matthew Welsh. 2005. An empirical evaluation of models of text document similarity. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 27.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.

Nitika Mathur, Timothy Baldwin, and Trevor Cohn. 2020. Tangled up in BLEU: Reevaluating the evaluation of automatic machine translation evaluation metrics. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4984-4997, Online. Association for Computational Linguistics.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21:140:1-140:67.

Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784-789, Melbourne, Australia. Association for Computational Linguistics.

Ananya B. Sai, Akash Kumar Mohankumar, and Mitesh M. Khapra. 2020. A survey of evaluation metrics used for NLG systems. CoRR, abs/2008.12009.
Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano, Alex Wang, and Patrick Gallinari. 2021. QuestEval: Summarization asks for fact-based evaluation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6594-6604, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Shaden Shaar, Nikolay Babulkov, Giovanni Da San Martino, and Preslav Nakov. 2020. That is a known lie: Detecting previously fact-checked claims. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3607-3618, Online. Association for Computational Linguistics.

Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020. Asking and answering questions to evaluate the factual consistency of summaries. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5008-5020, Online. Association for Computational Linguistics.

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.

Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M. Meyer, and Steffen Eger. 2019. MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 563-578, Hong Kong, China. Association for Computational Linguistics.
198,992,720
[]
Analysing Rhetorical Structure as a Key Feature of Summary Coherence

Jan Šnajder (jan.snajder@fer.hr), TakeLab, Faculty of Electrical Engineering and Computing, University of Zagreb; Tamara Sladoljev-Agejev, Faculty of Economics and Business, University of Zagreb; Svjetlana Kolić-Vehovec, Department of Psychology, Faculty of Humanities and Social Sciences, University of Rijeka

Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, Florence, Italy, August 2, 2019. Association for Computational Linguistics.

We present a model for automatic scoring of coherence based on comparing the rhetorical structure (RS) of college student summaries in L2 (English) against expert summaries. Coherence is conceptualised as a construct consisting of a rhetorical relation and its arguments. Comparison with expert-assigned scores shows that RS scores correlate with both cohesion and coherence. Furthermore, RS scores improve the accuracy of a regression model for cohesion score prediction.

Introduction

Assessment of text quality may benefit from automatic scoring, as it is cognitively demanding and often requires much expertise (Rahimi et al., 2017), especially in college-level expository writing. One of the key aspects of text quality is writing coherence (Crossley and McNamara, 2010), which reflects students' ability to connect ideas in their mind and to convey the same message in essays or summaries (Halliday and Hasan, 2014). Existing approaches to text quality predominantly focus on surface measures for assessment (e.g., number of cohesive devices), which sometimes have little relation either to human judgment, e.g., text length (Mintz et al., 2014), or to text-specific meaning (Rahimi et al., 2017). However, automatic scoring of coherence should ideally provide clear and reliable feedback based on features with cognitive validity, e.g., (Loukina et al., 2015). One way to meet such requirements is to define coherence as the identification of relations between the text's ideas (Rapp et al., 2007). Such a definition may best be analysed in summaries, in which the key ideas of the source text are integrated into a rhetorical structure (RS).

In cognitive terms, writing summaries is an exercise in reading-for-understanding (RU) and gist reasoning (Chapman and Mudar, 2013). The result of such processes is the macrostructure of the source text constructed in the reader's mind (Louwerse and Graesser, 2005), which consists of concepts and propositions, their mutual relations (Sanders and Noordman, 2000), and relations with prior knowledge. Coherent summaries should express the intention of the source text (Hobbs, 1993) using linguistic devices (cohesion), which makes summarisation also a reading-to-write (RW) task (Delaney, 2008). Moreover, summaries have a distinctive feature for annotation: a largely shared knowledge base, i.e., the source text(s) known both to the writer and to the rater(s), which assists raters in their judgment and helps develop a reliable text-specific scoring tool.

In this paper we present a model for automatic scoring of summaries based on analysing the rhetorical structure of a student's summary compared to that of reference summaries. Our starting point is coherence conceptualized as a construct consisting of three elements: a rhetorical relation and its two arguments.
We posit that expository text has a rhetorical structure (RS) consisting of a series of text-specific rhetorical segments, the majority of which will be conveyed in a coherent summary if full text-level comprehension is achieved. The model uses a discourse parser to extract the rhetorical structures of summaries, and then compares the similarity of these structures. We show that the scores produced by the model correlate with expert-assigned cohesion and coherence scores as well as with surface indices of cohesion. We also show that the model-produced scores can be used to improve cohesion score prediction.

Related Work

Automatic assessment of text quality can include content, language accuracy, sophistication and style, as well as sometimes overlapping features such as topic similarity, focus, coherence, cohesion, readability, or text organisation and development, e.g., (Pitler et al., 2010; Yannakoudakis and Briscoe, 2012; Guo et al., 2013; Rahimi et al., 2015; Gao et al., 2018). Coherence is a broad concept assessed by different automatic tools, e.g., (Higgins et al., 2004; Yannakoudakis and Briscoe, 2012). Scoring measures may include surface features such as word or text length or the number of pronouns and connectives, e.g., (Yannakoudakis and Briscoe, 2012; MacArthur et al., 2018), which may also be contextualised, e.g., (Pitler et al., 2010). Source overlaps may also be used in scoring, such as overlapping n-grams in summaries (Madnani et al., 2013), and semantic similarity (e.g., LSA) may provide information on relatedness between words, e.g., lexical chaining (Somasundaran et al., 2014), sentences (Foltz et al., 1998; Higgins et al., 2004; Higgins and Burstein, 2007), or larger text sections (Crossley and McNamara, 2010). Both types of features (surface and LSA) are encompassed by Coh-Metrix (Graesser et al., 2004; McNamara et al., 2014), a comprehensive computational tool using a range of measures to grasp cognitive aspects of text analysis. Moreover, inter-sentential coherence can be measured using syntax-based entity grids (Barzilay and Lapata, 2008), for example, to distinguish between high- and low-coherence essays (Burstein et al., 2010), or by analysing discourse relations (Pitler and Nenkova, 2008; Skoufaki, 2009).

In order to improve the predictive value of automatic assessment, scoring measures are often combined. For example, Pitler and Nenkova (2008) use entity grids, syntactic features, discourse relations (Prasad et al., 2008), vocabulary, and length features. Yannakoudakis and Briscoe (2012) examine different measures and find that semantic similarity is the best addition to lexical and grammatical features. Somasundaran et al. (2014) combine lexical chains, grammar, word usage, mechanics, and RST discourse relations (Mann and Thompson, 1988) in L1 and L2 texts, while Higgins et al. (2004) use semantic similarity together with discourse structure to measure relatedness to the essay question and between discourse segments. More recently, Sladoljev-Agejev and Šnajder (2017) combine reference-based and linguistic features (e.g., Coh-Metrix, BLEU, ROUGE) to predict coherence and cohesion in college student summaries in L2.

The coherence assessment model presented here relies on summaries as an RU/RW task which consists of detecting and conveying the RS of the source text. Similar to Higgins et al. (2004), we use semantic similarity and rhetorical structure to assess the coherence of student summaries against summaries written by experts.
While Higgins et al. measured the coherence of functional discourse segments (e.g., thesis, conclusion) via semantic similarity between their respective sentences, in our study coherence is measured via similarity between rhetorical structures. Our intuition relies on the establishment of the source macrostructure as a coherence-building exercise during reading. Such an approach appears to be cognitively valid and may ensure meaningful feedback both in terms of comprehension and in terms of writing skills development or assessment. Our model is constrained by the source content, so we also compare its performance to cohesion features provided by Coh-Metrix in (Sladoljev-Agejev and Šnajder, 2017) to assess generic RW skills.

Summary Scoring Model

The summary scoring model works by comparing the RS of a student summary against the rhetorical structures of one or more reference summaries. The model produces a score that indicates to what extent the two structures overlap.

Discourse parsing. To extract the rhetorical relations and their arguments, we use the PDTB-style parser of Lin et al. (2014), a state-of-the-art, end-to-end parser which labels instances of both implicit and explicit relations as well as their argument spans. The PDTB relation labels are organized in a three-level hierarchy of "sense tags" (Prasad et al., 2008). The parser recognizes the first two levels: relation Class (e.g., Comparison) and Type (e.g., Contrast). The end-to-end performance of the parser, measured as F1-score under partial argument matching, is 48%. The output of this step is, for each summary $S$, a set of rhetorical relations $\{r_i\}_i$, where $r_i = (l_i, a_i^1, a_i^2)$ is a relation with class/type label $l_i$, while $a_i^1$ and $a_i^2$ are text segments corresponding to its arguments.

Comparing rhetorical structures. When comparing the similarity of summaries' rhetorical structures, we want the model to assign high scores to pairs of summaries that have many rhetorical relations in common. Of course, we cannot expect the arguments of rhetorical relations to be literally the same, but, if two relations of the same label are to be considered equivalent, their corresponding arguments should be highly semantically similar. We formalize this intuition by defining the weight $w_{ij}$ between a pair of rhetorical relations $r_i = (l_i, a_i^1, a_i^2)$ and $r_j = (l_j, a_j^1, a_j^2)$ as:

$$w_{ij} = \begin{cases} \frac{1}{2}\big(s(a_i^1, a_j^1) + s(a_i^2, a_j^2)\big) & \text{if } l_i = l_j, \\ 0 & \text{otherwise,} \end{cases}$$

where $s(\cdot, \cdot)$ is the semantic similarity between two text segments. In line with much of recent work, we rely on additive compositionality of word embeddings, and compute the semantic similarity as the cosine similarity between averaged word embeddings of the two segments. We use the 300-dimensional skip-gram word embeddings built on the Google News corpus (Mikolov et al., 2013; https://code.google.com/archive/p/word2vec/).

To compute the overlap score between a pair of summaries $S_1$ and $S_2$, each consisting of a set of rhetorical relations, we use the maximum bipartite graph matching algorithm (Kuhn, 1955). The graph edges represent pairs of relations $(r_i, r_j)$, $r_i \in S_1$, $r_j \in S_2$, weighted by $w_{ij}$. Let $n_1 = |S_1|$ and $n_2 = |S_2|$ be the numbers of rhetorical relations in $S_1$ and $S_2$, respectively, and $m$ the maximum matching score between $S_1$ and $S_2$. We define the precision ($P$) and recall ($R$) of the match as:

$$P = \frac{m - \max(0, n_1 - n_2)}{n_1}, \qquad R = \frac{m - \max(0, n_2 - n_1)}{n_2}.$$

The intuition is that precision is maximized if all relations from $S_1$ are perfectly matched to some relations from $S_2$, and conversely for recall. The F1-score is the harmonic mean of $P$ and $R$. Finally, we compute the F1-score of a student's summary $S$ as the mean of pairwise F1-scores between $S$ and both reference summaries.
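A minimal sketch of this scoring computation, using SciPy's Hungarian-algorithm implementation for the maximum-weight matching; the `embed` function (returning an averaged word-embedding vector for a text segment) is an assumption supplied by the caller:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def rs_overlap_f1(rels1, rels2, embed):
    """Overlap between two rhetorical structures.

    Each relation is a (label, arg1, arg2) triple; `embed(text)` is
    assumed to return an averaged word-embedding vector for a segment.
    """
    n1, n2 = len(rels1), len(rels2)
    if n1 == 0 or n2 == 0:
        return 0.0
    W = np.zeros((n1, n2))
    for i, (l1, a1, a2) in enumerate(rels1):
        for j, (l2, b1, b2) in enumerate(rels2):
            if l1 == l2:  # weight stays 0 for mismatched labels
                W[i, j] = 0.5 * (cosine(embed(a1), embed(b1)) +
                                 cosine(embed(a2), embed(b2)))
    rows, cols = linear_sum_assignment(-W)  # maximum-weight bipartite matching
    m = W[rows, cols].sum()
    p = (m - max(0, n1 - n2)) / n1
    r = (m - max(0, n2 - n1)) / n2
    return 2 * p * r / (p + r) if p + r > 0 else 0.0
```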
Evaluation

Dataset. For model evaluation, we adopt the dataset of Sladoljev-Agejev and Šnajder (2017). The dataset consists of a total of 225 text-present summaries (c. 300 words) of two articles, written by 114 first-year business undergraduates in English as L2 (mostly upper intermediate and advanced). Both articles (c. 900 words each) were taken from The Economist, a business magazine. Two expert raters used a 4-point analytic scale (grades 0-3) to assess the summaries in terms of coherence (RU) and cohesion (RW). The scales were quantified by defining the number of coherence and cohesion breaks. Descriptors for each grade included expressions such as "meaningfully related ideas" and "logical sequencing" (for coherence) and "linguistically connected text segments" (for cohesion). Inter-rater reliability (weighted kappas) was 0.69 for coherence and 0.83 for cohesion. Although reliability was adequate, the raters discussed and agreed on all the grades. As expected, we observe a strong correlation between the coherence and cohesion scores (Spearman correlation coefficient of 0.64). All the summaries were checked for spelling and basic grammar. For the two articles from The Economist, two experts with considerable experience with business texts in English wrote 300-word summaries following the same instructions as the students.

Comparison with expert-assigned scores. To assess the validity of the summary scoring model, we measure the correlations of the P, R, and F1 scores produced by the model against expert-provided coherence and cohesion scores, considering both the Class and Type levels of PDTB relations. Table 1 shows the results. We can make several observations. First, while all the scores correlate positively with both cohesion and coherence, the correlation for coherence is consistently lower, possibly due to the role of the raters' prior knowledge, which is unavailable to the model (also note that inter-annotator agreement is lower for coherence than for cohesion). Second, the correlation for the Type level is consistently lower than for the Class level, which can probably be traced to the PDTB parser being less accurate on Type-level relations. Lastly, we note that the highest correlation with both cohesion and coherence is achieved with the F1-score of the Class-level model. These results suggest that the proposed summary scoring model is at least partially successful in modeling both cohesion and coherence, and this in spite of the unavoidable errors of the PDTB parser and errors in similarity computations.

Comparison with Coh-Metrix indices. As mentioned in the introduction, a number of studies have used Coh-Metrix cohesion indices as predictors of both cohesion and coherence. In particular, Sladoljev-Agejev and Šnajder (2017) found modest correlations between expert-assigned coherence/cohesion and indices for connectives (additive connectives: CNCAdd, logical connectives: CNCLogic, and all connectives: CNCAll) and referential cohesion indices (mean of noun/pronoun overlaps between two sentences: CRFAOa, and content word overlap: CRFCWOA). It is therefore interesting to investigate to what extent these surface-level predictors correlate with the scores of our model.
Table 2 gives the Spearman correlation coefficients between the Coh-Metrix indices and expert-provided scores, as well as the Class- and Type-level F1-scores of the model. The Coh-Metrix indices correlate positively with both the expert-assigned scores and the scores of our model. However, while the CNCLogic and CRFAOa indices mostly correlate with the expert-assigned cohesion and coherence scores, respectively, the scores of our model mostly correlate with the CNCAdd index.

Table 2: Spearman correlation coefficients between Coh-Metrix indices (connectives: CNC, referential cohesion: CRF) and expert-assigned cohesion (Chs) and coherence (Chr) scores, as well as model-produced F1 scores at Class level (F1@C) and Type level (F1@T) of PDTB connectives. The highest correlations in each column are shown in boldface. All correlations are statistically significant (p<0.05).

Supervised scoring. Following Sladoljev-Agejev and Šnajder (2017), we frame the automated scoring as a multivariate regression task and use two regression models, one for cohesion and the other for coherence, each trained to predict the expert-assigned score on a 0-3 scale. We use an L2-regularized linear regression model (ridge regression; we use the implementation of Pedregosa et al. (2011)) and consider three sets of features: (1) five Coh-Metrix CNC and CRF indices ("CM"), (2) the F1-scores of the summary scoring model computed at Class and Type levels ("RS"), and (3) a combination of the two ("CM+RS"). We evaluate the models using a nested 10×5 cross-validation: the models' performance is measured in terms of accuracy averaged over the five outer folds, after rounding the predictions to the closest integers and limiting the scores to the 0-3 range. All the features are z-scored on the train set, and the same transformation is applied on the test set. As baselines, we use the rounded average of the expert-assigned scores.
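A sketch of this setup with scikit-learn is given below; the regularization grid and the assignment of ten inner and five outer folds are our assumptions (the paper specifies only "nested 10×5 cross-validation"), and pooling predictions via cross_val_predict only approximates the per-fold accuracy averaging:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, KFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def nested_cv_accuracy(X, y, alphas=(0.01, 0.1, 1.0, 10.0)):
    """Nested CV for the ridge models: inner folds tune the penalty,
    outer folds estimate accuracy of the rounded, clipped predictions."""
    model = make_pipeline(StandardScaler(), Ridge())  # z-scoring fit on train folds only
    inner = GridSearchCV(model, {"ridge__alpha": list(alphas)},
                         cv=KFold(n_splits=10, shuffle=True, random_state=0))
    preds = cross_val_predict(inner, X, y,
                              cv=KFold(n_splits=5, shuffle=True, random_state=0))
    preds = np.clip(np.rint(preds), 0, 3)  # round and limit to the 0-3 scale
    return float(np.mean(preds == np.asarray(y)))
```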
Table 3: Accuracy of cohesion (Chs) and coherence (Chr) score predictions for the baseline and the ridge regression models with Coh-Metrix (CM), rhetorical structure (RS), and combined (CM+RS) feature sets. The best results are shown in bold. The "*" indicates a statistically significant difference to the baseline (p<0.05, Wilcoxon signed-rank test). The differences between the regression models with the CM feature set and the models with the RS and CM+RS feature sets are not statistically significant.

Model / Features    Chs      Chr
Baseline            0.369    0.361
Ridge / CM          0.489    0.409
Ridge / RS          0.476*   0.419
Ridge / CM+RS       0.511*   0.414

Table 3 shows the results. We can make three main observations. Firstly, the cohesion models outperform the corresponding coherence models. Secondly, the only two models for which the differences against the baseline are statistically significant are the two cohesion models that use RS. This suggests that our model does provide useful signals for predicting expert-assigned cohesion scores. Thirdly, in the absence of statistical significance, the results for coherence are inconclusive, though we observe a similar trend.

Conclusion

We have described a model for coherence scoring based on a simple definition of coherence in line with cognitive theories of text comprehension. The model produces scores that correlate with expert-assigned scores and improve the cohesion prediction of a regression model: a model that uses rhetorical structure scores as features yields a statistically significant improvement over the baseline of averaged expert-assigned scores. The proposed model could provide a basis for meaningful feedback in summaries and other similar tasks, and may also be used for measuring gist reasoning in the case of a shared knowledge base between the rater and the examinee.

Acknowledgments

We thank Višnja Kabalin-Borenić for her contribution to the assessment of the summaries analysed in this work.

References

Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Computational Linguistics, 34(1):1-34.

Jill Burstein, Joel Tetreault, and Slava Andreyev. 2010. Using entity-based features to model coherence in student essays. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 681-684.

Jill Burstein, Joel Tetreault, and Martin Chodorow. 2013. Holistic discourse coherence annotation for noisy essay writing. Dialogue & Discourse, 4(2):34-52.

Sandra Bond Chapman and Raksha Anand Mudar. 2013. Discourse gist: A window into the brain's complex cognitive capacity. Discourse Studies, 15(5):519-533.

Scott Crossley and Danielle McNamara. 2010. Cohesion, coherence, and expert evaluations of writing proficiency. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 32.

Yuly Asencion Delaney. 2008. Investigating the reading-to-write construct. Journal of English for Academic Purposes, 7(3):140-150.

Peter W. Foltz, Walter Kintsch, and Thomas K. Landauer. 1998. The measurement of textual coherence with latent semantic analysis. Discourse Processes, 25(2-3):285-307.

Yanjun Gao, Patricia M. Davies, and Rebecca J. Passonneau. 2018. Automated content analysis: A case study of computer science student summaries. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 264-272.

Arthur C. Graesser, Danielle S. McNamara, Max M. Louwerse, and Zhiqiang Cai. 2004. Coh-Metrix: Analysis of text on cohesion and language. Behavior Research Methods, 36(2):193-202.
Liang Guo, Scott A. Crossley, and Danielle S. McNamara. 2013. Predicting human judgments of essay quality in both integrated and independent second language writing samples: A comparison study. Assessing Writing, 18(3):218-238.

Michael Alexander Kirkwood Halliday and Ruqaiya Hasan. 2014. Cohesion in English. Routledge.

Derrick Higgins and Jill Burstein. 2007. Sentence similarity measures for essay coherence. In Proceedings of the 7th International Workshop on Computational Semantics, pages 1-12.

Derrick Higgins, Jill Burstein, Daniel Marcu, and Claudia Gentile. 2004. Evaluating multiple aspects of coherence in student essays. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004.

Jerry R. Hobbs. 1993. Intention, information, and structure in discourse: A first draft. In Burning Issues in Discourse, NATO Advanced Research Workshop, pages 41-66. Citeseer.

Harold W. Kuhn. 1955. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2(1-2):83-97.

Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2014. A PDTB-styled end-to-end discourse parser. Natural Language Engineering, 20(2):151-184.

Anastassia Loukina, Klaus Zechner, Lei Chen, and Michael Heilman. 2015. Feature selection for automated speech scoring. In Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 12-19.

M. M. Louwerse and A. C. Graesser. 2005. Macrostructure. In Encyclopedia of Language and Linguistics.

Charles A. MacArthur, Amanda Jennings, and Zoi A. Philippakos. 2018. Which linguistic features predict quality of argumentative writing for college basic writers, and how do those features change with instruction? Reading and Writing, pages 1-22.
Nitin Madnani, Jill Burstein, John Sabatini, and Tenaha O'Reilly. 2013. Automated scoring of a summary-writing task designed to measure reading comprehension. In Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications, pages 163-168. Association for Computational Linguistics.

William C. Mann and Sandra A. Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text-Interdisciplinary Journal for the Study of Discourse, 8(3):243-281.

Danielle S. McNamara, Arthur C. Graesser, Philip M. McCarthy, and Zhiqiang Cai. 2014. Automated evaluation of text and discourse with Coh-Metrix. Cambridge University Press.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111-3119.

Lisa Mintz, Dan Stefanescu, Shi Feng, Sidney D'Mello, and Arthur Graesser. 2014. Automatic assessment of student reading comprehension from short summaries. In Educational Data Mining 2014.

Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Edouard Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12(Oct):2825-2830.

Emily Pitler, Annie Louis, and Ani Nenkova. 2010. Automatic evaluation of linguistic quality in multi-document summarization. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 544-554. Association for Computational Linguistics.
Emily Pitler and Ani Nenkova. 2008. Revisiting readability: A unified framework for predicting text quality. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 186-195. Association for Computational Linguistics.

Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind K. Joshi, and Bonnie L. Webber. 2008. The Penn Discourse TreeBank 2.0. In LREC. Citeseer.

Zahra Rahimi, Diane Litman, Richard Correnti, Elaine Wang, and Lindsay Clare Matsumura. 2017. Assessing students' use of evidence and organization in response-to-text writing: Using natural language processing for rubric-based automated scoring. International Journal of Artificial Intelligence in Education, 27(4):694-728.

Zahra Rahimi, Diane J. Litman, Elaine Wang, and Richard Correnti. 2015. Incorporating coherence of topics as a criterion in automatic response-to-text assessment of the organization of writing. In Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 20-30. Association for Computational Linguistics.

David N. Rapp, Paul van den Broek, Kristen L. McMaster, Panayiota Kendeou, and Christine A. Espin. 2007. Higher-order comprehension processes in struggling readers: A perspective for research and intervention. Scientific Studies of Reading, 11(4):289-312.

John Sabatini, Tenaha O'Reilly, and Paul Deane. 2013. Preliminary reading literacy assessment framework: Foundation and rationale for assessment and system design. ETS Research Report Series, 2013(2).

Ted J. M. Sanders and Leo G. M. Noordman. 2000. The role of coherence relations and their linguistic markers in text processing. Discourse Processes, 29(1):37-60.
Sophia Skoufaki. 2009. An exploratory application of rhetorical structure theory to detect coherence errors in L2 English writing: Possible implications for automated writing evaluation software. International Journal of Computational Linguistics & Chinese Language Processing, 14(2), Special Issue on Computer Assisted Language Learning.

Tamara Sladoljev-Agejev and Jan Šnajder. 2017. Using analytic scoring rubrics in the automatic assessment of college-level summary writing tasks in L2. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 181-186.

Swapna Somasundaran, Jill Burstein, and Martin Chodorow. 2014. Lexical chaining for measuring discourse coherence quality in test-taker essays. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 950-961.

Helen Yannakoudakis and Ted Briscoe. 2012. Modeling coherence in ESOL learner texts. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP, pages 33-43. Association for Computational Linguistics.
256,460,980
Unsupervised Entity Linking with Guided Summarization and Multiple-Choice Selection
Entity linking, the task of linking potentially ambiguous mentions in texts to corresponding knowledge-base entities, is an important component for language understanding. We address two challenges in entity linking: how to leverage wider contexts surrounding a mention, and how to deal with limited training data. We propose a fully unsupervised model called SumMC that first generates a guided summary of the contexts conditioning on the mention, and then casts the task to a multiple-choice problem where the model chooses an entity from a list of candidates. In addition to evaluating our model on existing datasets that focus on named entities, we create a new dataset that links noun phrases from WikiHow to Wikidata. We show that our SumMC model achieves state-of-the-art unsupervised performance on our new dataset and on existing datasets.
[ 221738970, 5019682, 201646309, 18309765, 17784265, 226262266, 233296875, 189999659, 15156124, 16677041, 1575573, 3081080, 158046969, 6216506, 6430811, 11275066 ]
Unsupervised Entity Linking with Guided Summarization and Multiple-Choice Selection
Young-Min Cho, Li Zhang, Chris Callison-Burch (University of Pennsylvania)
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, December 7-11, 2022

Introduction
Entity linking (EL) is an important Natural Language Processing (NLP) task that associates ambiguous mentions with corresponding entities in a knowledge base (KB, also called a knowledge graph). EL is a crucial component of many NLP applications, such as question answering (Yih et al., 2015) and information extraction (Hoffart et al., 2011). Although there have been significant and continuous developments in EL, most work requires sufficient labeled data and a well-developed KB (Zhang et al., 2021; Mulang' et al., 2020; van Hulst et al., 2020; Raiman and Raiman, 2018). However, many real-world applications, especially those in specific domains, suffer from a scarcity of both training data and fully populated KBs. Previous research has tackled this problem by learning EL models without data labeled with entity links, but it requires indirect supervision in the form of textual descriptions attached to entities in KBs, drawn from sources such as Wikipedia (Cao et al., 2017; Logeswaran et al., 2019). However, such descriptions may not be available in KBs in low-resource domains such as medicine or law. Thus, we focus on fully unsupervised EL, which only has access to the entities' names and their KB relations like subclass-of (Le and Titov, 2019; Arora et al., 2021).

One challenge of unsupervised EL is leveraging useful information from potentially noisy and misleading context (Pan et al., 2015). Specifically, a local context (the sentence containing the mention) may not be sufficient for disambiguating the target mention without the global context (other sentences in the document). For example, in Figure 1, the target mention 'band' cannot be disambiguated solely with the local context "This band is so lovely", but needs to consider the global context that also includes "I can't wait for my wedding." To address this problem, we introduce an unsupervised approach to EL that builds on the strengths of large neural language models like GPT-3 (Brown et al., 2020). We use zero-shot GPT-3 prompting for two sub-tasks. First, we perform guided summarization, which summarizes the input document conditioned on the target mention and outputs a condensed global context.
Then, we cast EL to a multiple-choice selection problem where the model chooses an entity from a list of candidates. We refer to our unsupervised EL model as SumMC (Summarization + Multiple-Choice). With a few exceptions (Ratinov et al., 2011; Cheng and Roth, 2013), the majority of EL work targets named entities, such as names of people and organizations (Mulang' et al., 2020; van Hulst et al., 2020), neglecting entities such as physical objects or concepts. To comprehensively evaluate our model, we create the first EL dataset on procedural texts, WikiHow-Wikidata, which links noun phrases from WikiHow[1] to Wikidata[2] entities (Vrandečić and Krötzsch, 2014). Our SumMC model outperforms current state-of-the-art (SoTA) unsupervised EL models on our new WikiHow-Wikidata dataset, as well as on existing benchmarks including the AIDA-CoNLL (Hoffart et al., 2011), WNED-Wiki, and WNED-Clueweb datasets (Guo and Barbosa, 2018). In addition, we also provide ablation studies to show the positive influence of generating guided summaries.[3]

Methodology
Fully unsupervised EL is the task of linking a target mention in a given document to an entity in a KB without requiring any text data to be labeled with explicit links to the KB. The only available information in the KB is the names of the entities and the relations among them. In this paper, we follow previous work (Le and Titov, 2019; Arora et al., 2021) and use Wikidata as our target KB, which defines instance-of and subclass-of relations between entities. Wikidata can be seen as a knowledge graph with entities as nodes and relations as edges, and the popularity of an entity can be represented by its degree. We now introduce SumMC, our proposed unsupervised EL model, which consists of two instances of a generative language model. The first performs guided summarization by generating a summary of the document conditioned on a mention. The second casts EL to a multiple-choice selection problem and chooses an appropriate entity from a list of candidates generated by heuristics. In our work, we use GPT-3 as the language model due to its superior performance on various NLP tasks (Brown et al., 2020).

Candidate Generation. Following previous work (Le and Titov, 2019; Arora et al., 2021), we first select all entities from Wikidata whose name or alias contains all tokens in a mention. Then, we narrow this set down to the top 20 entities with the highest degree (in-degree + out-degree) in the KB. For each entity in the final list, we produce a textual representation by concatenating the names of all related entities. For example, the representation of the candidate ribbon in Figure 1 is "ribbon: costume component, textile".

SumMC. The first application of GPT-3 performs a guided summarization of the input document. With zero-shot prompting, GPT-3 summarizes the text using the prompt "[D] Summarize the text above in one sentence: [M]", where [D] is the input document and [M] is the target mention. Here, we force GPT-3's summary to start with the mention to ensure that the conditioned summary contains both the target mention and related global context. At this point, the generated summary serves as a global context while the sentence containing the mention serves as a local context, both of which help disambiguate the target mention. The second application of GPT-3 casts the task to multiple-choice selection, following many successful cases (Ouyang et al., 2022). With the two contexts, GPT-3 transforms EL into a multiple-choice question using the prompt "According to the context above, which of the following best describes [M]?", followed by the representations of the mention [M]'s candidates as choices.
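To make the two-stage pipeline concrete, the following is a minimal sketch of candidate generation and the two zero-shot prompts described above. It is written against the legacy OpenAI completions API that was current for GPT-3; the KB helper methods (entities_containing, degree, related_names) are hypothetical stand-ins for whatever Wikidata backend is used (the paper uses KGTK; see Appendix D), the exact engine names are assumed versions of Curie and Davinci, and the lettered option layout and letter-based answer parsing are illustrative assumptions rather than details given in the paper.

from collections import namedtuple
import openai

Candidate = namedtuple("Candidate", ["qid", "name", "representation"])

def build_candidates(mention, kb, k=20):
    # Keep entities whose name or alias contains every token of the mention,
    # rank them by KB degree (in-degree + out-degree), keep the top k, and
    # attach a textual representation built from the names of related
    # entities (instance-of / subclass-of neighbors).
    tokens = mention.lower().split()
    matches = kb.entities_containing(tokens)  # hypothetical KB call
    top = sorted(matches, key=kb.degree, reverse=True)[:k]
    return [Candidate(e.qid, e.name,
                      e.name + ": " + ", ".join(kb.related_names(e)))
            for e in top]

def guided_summary(document, mention, engine="text-curie-001"):
    # Guided summarization: the completion is forced to begin with the
    # mention so the one-sentence summary stays focused on it.
    prompt = document + "\nSummarize the text above in one sentence: " + mention
    resp = openai.Completion.create(engine=engine, prompt=prompt,
                                    temperature=0.7, max_tokens=256)
    return mention + resp["choices"][0]["text"]

def choose_entity(local_ctx, global_ctx, mention, candidates,
                  engine="text-davinci-002"):
    # Multiple-choice selection over the candidate representations.
    options = "\n".join("%s. %s" % (chr(65 + i), c.representation)
                        for i, c in enumerate(candidates))
    prompt = (global_ctx + "\n" + local_ctx + "\n"
              "According to the context above, which of the following "
              "best describes " + mention + "?\n" + options + "\nAnswer:")
    resp = openai.Completion.create(engine=engine, prompt=prompt,
                                    temperature=0.7, max_tokens=256)
    answer = resp["choices"][0]["text"].strip()
    # Assume the model answers with an option letter; guard the index.
    if answer:
        idx = ord(answer[0].upper()) - 65
        if 0 <= idx < len(candidates):
            return candidates[idx]
    return None

In practice the input document would also be truncated as described in Appendix D, and a robust implementation would match the returned completion against the candidate strings rather than trusting a single letter.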
WikiHow-Wikidata Dataset
Most work on EL has targeted named entities, especially in the news. To account for more diverse entities in different styles of text, we create a human-annotated dataset called WikiHow-Wikidata that links noun phrases in procedural texts to Wikidata. Research on entities in procedural texts has long received much attention in the community (Dalvi et al., 2018; Zhang et al., 2020; Tandon et al., 2020; Zhang, 2022), yet no existing large-scale dataset provides entity links in this style of text. To create the dataset, we first extract 40,000 articles from the WikiHow corpus (Zhang et al., 2020) detailing everyday procedures. To select mentions to link, we choose the top 3 most frequently occurring nouns from each article using a part-of-speech tagger, assuming that most mentions in a document share the same word sense (Gale et al., 1992); a sketch of this mention selection heuristic appears below. Then, we ask students from a university in the U.S. to manually link these mentions to Wikidata entities. Finally, to measure and control annotation quality, we manually annotate a subset of examples beforehand as control questions. Details about our data collection process, interface, and measures for quality control can be found in Appendix B. Eventually, WikiHow-Wikidata consists of 11,287 triples of a WikiHow article, a target mention, and a Wikidata entity.

Experiments
We evaluate our SumMC model along with other strong baselines on several widely used EL datasets and on our WikiHow-Wikidata dataset.

Models.
τMIL-ND: Le and Titov (2019) introduced the first EL model that did not require an annotated dataset. Their model casts the EL task as a binary multi-instance learning (Dietterich et al., 1997) problem along with a noise-detecting classifier.
Eigentheme: Arora et al. (2021) created Eigentheme, the current state-of-the-art among fully unsupervised EL models. Representing each entity by its graph embedding, the model identifies a low-rank subspace using SVD on the embedding matrix and ranks candidates by their distance to this hyperplane.
To analyze the effect of using global context in our SumMC model, we report evaluation results for three variations.
SumMC: Our proposed model integrates the GPT-3 guided summarization and multiple-choice selection models. We use the Curie model for summarization conditioned on the target mention and the Davinci model for multiple-choice selection. As discussed before, both global and local contexts are provided.
-Guide: This is an ablated version of SumMC that generates summaries without being conditioned on the target mention. While both global and local contexts are provided, the global context is not guaranteed to be related to the target mention.
-Sum: This is another ablated version that does not summarize the document at all but directly performs multiple-choice selection, given only the local context of the mention.

Dataset. We choose AIDA-CoNLL-testb (AIDA-B), WNED-Wiki, and WNED-Clueweb (WNED-Cweb) to measure the models' performance on disambiguating named entities, and use our WikiHow-Wikidata (WikiWiki) dataset for evaluating on noun phrases.
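As referenced in the WikiHow-Wikidata section above, here is a minimal sketch of the mention selection heuristic (top 3 most frequent nouns per article). The choice of spaCy's small English model is purely an assumption for illustration; the paper does not specify which part-of-speech tagger was used.

from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed tagger; any POS tagger works

def select_mentions(article_text, k=3):
    # Count noun lemmas and return the k most frequent ones as the
    # mentions to be linked for this article.
    doc = nlp(article_text)
    counts = Counter(tok.lemma_.lower() for tok in doc
                     if tok.pos_ == "NOUN" and tok.is_alpha)
    return [noun for noun, _ in counts.most_common(k)]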
Following previous settings (Tsai and Roth, 2016; Guo and Barbosa, 2018; Arora et al., 2021), we report micro precision@1 (P@1) and categorize each mention as 'easy' or 'hard' according to whether the candidate entity with the highest degree in the knowledge graph is the correct answer. Performance on 'hard' mentions is important since it shows a model's ability on highly ambiguous mentions. 'Not-found' covers mentions whose candidate list does not contain the correct answer. 'Overall' performance is reported over all mentions, with 'Not-found' mentions counted as false predictions. The distribution of each dataset is shown in Table 1.

Results and Discussion
We show our results in Table 2. Our SumMC model achieves significantly better results than other unsupervised EL models on all evaluation datasets. Specifically, SumMC has a strong performance on 'hard' mentions. In comparison, Eigentheme, the current SoTA model, has slightly higher scores on 'easy' mentions on most datasets but performs worse on 'hard' mentions.

Comparison with Previous Models. Overall, SumMC achieves 63% precision, while Eigentheme scores 47%. Although SumMC has 1% less precision on 'easy' cases (75% vs. 76%), it outperforms Eigentheme on 'hard' cases by 26% (73% vs. 47%). Eigentheme assumes that gold entities in a document are topically related (Arora et al., 2021). It captures global context only through the relations between mentions while neglecting the text of the document. However, this assumption might not always hold. Our model, in contrast, removes this assumption by producing a guided summary of the text in the document.

Effect of Global Context. We show the results of our ablation study in Table 3. On all datasets, SumMC outperforms the variation whose summary is not guided by the mention (-Guide), which in turn outperforms the variation without summarization (-Sum). This result shows the efficacy not only of using summaries as global contexts, but also of forcing the summaries to contain information about the mention. Indeed, in many cases we find that the mention is not central to the document, so a standard summary may contain noise or insufficient signal for disambiguating the mention. Interestingly, we observe that the performance gap between variations on WikiHow-Wikidata is relatively small. We speculate that WikiHow's instructional sentences are usually self-explanatory, so the local context often provides enough information to disambiguate the mention.

Effect of Multiple-Choice Selection. Using similarity measures to link a mention to an entity is one of the most successful EL methods (Pan et al., 2015). We also examine this approach, using Sentence-BERT (Reimers and Gurevych, 2019) and cosine similarity instead of the multiple-choice selection model; a sketch of this baseline appears after the error analysis below. It reaches only 42% P@1 on the AIDA-B dataset. The text-based embedding approach may not be practical in our setting because entity candidates can only be represented by minimal text, making the text embeddings unstable.

Error Analysis. In some cases, common sense is required to disambiguate mentions. For example, "Japan" in an article about a soccer tournament should be linked to the entity "Japan national football team" instead of the country "Japan." The correct answer can be inferred from the term "Asian Cup" in the text. However, our model fails on such cases when the word 'soccer' is not included in the context. Currently, each of our multiple choices is a concatenation of the target entity and its related entities based on two KB relations: instance-of and subclass-of. However, these might be insufficient. For example, most person entities have 'human' as their only related entity, which is uninformative. Conversely, considering other relations might also introduce unnecessary noise.
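As referenced above, a minimal sketch of the Sentence-BERT similarity baseline follows. The checkpoint name is an assumption (the paper does not say which S-BERT encoder was used), and each candidate is assumed to be embedded from its short textual representation and scored against the mention's context by cosine similarity.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder choice

def similarity_baseline(context, candidates):
    # Embed the mention's context and each candidate representation,
    # then pick the candidate with the highest cosine similarity.
    ctx_emb = model.encode(context, convert_to_tensor=True)
    cand_embs = model.encode([c.representation for c in candidates],
                             convert_to_tensor=True)
    scores = util.cos_sim(ctx_emb, cand_embs)[0]
    return candidates[int(scores.argmax())]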
Conclusion
We introduce SumMC, a fully unsupervised Entity Linking model that first produces a summary of the document guided by the mention, and then casts the task to a multiple-choice format. Our model achieves new state-of-the-art performance on various benchmarks, including our new WikiHow-Wikidata, the first EL dataset on procedural texts. Notably, our approach of guided summarization may be applied to other tasks that benefit from global contexts. Future work might also extend our methods to supervised settings.

Limitations
Because we focus on fully unsupervised models, we do not consider fine-tuning GPT-3, nor do we provide a direct comparison with supervised approaches. A potential criticism of this work is our use of GPT-3: although GPT-3 is publicly available to everyone, it is not an open-source model and can be expensive to use at scale. For direct comparison, we use the candidate generation method from Le and Titov (2019) and Arora et al. (2021), which has low recall on our datasets. Although better methods exist (Sil et al., 2012; Charton et al., 2014), we do not consider them in this work.

Table 4: Example of guided summarization on the '1163testb_soccer' document in the AIDA-B dataset.

Document: SOCCER - JAPAN GET LUCKY WIN, CHINA IN SURPRISE DEFEAT. Nadim Ladki AL-AIN, United Arab Emirates 1996-12-06 Japan began the defence of their Asian Cup title with a lucky 2-1 win against Syria in a Group C championship match on Friday. But China saw their luck desert them in the second match of the group, crashing to a surprise 2-0 defeat to newcomers Uzbekistan. China controlled most of the match and saw several chances missed until the 78th minute when Uzbek striker Igor Shkvyrin took advantage of a misdirected defensive header to lob the ball over the advancing Chinese keeper and into an empty net. Oleg Shatskiku made sure of the win in injury time, hitting an unstoppable left foot shot from just outside the area. The former Soviet republic was playing in an Asian Cup finals tie for the first time. Despite winning the Asian Games title two years ago, Uzbekistan are in the finals as outsiders. Two goals from defensive errors in the last six minutes allowed Japan to come from behind and collect all three points from their opening meeting against Syria. Takuya Takagi scored the winner in the 88th minute, rising to head a Hiroshige Yanagimoto cross towards the Syrian goal which goalkeeper Salem Bitar appeared to have covered but then allowed to slip into the net. It was the second costly blunder by Syria in four minutes. Defender Hassan Abbas rose to intercept a long ball into the area in the 84th minute but only managed to divert it into the top corner of Bitar's goal. Nader Jokhadar had given Syria the lead with a well-struck header in the seventh minute. Japan then laid siege to the Syrian penalty area for most of the game but rarely breached the Syrian defence. Bitar pulled off fine saves whenever they did. Japan coach Shu Kamo said: "The Syrian own goal proved lucky for us. The Syrians scored early and then played defensively and adopted long balls which made it hard for us." Japan, co-hosts of the World Cup in 2002 and ranked 20th in the world by FIFA, are favourites to regain their title here. Hosts UAE play Kuwait and South Korea take on Indonesia on Saturday in Group A matches. All four teams are level with one point each from one game.

Mention: (none) | Summary: Japan began the defence of their Asian Cup title with a lucky 2-1 win against Syria in a Group C championship match on Friday.
Mention: Japan | Summary: Japan won 2-1 against Syria in the first game of the Asian Cup, while China lost 2-0 to Uzbekistan in the second game of the group.
Mention: Syria | Summary: Syria lost to Japan 2-1 in the Asian Cup championship, with two late goals coming from defensive errors.
Mention: Uzbekistan | Summary: Uzbekistan defeated China 2-0 in their first match of the Asian Cup, surprising many observers.
A Examples of Guided Summarization
Based on the document '1163testb_soccer' in the AIDA-B dataset, we show examples of guided summarization in Table 4. In the first example, the model generates a general document summary since it is not guided by a mention; thus, information about Uzbekistan is not shown in the summary. The latter three examples are guided by 'Japan', 'Syria', and 'Uzbekistan', and give corresponding summaries specific to the mention. We also provide example guided summaries of the AIDA-B dataset, which can be found in the uploaded file.

B Creation of WikiHow-Wikidata
Our annotation interface shows example sentences from a WikiHow article and asks the annotator to select the correct sense of one of the three most frequent nouns. Our inventory of senses is a numbered list of possible Wikidata candidate entities, along with a short description of each sense. Participants read the article and select the word sense by picking the closest match from the candidate list or choosing "No Answer" if there is none. Annotators can also input multiple answers if more than one candidate matches the correct sense inferred from the example sentences. We do not force participants to input only one answer because it is common in Wikidata for multiple entities to describe the same meaning. Our program records the WikiHow article URL, the target mention, and the Wikidata QID the students selected. We manually annotated 30 questions as control questions. The program shows a random control question for every ten questions without telling participants. The annotation program is available in the uploaded file. Eventually, we collected 31,354 responses from 521 participants. We then filtered for qualifying participants so that only those with more than 95% accuracy on the control questions remained, ending up with a cleaned set of 23,352 responses. To make the set applicable to the different models examined in our paper, we filtered the cleaned set further. We ran the candidate generation described in Section 2 and excluded entities not found in the DEEPWALK (Perozzi et al., 2014) graph embeddings trained on Wikidata by Arora et al. (2021). We also dropped mentions whose candidate list lacks the gold entity or contains only one entity. As a result, we obtained a final set of 11,287 mentions.

C Effect of GPT-3 Engine Size
We also compare the impact of GPT-3 engine size on the SumMC model. Guided summarization is very powerful regardless of the engine: changing only the engine size, our model with Ada achieves 0.631 P@1 and Babbage scores 0.633 P@1 on AIDA-B, which tie with the 0.636 P@1 achieved by Curie. This gives an alternative to users with a limited budget who still want moderate performance; Ada is 87% cheaper than Curie yet achieves an equivalent result. On the other hand, multiple-choice selection requires a large model. Compared with the 0.633 P@1 on AIDA-B with the Davinci engine, Curie and Babbage only score 0.204 and 0.196 P@1, respectively, while the Ada engine fails to complete the evaluation. Using our model's settings, it costs around $0.002 for guided summarization and $0.01 for multiple-choice selection.

D Model Setting Details
Since most of our code consists of GPT-3 API calls, SumMC has no demanding computational requirements. We used the default hyperparameter settings for both guided summarization and multiple-choice selection: temperature=0.7, max_tokens=256, top_p=1, frequency_penalty=0, and presence_penalty=0. Due to the input token limit of the GPT-3 engines, we truncated the input document to the 512 words surrounding the target mention during guided summarization; a sketch of this truncation step appears below. We used the '2021-09-13' dump of Wikidata, and used the Knowledge Graph Toolkit (Ilievski et al., 2020) to extract entities and their relations.
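As referenced above, here is a minimal sketch of the truncation window. It assumes that 'words' means whitespace-separated tokens and that the window is centered on the first occurrence of the mention; the paper does not spell out either detail.

def truncate_around_mention(document, mention, window=512):
    # Keep roughly `window` words centered on the mention so that the
    # prompt fits within the GPT-3 input token limit.
    words = document.split()
    lowered = [w.lower() for w in words]
    first = mention.lower().split()[0]
    center = lowered.index(first) if first in lowered else len(words) // 2
    start = max(0, center - window // 2)
    return " ".join(words[start:start + window])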
Figure 1: Example of an Entity Linking problem.
Figure 2: Pipeline of SumMC. Texts highlighted with green are machine generated.
Table 1: Statistics of datasets showing distributions of mention difficulty.

Table 2: Performance comparison across SoTA models, reported as Precision@1. Results for τMIL-ND and Eigentheme on the public datasets are taken from Arora et al. (2021). 'Overall' includes 'Not-found' mentions.

                       | WikiHow-Wikidata      | AIDA-B                | WNED-Wiki             | WNED-Clueweb
                       | Overall  Easy  Hard   | Overall  Easy  Hard   | Overall  Easy  Hard   | Overall  Easy  Hard
τMIL-ND                | -        -     -      | 0.45     0.70  0.19   | 0.13     -     -      | 0.27     -     -
Eigentheme             | 0.50     0.61  0.53   | 0.62     0.86  0.50   | 0.44     0.82  0.47   | 0.41     0.77  0.29
SumMC (ours)           | 0.76     0.62  0.80   | 0.64     0.80  0.71   | 0.47     0.81  0.65   | 0.48     0.75  0.60
Improvement over SoTA  | +0.26    +0.01 +0.27  | +0.02    -0.06 +0.21  | +0.03    -0.01 +0.18  | +0.07    -0.02 +0.31

Table 3: Ablation study showing the effect on SumMC of removing the mention condition on the summary (-Guide) or the global context entirely (-Sum).

                   -Guide   -Sum
WikiWiki Easy      -0.02    -0.01
AIDA-B Easy        -0.02    -0.03
WNED-Wiki Easy     -0.01    -0.07
WNED-Cweb Easy     -0.02    -0.03
Average Easy       -0.02    -0.04
WikiWiki Hard      -0.01    -0.00
AIDA-B Hard        -0.04    -0.08
WNED-Wiki Hard     -0.01    -0.06
WNED-Cweb Hard     -0.01    -0.02
Average Hard       -0.02    -0.04

Footnotes:
[1] https://www.wikihow.com/Main-Page
[2] https://www.wikidata.org/wiki/Wikidata:Main_Page
[3] The code and data are available at https://github.com/JeffreyCh0/SumMC

Acknowledgements
This research is based upon work supported in part by the DARPA KAIROS Program (contract FA8750-19-2-1004), the DARPA LwLL Program (contract FA8750-19-2-0201), the IARPA BETTER Program (contract ), and the NSF (Award 1928631). Approved for Public Release, Distribution Unlimited. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, IARPA, NSF, or the U.S. Government. We thank the students from the CIS-421/521 course in 2021 at the University of Pennsylvania for annotating the WikiHow-Wikidata dataset.

References
Akhil Arora, Alberto Garcia-Duran, and Robert West. 2021. Low-rank subspaces for unsupervised entity linking. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8037-8054, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877-1901.

Yixin Cao, Lifu Huang, Heng Ji, Xu Chen, and Juanzi Li. 2017. Bridge text and knowledge by learning multi-prototype entity mention embedding. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1623-1633, Vancouver, Canada. Association for Computational Linguistics.

Eric Charton, Marie-Jean Meurs, Ludovic Jean-Louis, and Michel Gagnon. 2014. Improving entity linking using surface form refinement. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 4609-4615, Reykjavik, Iceland. European Language Resources Association (ELRA).

Xiao Cheng and Dan Roth. 2013. Relational inference for wikification. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1787-1796, Seattle, Washington, USA. Association for Computational Linguistics.

Bhavana Dalvi, Lifu Huang, Niket Tandon, Wen-tau Yih, and Peter Clark. 2018. Tracking state changes in procedural text: a challenge dataset and models for process paragraph comprehension. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1595-1604, New Orleans, Louisiana. Association for Computational Linguistics.

Thomas G Dietterich, Richard H Lathrop, and Tomás Lozano-Pérez. 1997. Solving the multiple instance problem with axis-parallel rectangles. Artificial Intelligence, 89(1-2):31-71.

William A. Gale, Kenneth W. Church, and David Yarowsky. 1992. One sense per discourse. In Speech and Natural Language: Proceedings of a Workshop Held at Harriman, New York, February 23-26, 1992.

Zhaochen Guo and Denilson Barbosa. 2018. Robust named entity disambiguation with random walks. Semantic Web, 9(4):459-479.

Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen Fürstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 782-792, Edinburgh, Scotland, UK. Association for Computational Linguistics.

Filip Ilievski, Daniel Garijo, Hans Chalupsky, Naren Teja Divvala, Yixiang Yao, Craig Rogers, Ronpeng Li, Jun Liu, Amandeep Singh, Daniel Schwabe, and Pedro Szekely. 2020. KGTK: A toolkit for large knowledge graph manipulation and analysis. In International Semantic Web Conference, pages 278-293. Springer.

Phong Le and Ivan Titov. 2019. Distant learning for entity linking with automatic noise detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4081-4090, Florence, Italy. Association for Computational Linguistics.

Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, and Honglak Lee. 2019. Zero-shot entity linking by reading entity descriptions. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3449-3460, Florence, Italy. Association for Computational Linguistics.

Isaiah Onando Mulang', Kuldeep Singh, Chaitali Prabhu, Abhishek Nadgeri, Johannes Hoffart, and Jens Lehmann. 2020. Evaluating the impact of knowledge graph context on entity disambiguation models. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pages 2157-2160.

Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback.

Xiaoman Pan, Taylor Cassidy, Ulf Hermjakob, Heng Ji, and Kevin Knight. 2015. Unsupervised entity linking with Abstract Meaning Representation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1130-1139, Denver, Colorado. Association for Computational Linguistics.

Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 701-710.

Jonathan Raiman and Olivier Raiman. 2018. DeepType: multilingual entity linking by neural type system evolution. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.

Lev Ratinov, Dan Roth, Doug Downey, and Mike Anderson. 2011. Local and global algorithms for disambiguation to Wikipedia. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1375-1384, Portland, Oregon, USA. Association for Computational Linguistics.

Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.

Avirup Sil, Ernest Cronin, Penghai Nie, Yinfei Yang, Ana-Maria Popescu, and Alexander Yates. 2012. Linking named entities to any database. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 116-127, Jeju Island, Korea. Association for Computational Linguistics.

Niket Tandon, Keisuke Sakaguchi, Bhavana Dalvi, Dheeraj Rajagopal, Peter Clark, Michal Guerquin, Kyle Richardson, and Eduard Hovy. 2020. A dataset for tracking entities in open domain procedural text. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6408-6417, Online. Association for Computational Linguistics.

Chen-Tse Tsai and Dan Roth. 2016. Cross-lingual wikification using multilingual embeddings. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 589-598, San Diego, California. Association for Computational Linguistics.

Johannes M van Hulst, Faegheh Hasibi, Koen Dercksen, Krisztian Balog, and Arjen P de Vries. 2020. REL: An entity linker standing on the shoulders of giants. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2197-2200.

Denny Vrandečić and Markus Krötzsch. 2014. Wikidata: a free collaborative knowledgebase. Communications of the ACM, 57(10):78-85.

Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1321-1331, Beijing, China. Association for Computational Linguistics.

Li Zhang. 2022. Reasoning about procedures with natural language processing: A tutorial.

Li Zhang, Qing Lyu, and Chris Callison-Burch. 2020. Reasoning about goals, steps, and temporal ordering with WikiHow. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4630-4639, Online. Association for Computational Linguistics.

Wenzheng Zhang, Wenyue Hua, and Karl Stratos. 2021. EntQA: Entity linking as question answering. arXiv preprint arXiv:2110.02369.
62,267,053
A SPEECH-FIRST MODEL FOR REPAIR DETECTION AND CORRECTION
Interpreting fully natural speech is an important goal for spoken language understanding systems. However, while corpus studies have shown that about 10% of spontaneous utterances contain self-corrections, or REPAIRS, little is known about the extent to which cues in the speech signal may facilitate repair processing. We identify several cues based on acoustic and prosodic analysis of repairs in a corpus of spontaneous speech, and propose methods for exploiting these cues to detect and correct repairs. We test our acoustic-prosodic cues with other lexical cues to repair identification and find that precision rates of 89-93% and recall of 78-83% can be achieved, depending upon the cues employed, from a prosodically labeled corpus.
[ 14701220, 2472777, 13949323, 9107734, 5222302 ]
A SPEECH-FIRST MODEL FOR REPAIR DETECTION AND CORRECTION
Christine Nakatani, Division of Applied Sciences, Harvard University, Cambridge, MA 02138
Julia Hirschberg, 2D-450, AT&T Bell Laboratories, 600 Mountain Avenue, Murray Hill, NJ 07974-0636

Introduction
Disfluencies in spontaneous speech pose serious problems for spoken language systems. First, a speaker may produce a partial word or FRAGMENT, a string of phonemes that does not form the complete intended word. Some fragments may coincidentally match words actually in the lexicon, such as fly in Example (1); others will be identified with the acoustically closest item(s) in the lexicon, as in Example (2).[1]

(1) What is the earliest fli- flight from Washington to Atlanta leaving on Wednesday September fourth?

(2) Actual string: What is the fare fro- on American Airlines fourteen forty three
Recognized string: With fare four American Airlines fourteen forty three

[1] The presence of a word fragment in examples is indicated by the diacritic '-'. Self-corrected portions of the utterance appear in boldface. All examples in this paper are drawn from the ATIS corpus described below. Recognition output shown in Example (2) is from the system described in (Lee et al., 1990).

Even if all words in a disfluent segment are correctly recognized, failure to detect a disfluency may lead to interpretation errors during subsequent processing, as in Example (3).

(3) ... Delta leaving Boston seventeen twenty one arriving Fort Worth twenty two twenty one forty...

Here, 'twenty two twenty one forty' must be interpreted as a flight arrival time; the system must somehow choose among '21:40', '22:21', and '22:40'. Although studies of large speech corpora have found that approximately 10% of spontaneous utterances contain disfluencies involving self-correction, or REPAIRS (Hindle, 1983), little is known about how to integrate repair processing with real-time speech recognition. In particular, the speech signal itself has been relatively unexplored as a source of processing cues for the detection and correction of repairs. In this paper, we present results from a study of the acoustic and prosodic characteristics of 334 repair utterances, containing 368 repair instances, from the ARPA Air Travel Information System (ATIS) database. Our results are interpreted within our "speech-first" framework for investigating repairs, the REPAIR INTERVAL MODEL (RIM).
RIM builds upon Labov (1966) and Hindle (1983) by conceptually extending the EDIT SIGNAL HYPOTHESIS: that repairs are acoustically or phonetically marked at the point of interruption of fluent speech. After describing acoustic and prosodic characteristics of the repair instances in our corpus, we use these and other lexical cues to test the utility of our "speech-first" approach to repair identification on a prosodically labeled corpus.

Previous Computational Approaches
While self-correction has long been a topic of psycholinguistic study, computational work in this area has been sparse. Early work in computational linguistics treated repairs as one type of ill-formed input and proposed solutions based upon extensions to existing text parsing techniques such as augmented transition networks (ATNs), network-based semantic grammars, case frame grammars, pattern matching, and deterministic parsers. Recently, a two-stage method for processing repairs has been proposed. In the first stage, lexical pattern matching rules operating on orthographic transcriptions would be used to retrieve candidate repair utterances. In the second, syntactic, semantic, and acoustic information would filter true repairs from false positives found by the pattern matcher. Reported results of testing the first stage of this model, the lexical pattern matcher, are as follows: 309 of 406 utterances containing 'nontrivial' repairs in a 10,718 utterance corpus were correctly identified, while 191 fluent utterances were incorrectly identified as containing repairs. This represents recall of 76% with precision of 62%. Of the repairs correctly identified, the appropriate correction was found for 57%. Repair candidates were filtered and corrected by deleting a portion of the utterance based on the pattern matched, and then checking the syntactic and semantic acceptability of the corrected version using the syntactic and semantic components of the Gemini NLP system. The same authors also speculate that acoustic information might be used to filter out false positives for candidates matching two of their lexical patterns, repetitions of single words and cases of single inserted words, but do not report such experimentation. This work promotes the important idea that automatic repair processing can be made more robust by integrating knowledge from multiple sources. Such integration is a desirable long-term goal. However, the working assumption that correct transcriptions will be available from speech recognizers is problematic, since current recognition systems rely primarily upon language models and lexicons derived from fluent speech to decide among competing acoustic hypotheses. These systems usually treat disfluencies in training and recognition as noise; moreover, they have no way of modeling word fragments, even though these occur in the majority of repairs. We term such approaches, which rely on accurate transcription to identify repair candidates, "text-first". Text-first approaches have explored the potential contributions of lexical and grammatical information to automatic repair processing, but have largely left open the question of whether there exist acoustic and prosodic cues for repairs in general, rather than potential acoustic-prosodic filters for particular pattern subclasses. Our investigation of repairs addresses the problem of identifying such general acoustic-prosodic cues to repairs, and so we term our approach "speech-first". Finding such cues to repairs would provide early detection of repairs in recognition, permitting early pruning of the hypothesis space. A toy illustration of text-first lexical pattern matching appears below.
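The following is a toy sketch of that kind of lexical pattern matching, flagging single-word repetitions and single-word insertions as repair candidates. It is a simplified illustration only, not the actual pattern set used in the work described above.

def repair_candidates(words):
    # Flag two simple disfluency patterns in a transcribed utterance:
    #   X X    -> repetition of a single word ("the the earliest flight")
    #   X Y X  -> a single word inserted between repeated words
    candidates = []
    for i in range(len(words) - 1):
        if words[i] == words[i + 1]:
            candidates.append(("repetition", i, words[i:i + 2]))
        elif i + 2 < len(words) and words[i] == words[i + 2]:
            candidates.append(("insertion", i, words[i:i + 3]))
    return candidates

print(repair_candidates("show me the the earliest flight".split()))
# [('repetition', 2, ['the', 'the'])]

As the text above notes, such patterns overgenerate on fluent speech (e.g., deliberate repetitions), which is why a second filtering stage is needed.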
One proposal for repair processing that lends itself to both incremental processing and the integration of speech cues into repair detection is that of Hindle (1983), who defines a typology of repairs and associated correction strategies in terms of extensions to a deterministic parser. For Hindle, repairs can be (1) full sentence restarts, in which an entire utterance is reinitiated; (2) constituent repairs, in which one syntactic constituent (or part thereof) is replaced by another;[2] or (3) surface level repairs, in which identical strings appear adjacent to each other.

[2] This is consistent with Levelt (1983)'s observation that the material to be replaced and the correcting material in a repair often share structural properties akin to those shared by coordinated constituents.

A hypothesized acoustic-phonetic edit signal, "a markedly abrupt cut-off of the speech signal" (Hindle, 1983, p. 123), is assumed to mark the interruption of fluent speech (cf. Labov, 1966). This signal is treated as a special lexical item in the parser input stream that triggers certain correction strategies depending on the parser configuration. Thus, in Hindle's system, repair detection is decoupled from repair correction, which requires only that the location of the interruption is stored in the parser state. Importantly, Hindle's system allows for non-surface-based corrections and sequential application of correction rules (Hindle, 1983, p. 123). In contrast, simple surface deletion correction strategies cannot readily handle either repairs in which one syntactic constituent is replaced by an entirely different one, as in Example (4), or sequences of overlapping repairs, as in Example (5).

(4) I'd like to a flight from Washington to Denver...

(5) I'd like to book a reser- are there f- is there a first class fare for the flight that departs at six forty p.m.

Hindle's methods achieved a success rate of 97% on a transcribed corpus of approximately 1,500 sentences in which the edit signal was orthographically represented and lexical and syntactic category assignments were hand-corrected, indicating that, in theory, the edit signal can be computationally exploited for both repair detection and correction. Our "speech-first" investigation of repairs is aimed at determining the extent to which repair processing algorithms can rely on the edit signal hypothesis in practice.

The Repair Interval Model
To support our investigation of acoustic-prosodic cues to repair detection, we propose a "speech-first" model of repairs, the REPAIR INTERVAL MODEL (RIM). RIM divides the repair event into three consecutive temporal intervals and identifies time points within those intervals that are computationally critical. A full repair comprises three intervals: the REPARANDUM INTERVAL, the DISFLUENCY INTERVAL, and the REPAIR INTERVAL. Following Levelt (1983), we identify the REPARANDUM as the lexical material which is to be repaired. The end of the reparandum coincides with the termination of the fluent portion of the utterance, which we term the INTERRUPTION SITE (IS). The DISFLUENCY INTERVAL (DI) extends from the IS to the resumption of fluent speech, and may contain any combination of silence, pause fillers ('uh', 'um'), or CUE PHRASES (e.g., 'Oops' or 'I mean'), which indicate the speaker's recognition of his/her performance error.
In Example (6), for example, the reparandum occurs from 1 to 2, the DI from 2 to 3, and the repair interval from 3 to 4; the Is occurs at 2. (6) Give me airlines 1 [ flying to Sa-] RIM provides a framework for testing the extent to which cues from the speech signal contribute to the identification and correction of repair utterances. RIM incorporates two main assumptions of Hindle (1983): (1) correction strategies are linguisticallyrulegoverned, and (2) linguistic cues must be available to signal when a disfluency has occurred and to 'trigger' correction strategies. As Hindle noted, if the processing of disfluencies were not rule-governed, it would be difficult to reconcile the infrequent intrusion of disfluencies on human speech comprehension, especially for language learners, with their frequent rate of occurrence in spontaneous speech. We view Hindle's results as evidence supporting (1). Our study tests (2) by exploring the acoustic and prosodic features of repairs that might serve as a form of edit signal for rule-governed correction strategies. While Labov and Hindle proposed that an acoustic-phonetic cue might exist at precisely the Is, based on our analyses and on recent psychotinguistic experiments (Lickley et al., 1991), this proposal appears too limited. Crucially, in RIM, we extend the notion of edit signal to include any phenomenon which may contribute to the perception of an "abrupt cut-off" of the speech signal --including cues such as coarticulation phenomena, word fragments, interruption glottalization, pause, and prosodic cues which occur in the vicinity of the disfluency interval. RIM thus acknowledges the edit signal hypothesis, that some aspect of the speech signal may demarcate the computationally key juncture between the reparandum and repair intervals, while extending its possible acoustic and prosodic manifestations. Acoustic-Prosodic Characteristics of Repairs We studied the acoustic and prosodic correlates of repair events as defined in the RIM framework with the aim of identifying potential cues for automatic repair processing, extending a pilot study reported in (Nakatani and Hirschberg, 1993). Our corpus for the current study consisted of 6,414 utterances produced by 123 speakers from the ARPA Airline Travel and Information System (ATIS) database (MADCOW, 1992) collected at AT&T, BBN, CMU, SRI, and TL 334 (5.2%) of these utterances contain at least one repair~ where repair is defined as the self-correction of one or more phonemes (up to and including sequences of words) in an utterance) Orthographic transcriptions of the utterances were prepared by ARPA contractors according to standardized conventions. The utterances were labeled at Bell Laboratories for word boundaries and intonational prominences and phrasing following Pierrehumbert's description of English intonation (Pierrehumbert, 1980). Also, each of the three RIM intervals and prosodic and acoustic events within those intervals were labeled. Identifying the Reparandum Interval Our acoustic and prosodic analysis of the reparandum interval focuses on acoustic-phonetic properties of word fragments, as well as additional phonetic cues marking the reparandum offset. From the point of view of repair detection and correction, acoustic-prosodic cues to the onset of the reparandum would clearly be useful in the choice of appropriate correction strategy. 
However, recent perceptual experiments indicate that humans do not detect an oncoming disfluency as early as the onset of the reparandum (Lickley et al., 1991; Lickley and Bard, 1992). Subjects were generally able to detect disfluencies before lexical access of the first word in the repair. However, since only a small number of the test stimuli employed in these experiments contained reparanda ending in word fragments (Lickley et al., 1991), it is not clear how to generalize results to such repairs. In our corpus, 74% of all reparanda end in word fragments.4 Since the majority of our repairs involve word fragmentation, we analyzed several lexical and acoustic-phonetic properties of fragments for potential use in fragment identification. Table 1 shows the broad word class of the speaker's intended word for each fragment, where the intended word was recoverable. There is a clear tendency for fragmentation at the reparandum offset to occur in content words rather than function words. Table 2 shows the distribution of fragment repairs by length. 91% of fragments in our corpus are one syllable or less in length. Table 3 shows the distribution of initial phonemes for all words in the corpus of 6,414 ATIS sentences, and for all fragments, single syllable fragments, and single consonant fragments in repair utterances. From Table 3 we see that single consonant fragments occur more than six times as often as fricatives than as stops. However, fricatives and stops occur almost equally as the initial consonant in single syllable fragments. Furthermore, we observe two divergences from the underlying distributions of initial phonemes for all words in the corpus. Vowel-initial words show less tendency and fricative-initial words show a greater tendency to occur as fragments, relative to the underlying distributions for those classes. Two additional acoustic-phonetic cues, glottalization and coarticulation, may help in fragment identification. Bear et al. (1992) note that INTERRUPTION GLOTTALIZATION (irregular glottal pulses) sometimes occurs at the reparandum offset. This form of glottalization is acoustically distinct from LARYNGEALIZATION (creaky voice), which often occurs at the end of prosodic phrases; GLOTTAL STOPS, which often precede vowel-initial words; and EPENTHETIC GLOTTALIZATION. In our corpus, 30.2% of reparanda offsets are marked by interruption glottalization.5 Although interruption glottalization is usually associated with fragments, not all fragments are glottalized. In our database, 62% of fragments are not glottalized, and 9% of glottalized reparanda offsets are not fragments. Also, sonorant endings of fragments in our corpus sometimes exhibit coarticulatory effects of an unrealized subsequent phoneme. When these effects occur with a following pause (see below), they can be used to distinguish fragments from full phrase-final words -- such as 'fli-' from 'fly' in Example (1). To summarize, our corpus shows that most reparanda offsets end in word fragments. These fragments are usually fragments of content words (based upon transcribers' identification of intended words in our corpus), are rarely more than one syllable long, exhibit different distributions of initial phoneme class depending on their length, and are sometimes glottalized and sometimes exhibit coarticulatory effects of missing subsequent phonemes. These findings suggest that it is unlikely that word-based recognition models can be applied directly to the problem of fragment identification.

3 In our pilot study of the SRI and TI utterances only, we found that repairs occurred in 9.1% of utterances (Nakatani and Hirschberg, 1993). This rate is probably more accurate than the 5.2% we find in our current corpus, since repairs for the pilot study were identified from more detailed transcriptions than were available for the larger corpus.

4 Shriberg et al. (1992) found that 60.2% of repairs in their corpus contained fragments.

5 Shriberg et al. (1992) report glottalization on 24 of 25 vowel-final fragments.
Rather, models for fragment identification might make use of initial phoneme distributions, in combination with information on fragment length and acoustic-phonetic events at the IS. Inquiry into the articulatory bases of several of these properties of self-interrupted speech, such as glottalization and initial phoneme distributions, may further improve the modeling of fragments.

Identifying the Disfluency Interval

In the RIM model, the DI includes all cue phrases and filled and unfilled pauses from the offset of the reparandum to the onset of the repair. The literature contains a number of hypotheses about this interval (cf. Blackmer and Mitton, 1991). For our corpus, pause fillers or cue words, which have been hypothesized as repair cues, occur within the DI for only 9.8% (36/368) of repairs, and so cannot be relied on for repair detection. Our findings do, however, support a new hypothesis associating fragment repairs and the duration of pause following the IS. Table 4 shows the average duration of 'silent DIs' (those not containing pause fillers or cue words) compared to that of fluent utterance-internal silent pauses for the TI utterances. Overall, silent DIs are shorter than fluent pauses (p<.001, tstat=4.60, df=1516). If we analyze repair utterances based on occurrence of fragments, the DI duration for fragment repairs is significantly shorter than for nonfragments (p<.001, tstat=3.36, df=330). The fragment repair DI duration is also significantly shorter than fluent pause intervals (p<.001, tstat=5.05, df=1439), while there is no significant difference between nonfragment DIs and fluent utterances. So, DIs in general appear to be distinct from fluent pauses, and the duration of DIs in fragment repairs might also be exploited to identify these cases as repairs, as well as to distinguish them from nonfragment repairs. Thus, pausal duration may serve as a general acoustic cue for repair detection, particularly for the class of fragment repairs.
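The duration comparisons above are classical two-sample t-tests; the reported degrees of freedom (e.g., df = 255 + 77 - 2 = 330 for fragment versus nonfragment DIs) are consistent with a pooled-variance test. A minimal sketch, with hypothetical duration lists standing in for the labeled corpus measurements:

```python
# Pooled two-sample t-test of the kind reported for the DI duration (and,
# later, f0 and amplitude) comparisons. The duration values are made up.
from scipy import stats

fragment_di_ms = [210, 305, 180, 410, 260]      # silent DIs, fragment repairs
nonfragment_di_ms = [520, 390, 610, 450, 300]   # silent DIs, nonfragment repairs

t_stat, p_value = stats.ttest_ind(fragment_di_ms, nonfragment_di_ms,
                                  equal_var=True)  # pooled variance
df = len(fragment_di_ms) + len(nonfragment_di_ms) - 2
print(f"t = {t_stat:.2f}, df = {df}, p = {p_value:.3f}")
```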
Identifying the Repair

Several influential studies of acoustic-prosodic repair cues have relied upon lexical, semantic, and pragmatic definitions of repair types (Levelt and Cutler, 1983; Levelt, 1983). Levelt & Cutler (1983) claim that repairs of erroneous information (ERROR REPAIRS) are marked by increased intonational prominence on the correcting information, while other kinds of repairs, such as additions to descriptions (APPROPRIATENESS REPAIRS), generally are not. We investigated whether the repair interval is marked by special intonational prominence relative to the reparandum for all repairs in our corpus and for these particular classes of repair. To obtain objective measures of relative prominence, we compared absolute f0 and energy in the sonorant center of the last accented lexical item in the reparandum with that of the first accented item in the repair interval.6 We found a small but reliable increase in f0 from the end of the reparandum to the beginning of the repair (mean=+4.1 Hz, p<.01, tstat=2.49, df=327). There was also a small but reliable increase in amplitude across the DI (mean=+1.5 dB, p<.001, tstat=6.07, df=327). We analyzed the same phenomena across utterance-internal fluent pauses for the ATIS TI set and found no reliable differences in either f0 or intensity, although this may have been due to the greater variability in the fluent population. And when we compared the f0 and amplitude changes from reparandum to repair with those observed for fluent pauses, we found no significant differences between the two populations. So, while differences in f0 and amplitude exist between the reparandum offset and the repair onset, we conclude that these differences are too small to help distinguish repairs from fluent speech. Although it is not entirely straightforward to compare our objective measures of intonational prominence with Levelt and Cutler's perceptual findings, our results provide only weak support for theirs. And while we find small but significant changes in two correlates of intonational prominence, the distributions of change in f0 and energy for our data are unimodal; when we further test subclasses of Levelt and Cutler's error repairs and appropriateness repairs, statistical analysis does not support Levelt and Cutler's claim that the former -- and only the former -- group is intonationally 'marked'.

6 We performed the same analysis for the last and first syllables in the reparandum and repair, respectively, and for normalized f0 and energy; results did not substantially differ from those presented here.

Previous studies of disfluency have paid considerable attention to the vicinity of the DI but little to the repair offset. Although we did not find comparative intonational prominence across the DI to be a promising cue for repair detection, our RIM analysis uncovered one general intonational cue that may be of use for repair correction, namely the prosodic phrasing of the repair interval. We propose that phrase boundaries at the repair offset can serve to delimit the region over which subsequent correction strategies may operate. We tested the idea that repair interval offsets are intonationally marked by either minor or major prosodic phrase boundaries in two ways. First, we used the phrase prediction procedure reported by Wang & Hirschberg (1992) to estimate whether the phrasing at the repair offset was predictable according to a model of fluent phrasing.7 Second, we analyzed the syntactic and lexical properties of the first major or minor intonational phrase including all or part of the repair interval to determine whether such phrasal units corresponded to different types of repairs in terms of Hindle's typology. The first analysis tested the hypothesis that repair interval offsets are intonationally delimited by minor or major prosodic phrase boundaries. We found that the repair offset co-occurs with minor phrase boundaries for 49% of repairs in the TI set. To see whether these boundaries were distinct from those in fluent speech, we compared the phrasing of repair utterances with the phrasing predicted for the corresponding corrected version of the utterance identified by ATIS transcribers. For 40% of all repairs, an observed boundary occurs at the repair offset where one is predicted; and for 33% of all repairs, no boundary is observed where none is predicted. For the remaining 27% of repairs for which predicted phrasing diverged from observed, in 10% of cases a boundary occurred where none was predicted and in 17%, no boundary occurred when one was predicted.
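The observed-versus-predicted boundary figures above amount to a 2x2 tabulation over repair offsets. A small illustrative sketch, with hypothetical boundary labels standing in for the TI repair set:

```python
# Tabulating agreement between observed phrase boundaries at repair offsets
# and boundaries predicted for the corrected (fluent) version of each
# utterance. The label lists below are made up for illustration.
from collections import Counter

observed  = [1, 0, 1, 0, 1, 0, 0, 1]  # boundary observed at repair offset?
predicted = [1, 0, 0, 0, 1, 1, 0, 1]  # boundary predicted by fluent model?

cells = Counter(zip(observed, predicted))  # four cells of the 2x2 table
n = len(observed)
print(f"boundary where one predicted:     {cells[(1, 1)] / n:.0%}")
print(f"no boundary where none predicted: {cells[(0, 0)] / n:.0%}")
print(f"boundary where none predicted:    {cells[(1, 0)] / n:.0%}")
print(f"no boundary where one predicted:  {cells[(0, 1)] / n:.0%}")
```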
In addition to differences at the repair offset, we also found more general differences from predicted phrasing over the entire repair interval, which we hypothesize may be partly understood as follows: Two strong predictors of prosodic phrasing in fluent speech are syntactic constituency (Cooper and Sorenson, 1977; Gee and Grosjean, 1983; Selkirk, 1984), especially the relative inviolability of noun phrases (Wang and Hirschberg, 1992), and the length of prosodic phrases (Gee and Grosjean, 1983; Bachenko and Fitzpatrick, 1990). On the one hand, we found occurrences of phrase boundaries at repair offsets which occurred within larger NPs, as in Example (7), where it is precisely the noun modifier -- not the entire noun phrase -- which is corrected.8

(7) Show me all n- | round-trip flights | from Pittsburgh | to Atlanta.

7 Wang & Hirschberg use statistical modeling techniques to predict phrasing from a large corpus of labeled ATIS speech; we used a prediction tree that achieves 88.4% accuracy on the ATIS TI corpus using only features whose values could be calculated via automatic text analysis. Results reported here are for prediction on only TI repair utterances.

We speculate that, by marking off the modifier intonationally, a speaker may signal that operations relating just this phrase to earlier portions of the utterance can achieve the proper correction of the disfluency. We also found cases of 'lengthened' intonational phrases in repair intervals, as illustrated in the single-phrase reparandum in (8), where the corresponding fluent version of the reparandum is predicted to contain four phrases.

(8) What airport is it | is located | what is the name of the airport located in San Francisco

Again, we hypothesize that the role played by this unusually long phrase is the same as that of early phrase boundaries in NPs discussed above. In both cases, the phrase boundary delimits a meaningful unit for subsequent correction strategies. For example, we might understand the multiple repairs in (8) as follows: First the speaker attempts a VP repair, with the repair phrase delimited by a single prosodic phrase 'is located'. Then the initially repaired utterance 'What airport is located' is itself repaired, with the reparandum again delimited by a single prosodic phrase, 'What is the name of the airport located in San Francisco'. In the second analysis of lexical and syntactic properties, we found three major classes of phrasing behaviors, all involving the location of the first phrase boundary after the repair onset: First, for 44% (163/368) of repairs, the repair offset we had initially identified9 coincides with a phrase boundary, which can thus be said to mark off the repair interval. Of the remaining 205 repairs, more than two-thirds (140/205) have the first phrase boundary after the repair onset at the right edge of a syntactic constituent. We propose that this class of repairs should be identified as constituent repairs, rather than the lexical repairs we had initially hypothesized. For the majority of these constituent repairs (79%, 110/140), the repair interval contains a well-formed syntactic constituent (see Table 5). If the repair interval does not form a syntactic constituent, it is most often an NP-internal repair (77%, 23/30). The third class of repairs includes those in which the first boundary after the repair onset occurs neither at the repair offset nor at the right edge of a syntactic constituent.
This class contains surface or lexical repairs (where the first phrase boundary in the repair interval delimits a sequence of one or more repeated words), phonetic errors, word insertions, and syntactic reformulations (as in Example (4)).

8 Prosodic boundaries in examples are indicated by '|'.

9 Note crucially here that, in labeling repairs which might be viewed as either constituent or lexical, we preferred the shorter lexical analysis by default.

Table 5: Distribution of Syntactic Categories for Constituent Repairs (N=110)

It might be noted here that, in general, repairs involving correction of either verb phrases or verbs are far less common than those involving noun phrases, prepositional phrases, or sentences. We briefly note evidence against one alternative (although not mutually exclusive) hypothesis, that the region to be delimited by correction strategies is marked not by a phrase boundary near the repair offset, but by a phrase boundary at the onset of the reparandum. In other words, it may be the reparandum interval, not the repair interval, that is intonationally delimited. However, it is often the case that the last phrase boundary before the IS occurs at the left edge of a major syntactic constituent (42%, 87/205), even though major constituent repairs are about one third as frequent in this corpus (15%, 31/205). In contrast, phrase boundaries occur at the left edge of minor constituents 27% (55/205) of the time, whereas minor constituent repairs make up 39% (79/205) of the subcorpus at hand. We take these figures as general evidence against the outlined alternative hypothesis, establishing that the demarcation of the repair offset is a more productive goal for repair processing algorithms. Investigation of repair phrasing in other corpora covering a wider variety of genres is needed in order to assess the generality of these findings. For example, 35% (8/23) of NP-internal constituent repairs occurred within cardinal compounds, which are prevalent in the ATIS corpus due to its domain. The preponderance of temporal and locative prepositional phrases may also be attributed to the nature of the task and domain. Nonetheless, the fact that repair offsets in our corpus are marked by intonational phrase boundaries in such a large percentage of cases (82.3%, 303/368) suggests that this is a possibility worth pursuing.

Predicting Repairs from Acoustic and Prosodic Cues

Despite the small size of our sample and the possibly limited generality of our corpus, we were interested to see how well the characterization of repairs derived from RIM analysis of the ATIS corpus would transfer to a predictive model for repairs in that domain. We examined 374 ATIS repair utterances, including the 334 upon which the descriptive study presented above was based. We used the 172 TI and SRI repair utterances from our earlier pilot study (Nakatani and Hirschberg, 1993) as training data; these served a similar purpose in the descriptive analysis presented above. We then tested on the additional 202 repair utterances, which contained 223 repair instances. In our predictions we attempted to distinguish repair ISs from fluent phrase boundaries (collapsing major and minor boundaries), non-repair disfluencies,10 and simple word boundaries. We considered every word boundary to be a potential repair site.11 Data points are represented below as ordered pairs <wi,wj>, where wi represents the lexical item to the left of the potential IS and wj represents that on the right.
For each <wi,wj>, we examined the following features as potential IS predictors: (a) duration of pause between wi and wj; (b) occurrence of a word fragment(s) within <wi,wj>; (c) occurrence of a filled pause in <wi,wj>; (d) amplitude (energy) peak within wi, both absolute and normalized for the utterance; (e) amplitude of wi relative to wi-1 and to wj; (f) absolute and normalized f0 of wi; (g) f0 of wi relative to wi-1 and to wj; and (h) whether or not wi was accented, deaccented, or deaccented and cliticized. We also simulated some simple pattern matching strategies, to try to determine how acoustic-prosodic cues might interact with lexical cues in repair identification. To this end, we looked at (i) the distance in words of wi from the beginning and end of the utterance; (j) the total number of words in the utterance; and (k) whether wi or wi-1 recurred in the utterance within a window of three words after wi. We were unable to test all the acoustic-prosodic features we examined in our descriptive analysis, since features such as glottalization and coarticulatory effects had not been labeled in our database for locations other than DIs. Also, we used fairly crude measures to approximate features such as change in f0 and amplitude, since these too had been precisely labeled in our corpus only for repair locations and not for fluent speech.12 We trained prediction trees, using Classification and Regression Tree (CART) techniques (Brieman et al., 1984), on our 172-utterance training set. We first included all our potential identifiers as possible predictors. The resulting (automatically generated) decision tree was then used to predict IS locations in our 202-utterance test set.

10 These had been marked independently of our study, and include all events with some phonetic indicator of disfluency which was not involved in a self-repair, such as hesitations marked with audible breath or sharp cut-off.

11 We also included utterance-final boundaries as data points.

12 We used uniform measures for prediction, however, for both repair sites and fluent regions.

This procedure identified 186 of the 223 repairs correctly, while predicting 12 false positives and omitting 37 true repairs, for a recall of 83.4% and precision of 93.9%. Fully 177 of the correctly identified ISs were identified via presence of word fragments as well as duration of pause in the DI. Repairs not containing fragments were identified from lexical matching plus pausal duration in the DI. Since the automatic identification of word fragments from speech is an unsolved problem, we next omitted the fragment feature and tried the prediction again. The best prediction tree, tested on the same 202-utterance test set, succeeded in identifying 174 of repairs correctly -- in the absence of fragment information -- with 21 false positives and 49 omissions (78.1% recall, 89.2% precision). The correctly identified repairs were all characterized by constraints on duration of pause in the DI. Some were further identified via presence of lexical match to the right of wi within the window of three described above, and word position within utterance. Those repairs in which no lexical match was identified were characterized by lower amplitude of wi relative to wj and cliticization or deaccenting of wi. Still other repairs were characterized by more complex series of lexical and acoustic-prosodic constraints.
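The original study used the CART software of Brieman et al. (1984); as a rough modern stand-in, the same kind of word-boundary classification could be set up with scikit-learn's DecisionTreeClassifier. The feature rows below are hypothetical <wi,wj> data points, not corpus values:

```python
# Sketch of the boundary-classification setup described above, with a
# small subset of the listed features. All numbers are invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Features per boundary: [pause_ms, has_fragment, has_filled_pause,
#                         rel_amplitude_db, rel_f0_hz, lexical_match_in_window]
X_train = np.array([
    [292, 1, 0, -1.8, 3.5, 1],   # fragment repair IS
    [513, 0, 0,  0.2, 0.1, 0],   # fluent phrase boundary
    [471, 0, 1, -1.2, 4.0, 1],   # nonfragment repair IS
    [ 40, 0, 0,  0.0, 0.3, 0],   # simple word boundary
])
y_train = ["IS", "fluent", "IS", "word"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)

# Classify a new candidate boundary.
candidate = np.array([[310, 1, 0, -2.0, 2.8, 0]])
print(tree.predict(candidate))
```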
These results are, of course, very preliminary. Larger corpora must certainly be examined and more sophisticated versions of the crude measures we have used should be employed. However, as a first approximation to the characterization of repairs via both acoustic-prosodic and lexical cues, we find these results encouraging. In particular, our ability to identify repair sites successfully without relying upon the identification of fragments as such seems promising, although our analysis of fragments suggests that there may indeed be ways of identifying fragment repairs, via their relatively short DI, for example. Also, the combination of general acoustic-prosodic constraints with lexical pattern matching techniques as a strategy for repair identification appears to gain some support from our predictions. Further work on prediction modeling may suggest ways of combining these lexical and acoustic-prosodic cues for repair processing.

Discussion

In this paper, we have presented a "speech-first" model, the Repair Interval Model, for studying repairs in spontaneous speech. This model divides the repair event into a reparandum interval, a disfluency interval, and a repair interval. We have presented empirical results from acoustic-phonetic and prosodic analysis of a corpus of repairs in spontaneous speech, indicating that reparanda offsets end in word fragments, usually of (intended) content words, and that these fragments tend to be quite short and to exhibit particular acoustic-phonetic characteristics. We found that the disfluency interval can be distinguished from intonational phrase boundaries in fluent speech in terms of duration of pause, and that fragment and nonfragment repairs can also be distinguished from one another in terms of the duration of the disfluency interval. For our corpus, repair onsets can be distinguished from reparandum offsets by small but reliable differences in f0 and amplitude, and repair intervals differ from fluent speech in their characteristic prosodic phrasing. We tested our results by developing predictive models for repairs in the ATIS domain, using CART analysis; the best performing prediction strategies, trained on a subset of our data, identified repairs in the remaining utterances with recall of 78-83% and precision of 89-93%, depending upon features examined.

Table 1: Lexical Class of Word Fragments at Reparandum Offset (N=288)

Lexical Class    Tokens    %
Content          121       42%
Function         12        4%
Untranscribed    155       54%

Table 2: Length of Reparandum Offset Word Fragments (N=288)

Syllables    Tokens    %
0            113       39%
1            149       52%
2            25        9%
3            1         0.3%
Table 3: Feature Class of Initial Phoneme in Fragments by Fragment Length

Class                 % of Words    % of Frags    % of One Syll Frags    % of One Cons Frags
stop                  23%           23%           18%                    11%
vowel                 30%           25%           17%                    0%
fric                  13%           19%           20%                    73%
nasal/glide/liquid    33%           28%           45%                    15%
h                     1%            4%            2%                     1%
N                     64896         288           148                    114

Table 4: Duration of Silent DIs vs. Utterance-Internal Fluent Pauses

Pausal Juncture    Mean        Std Dev     N
Fluent             513 msec    676 msec    1186
DI                 333 msec    417 msec    332
Frags              292 msec    379 msec    255
Non-frags          471 msec    502 msec    77

Acknowledgments

We thank John Bear, Barbara Grosz, Don Hindle, Chin Hui Lee, Robin Lickley, Andrej Ljolje, Jan van Santen, Stuart Shieber, and Liz Shriberg for advice and useful comments. CART analysis employed software written by Daryl Pregibon and Michael Riley. Speech analysis was done with Entropic Research Laboratory's WAVES software.

References

J. Bachenko and E. Fitzpatrick. 1990. A computational grammar of discourse-neutral prosodic phrasing in English. Computational Linguistics, 16(3):155-170.

John Bear, John Dowding, and Elizabeth Shriberg. 1992. Integrating multiple knowledge sources for detection and correction of repairs in human-computer dialog. In Proceedings of the 30th Annual Meeting, pages 56-63, Newark DE. Association for Computational Linguistics.

Elizabeth R. Blackmer and Janet L. Mitton. 1991. Theories of monitoring and the timing of repairs in spontaneous speech. Cognition, 39:173-194.

Leo Brieman, Jerome H. Friedman, Richard A. Olshen, and Charles J. Stone. 1984. Classification and Regression Trees. Wadsworth & Brooks, Monterrey CA.

W. E. Cooper and J. M. Sorenson. 1977. Fundamental frequency contours at syntactic boundaries. Journal of the Acoustical Society of America, 62(3):683-692, September.

J. P. Gee and F. Grosjean. 1983. Performance structure: A psycholinguistic and linguistic appraisal. Cognitive Psychology, 15:411-458.

Donald Hindle. 1983. Deterministic parsing of syntactic non-fluencies. In Proceedings of the 21st Annual Meeting, pages 123-128, Cambridge MA. Association for Computational Linguistics.
William Labov. 1966. On the grammaticality of everyday speech. Paper presented at the Linguistic Society of America Annual Meeting.

C.-H. Lee, L. R. Rabiner, R. Pieraccini, and J. Wilpon. 1990. Acoustic modeling for large vocabulary speech recognition. Computer Speech and Language, 4:127-165, April.

William Levelt and Anne Cutler. 1983. Prosodic marking in speech repair. Journal of Semantics, 2:205-217.

William Levelt. 1983. Monitoring and self-repair in speech. Cognition, 14:41-104.

R. J. Lickley and E. G. Bard. 1992. Processing disfluent speech: Recognising disfluency before lexical access. In Proceedings of the International Conference on Spoken Language Processing, pages 935-938, Banff, October. ICSLP.

R. J. Lickley, R. C. Shillcock, and E. G. Bard. 1991. Processing disfluent speech: How and when are disfluencies found? In Proceedings of the Second European Conference on Speech Communication and Technology, Vol. III, pages 1499-1502, Genova, September. Eurospeech-91.

MADCOW. 1992. Multi-site data collection for a spoken language corpus. In Proceedings of the Speech and Natural Language Workshop, pages 7-14, Harriman NY, February. DARPA, Morgan Kaufmann.

Christine Nakatani and Julia Hirschberg. 1993. A speech-first model for repair identification in spoken language systems. In Proceedings of the ARPA Workshop on Human Language Technology, Plainsboro, March. ARPA.

Janet B. Pierrehumbert. 1980. The Phonology and Phonetics of English Intonation. Ph.D. thesis, Massachusetts Institute of Technology, September. Distributed by the Indiana University Linguistics Club.

E. O. Selkirk. 1984. Phonology and syntax: The relation between sound and structure. In T. Fretheim, editor, Nordic Prosody II: Proceedings of the Second Symposium on Prosody in the Nordic Languages, pages 111-140, Trondheim. TAPIR.
Elizabeth Shriberg, John Bear, and John Dowding. 1992. Automatic detection and correction of repairs in human-computer dialog. In Proceedings of the Speech and Natural Language Workshop, pages 419-424, Harriman NY. DARPA, Morgan Kaufmann.

Michelle Q. Wang and Julia Hirschberg. 1992. Automatic classification of intonational phrase boundaries. Computer Speech and Language, 6:175-196.
Data Quality Estimation Framework for Faster Tax Code Classification
This paper describes a novel framework to estimate the data quality of a collection of product descriptions to identify required relevant information for accurate product listing classification for tax-code assignment. Our Data Quality Estimation (DQE) framework consists of a Question Answering (QA) based attribute-value extraction model to identify missing attributes and a classification model to identify bad quality records. We show that our framework can accurately predict the quality of product descriptions. In addition to identifying low-quality product listings, our framework can also generate a detailed report at a category level showing missing product information, resulting in a better customer experience.
Proceedings of The Fifth Workshop on e-Commerce and NLP (ECNLP 5), May 26, 2022. ©2022 Association for Computational Linguistics.

Ravi Kondadadi (ravi.kondadadi@avalara.com), Allen Williams (allen.williams@avalara.com), and Nicolas Nicolov (nicolas.nicolov@avalara.com)
Avalara Inc, 255 South King St, Suite 1800, Seattle, WA 98104

Introduction

As a global tax compliance company, Avalara enables businesses to use the correct sales tax rate by mapping their product catalogs to a tax code taxonomy built by Avalara. The tax codes, in turn, inform the tax calculation engine how to apply the tax for a transaction. This mapping process is very laborious today for many reasons. One of the main challenges is the quality of the product catalog data we receive from customers. Often this data is vague and noisy. This can be caused by many factors:

1. Not enough context about the business: For tax code classification, we only receive a collection of product titles. This product information does not give enough context about the industry in general, causing problems in tax code mapping, especially if the language in the product information is ambiguous. This lack of context results in the mapping team having to talk to the business to get more information about the business and the corresponding industry. This is a very tedious process requiring a lot of manual effort, causing delays in the customer onboarding process.

2. Missing attributes in the product titles and descriptions: Many product descriptions do not have relevant attributes. This makes it hard for the models to map the products in the catalog to applicable tax codes. For example, a clothing product without specific attributes like knitted/crocheted cannot be mapped to the appropriate tax code.

3. Product information contains rare words and acronyms: If the product information includes words that were not seen before, acronyms, or abbreviations, it makes it harder for the model to classify.

4. The industry of the business is unknown or not currently covered by the tax code taxonomy: If the business belongs to a new sector or belongs to an industry with low tax code coverage, the mapping would be more challenging.
A model that uses these factors to identify the quality of product titles would help the mapping team request additional information for those products from the business and accelerate the onboarding process for that customer. In this paper, we describe a novel data quality estimation framework which businesses can interact with to provide all relevant information required to map all entries in a product catalog to the corresponding tax codes. Iteratively, the tool can map input product records to tax codes, identify low-quality records, and present pertinent questions to the user for the bad records. The tool repeats the process until all records are fixed and the mappings are complete for the entire product catalog. Next, we discuss our Data Quality Estimation framework. We then describe our methodology and experiments, followed by relevant recent work.

Data Quality Estimation Framework

In this section, we present details of the Data Quality Estimation framework. The framework includes a tax code classification model, an attribute-value extraction model, and a quality assessment model. Next, we will discuss each of these components in detail.

Tax Code Classification

The Avalara tax code system consists of thousands of codes hierarchically organized by categories and the nature of the business. The codes fall into a dozen major categories ranging from products to food and beverages. The automatic tax code classification system is responsible for identifying the appropriate tax code for any given product in a customer's inventory catalog. The tax codes are mapped when the customer is onboarded to the Avalara system. The classification system at Avalara uses a tiered approach where a top-level model predicts the probable category, and then a category-specific model predicts a probable tax code. This approach was chosen predominantly to keep the number of labels for each model down to a manageable number and to allow for targeted improvements for each category without interfering with other categories. Each of the models is a BERT (Devlin et al., 2019) model fine-tuned for classification.

Attribute Value Extraction (AVE)

The most important parts of product information for determining the relevant tax code are the product title and product description. An attribute is a feature that describes a specific property of a product. Some examples of attributes include brand, color, material, etc. An attribute-value is a particular value assumed by the attribute. For example, for the product title "Apple iPhone 13 Pro, 128GB, Sierra Blue", iPhone is the main entity. The corresponding attribute-values are "Apple", "13 Pro" and "Sierra Blue": Apple is the brand, "13 Pro" is the model, and "Sierra Blue" is the color. The presence of attributes is quite important for classifying a product title to the most relevant tax code. Often, we lack attribute information in the product title data we receive from our customers. This usually results in a lot of back and forth with the customer and causes significant delays in the time to fully onboard a customer. A model that can extract attribute-values from product titles and identify missing attributes would be of great help in determining the quality of the customer data. Input to the attribute-value extraction model includes the product listing and a set of attributes. These attributes come from a tax code ontology developed internally by Avalara that covers a wide range of tax code categories. The tax code classification model is used to identify the relevant category for the product listing. We can then identify the related attributes for that category from the ontology. For our experiments, we formulated attribute-value extraction as a Question Answering problem, as mentioned in (Wang et al., 2020a). The advantage of a Question Answering (QA) formulation is that it can scale well with more attributes and can work well with attributes unseen in the training data. We can treat the product listing as the document and the attribute name as the question, and retrieve the value as the answer. We used the MAVE dataset (Yang et al., 2021) for training the QA model. MAVE is a product dataset for Multi-source Attribute-Value Extraction, created by Google. MAVE is the largest product attribute-value extraction dataset by the number of attribute-value examples, containing over 3M attribute-value annotations from 2.2M Amazon product descriptions.
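To make the QA formulation concrete, here is a minimal sketch using the Hugging Face pipeline API. The checkpoint named below is a public SQuAD-tuned model standing in for the paper's MAVE-fine-tuned models, and the 0.3 score cutoff for declaring an attribute "missing" is an assumed illustration:

```python
# QA-style attribute-value extraction: the product listing is the context,
# the attribute name is the question, the answered span is the value.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-uncased-distilled-squad")

listing = "Apple iPhone 13 Pro, 128GB, Sierra Blue"
for attribute in ["brand", "model", "color", "storage capacity"]:
    result = qa(question=f"What is the {attribute}?", context=listing)
    # A low answer score can be read as the attribute being absent.
    status = result["answer"] if result["score"] > 0.3 else "(missing)"
    print(f"{attribute}: {status} (score={result['score']:.2f})")
```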
Quality Assessment

The goal of the quality assessment model is to identify the product listings that require more information in order to be correctly mapped to the relevant tax codes. We created a logistic regression (LR) (Cox, 1958) model for this classification task. Our features include prediction probabilities from the tax code classification model, missing attribute information, title length, category metadata, etc. Here is an overview of the steps involved in running the framework (a simplified sketch of this loop follows the list):

1. First, run the current tax code classification model.
2. Remove the records with good predictions based on the prediction probabilities.
3. For the remaining records, run the attribute-value extraction to identify missing attributes.
4. Identify the quality score using the quality assessment model.
5. Generate a detailed report listing relevant questions for each category to cover the majority of the bad records, and share the report with the user for feedback.
6. Repeat steps 1-5 on the updated record set from the user until the number of bad records falls below a predefined threshold.
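The sketch below shows one way the loop in steps 1-6 could be wired together; every helper name and the 0.9 probability threshold are hypothetical stand-ins for the framework's actual components:

```python
# Simplified control flow for the DQE loop. classify_tax_code,
# find_missing_attributes, build_category_report and ask_user_for_updates
# are hypothetical helpers; quality_model is an sklearn-style classifier.

def run_dqe_loop(records, quality_model, max_bad=10):
    while True:
        good, pending = [], []
        for rec in records:
            tax_code, prob = classify_tax_code(rec)          # step 1
            if prob >= 0.9:                                  # step 2 (assumed cutoff)
                good.append((rec, tax_code))
                continue
            missing = find_missing_attributes(rec)           # step 3
            features = [prob, len(missing), len(rec.title)]
            score = quality_model.predict_proba([features])[0][1]  # step 4
            pending.append((rec, missing, score))
        bad = [p for p in pending if p[2] < 0.5]
        if len(bad) <= max_bad:
            return good
        report = build_category_report(bad)                  # step 5
        records = ask_user_for_updates(report)               # step 6
```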
Experimental Results

In this section, we present the evaluation of both the attribute-value extraction and the quality assessment models.

Attribute-Value Extraction Evaluation

We evaluated various attribute-value extraction methods on two different datasets:

1. MAVE dataset: A random subset of around 10,000 records from MAVE for evaluation.
2. Compliance dataset: A subset of around 1,000 product listings that was manually annotated with attribute-value information.

We compared two different approaches for attribute-value extraction:

1. AVE-FT: This is a hybrid approach of lookup and classification. We created a list of the 1,200 most frequent attributes and the possible values they could take. For example, for the attribute "material-type" we included phrases like "plastic", "pvc", "synthetic rubber" as possible values. We first check if we can find an attribute-value in the listing using a lookup approach. We used a spaCy (Honnibal and Montani, 2017) matcher to identify such values. In order to work with attributes that are not in our list, we used a fastText (Bojanowski et al., 2017) model to identify if a product listing contains a specific attribute. The model was trained on our historical data where we know whether a specific attribute is present.

2. AVE-QA: We developed a QA model fine-tuned on a DistilBERT (AVE-QA-DISTILBERT) (Sanh et al., 2019) model on a subset of records from the MAVE dataset, as mentioned in (Wang et al., 2020a). We also created a different QA model by fine-tuning the MiniLM model (AVE-QA-MINILM) (Wang et al., 2020b).

We used the F1-score metric as defined in the SQuAD (Rajpurkar et al., 2016) evaluation (a sketch of this metric is given at the end of this subsection). Table 1 shows the evaluation of various attribute-value extraction methods on the MAVE and the Compliance datasets. We can see that the Question Answering based models outperformed the fastText baseline on both datasets, and that MiniLM performs slightly better than the DistilBERT version. Not surprisingly, both QA models performed well on the MAVE dataset, as the QA models were fine-tuned on MAVE. The Compliance dataset includes attributes related to domains like insurance and medical care, whereas the MAVE data was predominantly about e-commerce. Although our performance is currently low on the Compliance dataset, we are working on augmenting MAVE with compliance-related information and retraining the model with more compliance data.
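For reference, the token-overlap F1 from the SQuAD evaluation, used above to score predicted attribute values against gold values, can be computed as follows (an illustration of the metric, not the paper's evaluation code):

```python
# SQuAD-style token-overlap F1 between a predicted and a gold answer span.
from collections import Counter

def squad_f1(prediction: str, gold: str) -> float:
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(squad_f1("sierra blue", "Sierra Blue"))  # 1.0
print(squad_f1("blue", "Sierra Blue"))         # ~0.67
```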
ONeill (2020) proposed a decision tree algorithm to predict data quality. Schelter et al. (2018) proposed a declarative API to "unittest" data. They also discussed methods such as anomaly detection to assess data quality. Active learning (Settles, 2009) has also been used to determine most confusing entries in a dataset. Active learning suggests labeling samples that are most uncertain based on prediction probabilities. But the prediction probabilities are not always good enough to identify data quality and to understand what information is missing from the product listings. Attribute-value extraction was predominantly solved using rule-based approaches (Nadeau and Sekine 2007;Vandic et al. 2012) in the past. The disadvantage with these methods is that they are domain-specific and require extensive feature engineering. More recently, with the advances in Neural Networks-based methods, approaches like BiLSTM-CRF (Kozareva et al. 2016;Zheng et al. 2018) have been proposed. Wang et al. (2020a) formulated attribute extraction as a Question Answering problem. They proposed a multi-task framework to address generalizability. Yang et al. (2021) extended this work by adopting an ETC encoder (Ainslie et al., 2020) to generate the contextual embeddings for title and description of the product listing to handle longer descriptions. Conclusion We presented a novel data quality estimation framework for the e-commerce domain that can identify product listings with incomplete information. The framework includes a Question Answering based attribute-value extraction model trained on the MAVE dataset. We prove that our framework can reliably identify inadequate product listings resulting in faster tax code classification. Beyond mapping products to tax codes, our framework is applicable to services (in fact, our toplevel categories already include a Services group), as well as, utilities/energy, or in general any domain where items can be described in terms of attributes and values. We are applying this framework to other tax code ontologies like the Harmonized Commodity Description and Coding System (HS) which provides codes for traded products as part of international transactions. Figure 2 : 2A sample data quality report AcknowledgementsWe would like to thank our coworkers Mike Lash and Brandon Van Volkenburgh for helping us with the data annotation. We also would like to thank Vsu Subramanian, and Rajesh Muppalla for their support and valuable feedback. Etc: Encoding long and structured inputs in transformers. Joshua Ainslie, Santiago Ontañón, Chris Alberti, Vaclav Cvicek, Zachary Kenneth Fisher, Philip Pham, Anirudh Ravula, K Sumit, Qifan Sanghai, Li Wang, Yang, EMNLP. Joshua Ainslie, Santiago Ontañón, Chris Alberti, Va- clav Cvicek, Zachary Kenneth Fisher, Philip Pham, Anirudh Ravula, Sumit K. Sanghai, Qifan Wang, and Li Yang. 2020. Etc: Encoding long and structured inputs in transformers. In EMNLP. Methodologies for data quality assessment and improvement. Carlo Batini, Cinzia Cappiello, Chiara Francalanci, Andrea Maurino, ACM computing surveys (CSUR). 413Carlo Batini, Cinzia Cappiello, Chiara Francalanci, and Andrea Maurino. 2009. Methodologies for data qual- ity assessment and improvement. ACM computing surveys (CSUR), 41(3):1-52. Enriching word vectors with subword information. Transactions of the association for computational linguistics. Piotr Bojanowski, Edouard Grave, Armand Joulin, Tomas Mikolov, 5Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. 
Enriching word vectors with subword information. Transactions of the associa- tion for computational linguistics, 5:135-146. The challenges of data quality and data quality assessment in the big data era. Data science journal. Li Cai, Yangyong Zhu, 14Li Cai and Yangyong Zhu. 2015. The challenges of data quality and data quality assessment in the big data era. Data science journal, 14. The regression analysis of binary sequences. R David, Cox, Journal of the Royal Statistical Society: Series B (Methodological). 202David R Cox. 1958. The regression analysis of binary sequences. Journal of the Royal Statistical Society: Series B (Methodological), 20(2):215-232. BERT: pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, 10.18653/v1/n19-1423Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019Minneapolis, MN, USAAssociation for Computational Linguistics1Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics. 2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. Matthew Honnibal, Ines Montani, Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embed- dings, convolutional neural networks and incremental parsing. Recognizing salient entities in shopping queries. Zornitsa Kozareva, Qi Li, Ke Zhai, Weiwei Guo, Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. the 54th Annual Meeting of the Association for Computational LinguisticsShort Papers2Zornitsa Kozareva, Qi Li, Ke Zhai, and Weiwei Guo. 2016. Recognizing salient entities in shopping queries. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics (Volume 2: Short Papers), pages 107-111. A survey of named entity recognition and classification. David Nadeau, Satoshi Sekine, Lingvisticae Investigationes. 30David Nadeau and Satoshi Sekine. 2007. A survey of named entity recognition and classification. Lingvis- ticae Investigationes, 30(1):3-26. Allen Oneill, arXiv:2009.06672Data quality evaluation using probability models. arXiv preprintAllen ONeill. 2020. Data quality evaluation using prob- ability models. arXiv preprint arXiv:2009.06672. Data quality assessment. L Leo, Yang W Pipino, Richard Y Lee, Wang, Communications of the ACM. 454Leo L Pipino, Yang W Lee, and Richard Y Wang. 2002. Data quality assessment. Communications of the ACM, 45(4):211-218. Squad: 100,000+ questions for machine comprehension of text. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang, EMNLP. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In EMNLP. Distilbert, a distilled version of bert: Smaller, faster, cheaper and lighter. 
Victor Sanh, Debut, T Chaumond, Wolf, arXiv:1910.01108arXiv preprintVictor Sanh, L Debut, J Chaumond, and T Wolf. 2019. Distilbert, a distilled version of bert: Smaller, faster, cheaper and lighter. arxiv 2019. arXiv preprint arXiv:1910.01108. Automating large-scale data quality verification. Sebastian Schelter, Dustin Lange, Philipp Schmidt, Meltem Celikel, Felix Biessmann, Andreas Grafberger, Proceedings of the VLDB Endowment. the VLDB Endowment11Sebastian Schelter, Dustin Lange, Philipp Schmidt, Meltem Celikel, Felix Biessmann, and Andreas Graf- berger. 2018. Automating large-scale data quality verification. Proceedings of the VLDB Endowment, 11(12):1781-1794. Active learning literature survey. Burr Settles, Burr Settles. 2009. Active learning literature survey. Faceted product search powered by the semantic web. Damir Vandic, Decision Support Systems. 533Jan-Willem Van Dam, and Flavius FrasincarDamir Vandic, Jan-Willem Van Dam, and Flavius Fras- incar. 2012. Faceted product search powered by the semantic web. Decision Support Systems, 53(3):425- 437. Learning to extract attribute value from product via question answering: A multi-task approach. Qifan Wang, Li Yang, Bhargav Kanagal, Sumit Sanghai, Bin Sivakumar, Zac Shu, Jon Yu, Elsas, Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data MiningQifan Wang, Li Yang, Bhargav Kanagal, Sumit Sanghai, D Sivakumar, Bin Shu, Zac Yu, and Jon Elsas. 2020a. Learning to extract attribute value from product via question answering: A multi-task approach. In Pro- ceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 47-55. Minilm: Deep selfattention distillation for task-agnostic compression of pre-trained transformers. Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, Ming Zhou, Advances in Neural Information Processing Systems. 33Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020b. Minilm: Deep self- attention distillation for task-agnostic compression of pre-trained transformers. Advances in Neural In- formation Processing Systems, 33:5776-5788. Jon Elsas, and Bhargav Kanagal. 2021. Mave: A product dataset for multi-source attribute value extraction. Li Yang, Qifan Wang, Zac Yu, Anand Kulkarni, Sumit Sanghai, Bin Shu, Li Yang, Qifan Wang, Zac Yu, Anand Kulkarni, Sumit Sanghai, Bin Shu, Jon Elsas, and Bhargav Kanagal. 2021. Mave: A product dataset for multi-source attribute value extraction. Opentag: Open attribute value extraction from product profiles. Guineng Zheng, Subhabrata Mukherjee, Xin Luna Dong, Feifei Li, Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data MiningGuineng Zheng, Subhabrata Mukherjee, Xin Luna Dong, and Feifei Li. 2018. Opentag: Open attribute value extraction from product profiles. In Proceed- ings of the 24th ACM SIGKDD International Confer- ence on Knowledge Discovery & Data Mining, pages 1049-1058.
Apprentissage de relations prédicat-argument pour l'extraction d'information à partir de textes conversationnels (Learning predicate-argument relations for information extraction from conversational texts)

TALN 2005, Dourdan, 6-10 June 2005

Narjès Boufaden (boufaden@iro.umontreal.ca) and Guy Lapalme (lapalme@iro.umontreal.ca)
Département d'Informatique et de Recherche Opérationnelle, Université de Montréal, C.P. 6128, succ. Centre-Ville, Montréal, Québec, Canada H3C 3J7

Keywords: learning predicate-argument relations, information extraction

Abstract

We present the results of our approach for the learning of patterns for information extraction from conversational texts. Our three-step approach is based on a linguistic segmentation stage that defines units suitable for the pattern learning process. Anaphora resolution helps to identify more relevant relations hidden by the pronominalization of the topic. This stage precedes the pattern learning stage, which is based on Markov models that include wild card states designed to handle edited words and null transitions to handle omissions. We tested our approach on manually transcribed telephone conversations in the domain of maritime search and rescue, and succeeded in identifying extraction patterns with an F-score of 73.75%.

Introduction

We present our approach for learning extraction patterns in the context of information extraction from specialized conversational texts.
This stage is the last one of the approach we proposed for information extraction from such texts, presented in our previous work (Boufaden et al., 2002; Boufaden et al., 2005). We propose a model based on Markov models to learn predicate-argument relations (sequences of semantic classes labeling the verb and its arguments) and the roles [1] of the arguments from semantically tagged texts. These texts are manual transcriptions [2] of telephone conversations about incidents at sea. They are reports in which the speakers exchange information about an incident, for example a boat in difficulty, about the weather conditions during a search mission, or about the location of the incident. An example conversation is given in Table 1. The system consists of three stages and takes as input sequences of semantic classes labeling the keywords of the utterances, where the labels are defined in a domain ontology. The first stage segments the conversations into linguistic units analogous to the sentence in well-formed texts such as newswire (Section 2.1). This stage takes into account the discourse dimension, which is very important in this type of text (Levelt, 1989). The second performs resolution of pronominal anaphora in subject position (Section 2.2); it accounts for a particularity of conversational texts, namely the pronominalization of the topic. We show that resolving a subset of pronominal anaphora improves the learning of extraction patterns. The third uses Markov models to model the sequences of word classes and their roles for a given set of relations (Section 3). A comparison of our approach with those developed for well-formed texts shows its relevance (Section 4).

The problem of learning extraction patterns

An extraction pattern is a structure that locates the pieces of information we want to extract and establishes a relation between those pieces. It is characterized by syntactic constraints (position of the arguments in a subject-verb-object relation) and semantic constraints (type of semantic classes) that filter the subset of utterances containing information relevant to the application domain. Among the main difficulties of learning extraction patterns from well-formed texts reported in the literature (Grishman, 1998; Surdeanu et al., 2003), we single out: (1) the diversity of the sentence constructions containing the relevant information, and (2) the association of new pieces of information with objects referred to by an anaphor. In the context of conversational texts, these difficulties are amplified. On the one hand, speech disfluencies such as repetitions and repairs alter the syntactic structure of utterances, while the conversational setting spreads information over more than one utterance, for example in question-answer exchanges. On the other hand, the high frequency of pronouns, especially inside thematic units, increases the number of partial relations (as opposed to a complete relation, in which all arguments are instantiated). The approach we propose takes these difficulties into account.
First, we segment the conversations into adjacency pairs [3], detecting, for example, question-answer pairs so as to regroup in a single linguistic unit the pieces of information present in a question and its answer. Next, we resolve pronominal anaphora in subject position in order to reduce the number of partial relations. Finally, we relax the contiguity constraint on the arguments of the subject-verb-object relation by learning the patterns from semantic tag sequences of variable length.

Segmentation into linguistic units

Following work on linguistic segmentation of conversations (Stolcke, 1997), we used an order-1 Markov model over sequences of features composed of lexical cues such as ok, well and ?, which are characteristic of adjacency pairs, as well as the length of an utterance and the speaker identity. Unlike previous approaches, we did not use prosody, as it is absent from our texts. The model contains two states representing the class of independent utterances (E) and the class of utterances completing an adjacency pair (PA). We validated the model with 10-fold cross-validation on our corpus of 64 conversations (3,481 utterances), with 80% reserved for training. The average classification error over the 10 folds is 15.9%. Error analysis showed that the main source of errors is the absence of lexical cues in some utterances of class PA. In those cases, prosodic information, absent from our transcriptions, would compensate for the lack of lexical information.

Resolution of pronominal anaphora

We focus on the pronominal anaphors they, we, she, he and it in subject position [4]. Our approach relies on the thematic structure of the conversations and on a list of semantic tags [5] extracted from each utterance of a thematic unit. The importance of thematic structure has already been emphasized for coreference resolution in conversations (Grosz et al., 1995). The choice of an antecedent is driven by two compatibility constraints, semantic and thematic. The first fixes the possible associations between semantic tags and pronouns, while the second provides a default antecedent when no antecedent compatible with the anaphor has been detected in the preceding utterances of the current thematic unit, or of the previous unit on the same theme. The default values are the most frequent tags computed over 31 conversations of the corpus. We evaluated our approach on 31 conversations of our corpus, i.e. 161 pronominal anaphors in subject position. The average resolution rate obtained is 79.5%. Although this result is encouraging, some choices in our approach contributed to the errors, in particular the choice of a linear (non-hierarchical) approach to thematic segmentation (Boufaden et al., 2002) in the automatic segmentation, and the simplicity of our computation of default antecedents, which relies on corpus frequencies.
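To make the antecedent-selection procedure just described concrete, here is a minimal Python sketch. The tag inventory, the compatibility table and the defaults are invented for illustration; the paper derives its defaults from corpus frequencies, and the real tag set comes from the domain ontology.

    # Hypothetical tag inventory; the real values come from the domain ontology.
    COMPATIBLE = {
        "it":   {"BOAT", "INCIDENT"},
        "they": {"CREW", "SEARCH-UNIT"},
        "we":   {"SEARCH-UNIT"},
        "he":   {"PERSON"},
        "she":  {"PERSON"},
    }
    # Default antecedents, standing in for the corpus-frequency defaults.
    DEFAULTS = {"it": "BOAT", "they": "CREW", "we": "SEARCH-UNIT",
                "he": "PERSON", "she": "PERSON"}

    def resolve(pronoun, preceding_tags):
        """Pick the most recent semantically compatible tag seen in the
        current (or same-theme previous) thematic unit; otherwise fall
        back to the default antecedent for this pronoun."""
        for tag in reversed(preceding_tags):
            if tag in COMPATIBLE.get(pronoun, ()):
                return tag
        return DEFAULTS.get(pronoun)

    # Example: "it" after mentions of a person and a boat resolves to BOAT.
    print(resolve("it", ["PERSON", "BOAT", "WEATHER-CONDITIONS"]))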
Learning extraction patterns

The goal of this stage is to exploit the associations between semantic tags in order to learn extraction patterns that express a predicate-argument relation in which the arguments play a specific role for a given relation. Examples of the semantic tags used are shown in the conversation excerpt in Table 1.

[Table 1: Example conversation in the search-and-rescue domain; surviving fragments of the excerpt include utterances such as "b: it's on the south east coast of Newfoundland" (tagged LOCATION) inside Incident and Search-unit units. Underlined words are the pieces of information we want to extract; the labels beneath them are classes of important words; the dotted lines mark the boundaries of the linguistic units detected in Section 2.1. Incident and Search-unit are examples of relations to be modeled with Markov models.]

Approach

We considered five relations in our experiments, including: 1. Missing-object, which describes the boat in difficulty, i.e. its description and the name of its owner. 2. Incident, which describes the type of incident, its cause and the type of distress call. 3. Search-unit, which concerns the resource used in a search mission. 4. Mission, which describes the location of the mission, the weather conditions and the date. For each relation type, we modeled the tag sequences with a Markov model.

[4] Levelt (1989) shows that subject-position pronouns are often the result of the pronominalization of the topic of a thematic unit. [5] The thematic structure and the semantic tags are generated automatically by systems developed in our previous work (Boufaden et al., 2005).
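As an illustration of the decoding step, the following is a minimal order-1 Viterbi sketch in Python, treating argument roles as hidden states and semantic tags as observations. The probability tables are assumed to be given (estimated from the training subsets); the wild-card states and null transitions mentioned in the abstract are omitted for brevity, so this is a simplified stand-in, not the paper's exact model.

    import math

    def viterbi(tags, roles, start, trans, emit):
        """Order-1 decoding: hidden states are argument roles, observations
        are semantic tags. start/trans/emit map to log-probabilities
        (missing entries count as log 0)."""
        NEG = -math.inf
        V = [{r: start.get(r, NEG) + emit.get((r, tags[0]), NEG) for r in roles}]
        back = []
        for tag in tags[1:]:
            prev, col, ptr = V[-1], {}, {}
            for r in roles:
                best = max(roles, key=lambda q: prev[q] + trans.get((q, r), NEG))
                col[r] = prev[best] + trans.get((best, r), NEG) \
                         + emit.get((r, tag), NEG)
                ptr[r] = best
            V.append(col)
            back.append(ptr)
        last = max(roles, key=lambda r: V[-1][r])
        path = [last]
        for ptr in reversed(back):
            path.append(ptr[path[-1]])
        return list(reversed(path))

An order-2 variant, as tested in the experiments below, would condition each transition on the two previous roles instead of one.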
Experiments and results

We trained each model on a subset of the corpus containing positive examples of the targeted relation type. We ran two experiments to determine the Markov model order giving the best performance for each extraction pattern, testing an order-1 and an order-2 model. Given the modest size of the training corpora (fewer than 100 examples) for the different extraction patterns, we opted for cross-validation with the leave-one-out approach. The recalls [6], precisions and F-scores of the best runs are given in Table 2.

[Table 2: Recall, precision and F-score of extraction-pattern learning for the Incident, Mission, Search-unit and Missing-object forms. Recall and precision are obtained by leave-one-out cross-validation for the two Markov models; the F-score is the average of the F-scores of the best model.]

We find that the extraction pattern associated with the Search-mission relation performs best with the order-1 Markov model, while the other extraction patterns, Missing-object, Incident and Search-unit, give better results with order-2 models. The choice of model order depends on the proportion of semantic tags with several possible roles. For example, in the Mission thematic unit the most frequent tag is WEATHER-CONDITIONS, with a relative frequency of 37.7%; it has a single role in the Mission relation, unlike the tag NUMBER, which can play the role of a date or of a geographic position (in degrees, for example). The choice of order also depends on the noise introduced by speech disfluencies, notably repairs, which enlarge the context needed to disambiguate a role.

Conclusion

We analyzed the problem of learning extraction patterns for complex texts little studied in information extraction: transcriptions of conversations. We modeled extraction patterns with Markov models that assign roles to the arguments of predicates, with an F-score of 73.75%. Although Markov models have been used for pattern learning (Seymore et al., 1999), few studies have used them to learn semantic roles. Among these, we note the work of Gildea and Palmer (2002) on newswire texts, with an F-score of 82%. Other approaches have also been used, notably decision trees on well-formed texts, with an F-score of 83.7% (Surdeanu et al., 2003); however, that approach cannot handle the variable-length sequences found in conversational texts. We added a pronominal anaphora resolution stage upstream of the pattern learning stage; our approach achieved an anaphora resolution rate of 79.5%, improving the average F-score for pattern learning from 68.6%. Some work (Surdeanu & Harabagiu, 2002) has used a similar approach to improve information extraction by resolving coreferences to named entities.

Notes: [1] A role is a field name defined in a form. [2] These texts were provided by the Canadian Defence Research Centre; they are not prosodically annotated, and we did not have the original recordings to reconstruct the prosody. [3] Adjacency pairs are two turns of talk, each from a distinct speaker, where the first turn calls for a second turn of a certain type (source: http://www.sil.org/linguistics/GlossaryOfLinguisticTerms/). [6] Recall is the number of correct roles generated by the system over the number of roles in the test corpus, while precision is the number of correct roles generated by the system over the number of roles it outputs.
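The leave-one-out protocol used in the experiments above can be sketched as follows; train and decode are hypothetical stand-ins for fitting a Markov model of the chosen order and for Viterbi decoding, and the example format is assumed.

    def leave_one_out_f1(examples, train, decode):
        """examples: list of (tag_sequence, gold_roles) pairs for one
        relation; hold each example out in turn, train on the rest,
        and accumulate role-level counts."""
        correct = output = gold = 0
        for i, (tags, roles) in enumerate(examples):
            model = train(examples[:i] + examples[i + 1:])
            pred = decode(model, tags)
            correct += sum(p == g for p, g in zip(pred, roles))
            output += len(pred)
            gold += len(roles)
        precision, recall = correct / output, correct / gold
        return 2 * precision * recall / (precision + recall)

Running this once per relation and per model order would reproduce the kind of comparison reported in Table 2.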
References

Boufaden, N., Lapalme, G., and Bengio, Y. 2002. Découpage thématique des conversations : un outil d'aide à l'extraction. In Actes de la 9e conférence annuelle sur le traitement automatique des langues naturelles (TALN 2002), volume I, pages 377-382, Nancy, France.
Boufaden, N., Lapalme, G., and Bengio, Y. 2005. Repérage de mots informatifs à partir de textes conversationnels. Traitement Automatique de la Langue, 45(3).
Gildea, D., and Palmer, M. 2002. The necessity of syntactic parsing for predicate argument recognition. In Proceedings of the 40th Annual Conference of the Association for Computational Linguistics (ACL 2002), pages 239-246, Philadelphia, Pennsylvania.
Grishman, R. 1998. Information extraction and speech recognition. In Proceedings of the DARPA Broadcast Transcription and Understanding Workshop, Lansdowne, Virginia. Morgan Kaufmann Publishers.
Grosz, B., Joshi, A., and Weinstein, S. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203-225.
Levelt, W. J. M. 1989. Speaking: From Intention to Articulation. ACL-MIT Press Series in Natural Language Processing. MIT Press.
Seymore, K., McCallum, A., and Rosenfeld, R. 1999. Learning hidden Markov structure for information extraction. In Proceedings of the AAAI-99 Workshop on Machine Learning for Information Extraction, pages 37-42, Orlando, Florida.
Stolcke, A. 1997. Modeling linguistic segment and turn boundaries for n-best rescoring of spontaneous speech. In Proceedings of EUROSPEECH 1997, volume 5, pages 2779-2782, Rhodes, Greece.
Surdeanu, M., Harabagiu, S., Williams, J., and Aarseth, P. 2003. Using predicate-argument structures for information extraction. In E. Hinrichs and D. Roth, editors, Proceedings of ACL 2003, pages 8-15.
Surdeanu, M., and Harabagiu, S. M. 2002. Infrastructure for open-domain information extraction. In M. Mitchell, editor, Proceedings of HLT 2002, pages 325-330, San Diego, California.
14,967,295
Quantifying Constructional Productivity with Unseen Slot Members
This paper is concerned with the possibility of quantifying and comparing the productivity of similar yet distinct syntactic constructions, predicting the likelihood of encountering unseen lexemes in their unfilled slots. Two examples are explored: variants of comparative correlative constructions (CCs, e.g. the faster the better), which are potentially very productive but in practice lexically restricted; and ambiguously attached prepositional phrases with the preposition with, which can host both large and restricted inventories of arguments under different conditions. It will be shown that different slots in different constructions are not equally likely to be occupied productively by unseen lexemes, and suggested that in some cases this can help disambiguate the underlying syntactic and semantic structure.
[ 219307970, 5410054, 11433707, 368524, 3098317, 129886 ]
Quantifying Constructional Productivity with Unseen Slot Members. Amir Zeldes (amir.zeldes@rz.hu-berlin.de), Institut für deutsche Sprache und Linguistik, Humboldt-Universität zu Berlin, Unter den Linden 6, 10099 Berlin, Germany. Proceedings of the NAACL HLT Workshop on Computational Approaches to Linguistic Creativity, Boulder, Colorado, June 2009. Association for Computational Linguistics.

Introduction

Some syntactic constructions [1] are more productive than others. Innovative coinages like the CC The bubblier the Mac-ier (i.e. the more bubbly a program looks, the more it feels at home on a Macintosh computer) are possible, but arguably more surprising and marked in their respective construction than I have a bubblier operating system with a Mac-ier look, despite the same novel lexemes. The aim of this paper is to measure differences in the productivity of slots in such partially-filled constructions, and also to find out whether this productivity can be used to disambiguate constructions.

[1] I use the term 'construction' in a construction grammar sense following Goldberg (1995, 2006) to mean mentally stored, hierarchically organized form-meaning pairs with empty, partially-filled or fully specified lexical material. In this sense, both comparative adjectives and the pattern The [COMP] the [COMP] are constructions, and the productivity of such patterns is the quantity being examined here.

As one of the defining properties of language, productivity has received much attention in debates about the nature of derivational processes, the structure of the mental lexicon and the interpretation of key terms such as compositionality, grammaticality judgments or well-formedness. However, in computational linguistics it is probably fair to say that it can be regarded most of all as a problem. Familiar items present in training data can be listed in lexical resources, the probabilities of their different realizations can be estimated from corpus frequency distributions, etc. Thus using lexical information (statistically extracted or hand-crafted resources) is the most successful strategy in resolving syntactic ambiguities such as PP-attachment (Hindle and Rooth, 1993; Ratnaparkhi, 1998; Stetina and Nagao, 1997; Pantel and Lin, 2000; Kawahara and Kurohashi, 2005), basing decisions on previous cases with identical lexemes or additional information about those lexemes.
Yet because of productivity, even very large training data will never cover examples for all inputs being analyzed. In morphological theory (and corresponding computational linguistic practice), the situation has been somewhat different: a much larger part of the word formations encountered in data can be listed in a lexicon, with neologisms being the exception, whereas in syntax most sentences are novel, with recurring combinations being the exception. [2] The focus in morphology has therefore often been on which word formation processes are productive and to what extent, with the computational counterpart being whether or not corresponding rules should be built into a morphological analyzer. Syntacticians, conversely, may ask which apparently regular constructions are actually lexicalized or have at least partly non-compositional properties (e.g. collocations, see Choueka, 1988, Evert, 2005, 2009; multiword expressions, Sag et al., 2002; lexical bundles, Salem, 1987, Altenberg and Eeg-Olofsson, 1990, Biber et al., 1999, 2004). In morphology, the realization that productivity is a matter of degree, rather than a binary trait of word formation processes (see e.g. Bauer, 2001:125-162), has led to the exploration of quantitative measures to assess and compare different aspects of the fertility of various patterns (esp. the work of Baayen, 2001, 2009). Yet syntactic applications of these measures have only very recently been proposed, dealing with one slot of a pattern much like the stem operated on by a morphological process (cf. Barðdal, 2006; Kiss, 2007). In this paper I will examine the application of measures based on Baayen's work on morphology to different variants of syntactic constructions with more or less variable slots. The goal will be to show that different constructions have inherently different productivity rates, i.e. they are more or less liable to produce new members in their free slots. If this view is accepted, it may have consequences both theoretically (novelty in certain positions will be more surprising or marked) and practically, e.g. for parsing ambiguous structures with novel arguments, since one parse may imply a construction more apt to novelty than another. The remainder of this article is structured as follows: the next section introduces concepts underlying morphological productivity and related corpus-based measures following Baayen (2009). The following two sections adapt and apply these measures to different types of CCs (such as the faster the better) and NP/VP-attached PPs, respectively, using the BNC [3] as a database. The final section discusses the results of these studies and their implications for the study of syntactic productivity.

Morphological Productivity Measures

Productivity has probably received more attention as a topic in morphology than in syntax, if for no other reason than that novel words are comparatively rare and draw attention, whereas novel phrases or sentences are ubiquitous. The exact definition of a novel word or 'neologism' is however less than straightforward. For the present purpose we may use Bauer's (2001:97-98) working definition as a starting point:

[Productivity] is a feature of morphological processes which allow for new coinages, […] coining must be repetitive in the speech community […] Various factors appear to aid productivity: type frequency of appropriate bases, phonological and semantic transparency, naturalness, etc., but these are aids to productivity, not productivity itself.
For Bauer, productivity is defined for a morphological process, which is ideally frequently and consistently found and coins ideally transparent novel forms. The word 'coining' in this context implies that speakers use the process to construct the transparent novel forms in question, which in turn means the process has a regular output. Yet novelty, transparency and regularity are difficult to judge intuitively, and the definitions of "new" vs. "existing" words cannot be judged reliably for any one speaker, nor with any adequacy for a speaker community (cf. Bauer, 2001:34-35). This problem has led researchers to turn to corpus data as a sort of 'objective' model of language experience, in which the output of a process can be searched for, categorized and tagged for evaluation. Baayen (e.g. 2001, 2009) proposes three corpus-based measures for the productivity of word formation processes. The first measure, which he terms extent of use, is written V(C,N) and is simply the proportion of types produced by a process C in a corpus of size N, e.g. the count of different nouns in -ness out of all the types in N. According to this measure, -ness would have a much higher realized productivity than the -th in warmth, since it is found in many more words. However, this measure indiscriminately deals with all existing material (all words that have already been generated), and hence it cannot assess how likely it is that novel words will be created using a certain process. Baayen's other two measures address different aspects of this problem and rely on the use of hapax legomena, words appearing only once in a corpus. The intuitive idea behind looking at such words is that productively created items are one-off unique occurrences, and therefore they must form a subset of the hapax legomena in a corpus. Baayen uses V(1,C,N), the number of types from category C occurring once in a corpus of N words, and V(1,N), the number of all types occurring once in a corpus of N words. The second measure, termed hapax-conditioned degree of productivity, is said to measure expanding productivity, the rate at which a process is currently creating neologisms. It is computed as V(1,C,N)/V(1,N), the proportion of hapax legomena from the examined category C within the hapax legomena from all categories in the corpus. Intuitively, if the hapax legomena could be replaced by 'true' neologisms only, this would be the relative contribution of a process to productivity in the corpus, which could then be compared between different processes. [4] The third measure, category-conditioned degree of productivity, measures the potential productivity of a process, meaning how likely it is to produce new members, or how saturated a process is. This measure is the proportion of hapax legomena from category C divided by N(C), the total token count from this category: V(1,C,N)/N(C). It intuitively represents the probability that the next item from category C, found in further corpus data of the same type, is a hapax legomenon.
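As a concrete illustration, the three measures can be computed from raw token lists in a few lines of Python. This is a sketch of the definitions above, not code from the paper, and the function and variable names are my own.

    from collections import Counter

    def baayen_measures(category_tokens, corpus_tokens):
        """p1: extent of use, V(C,N) relative to all corpus types;
        p2: hapax-conditioned (expanding) productivity, V(1,C,N)/V(1,N);
        p3: category-conditioned (potential) productivity, V(1,C,N)/N(C)."""
        cat, corpus = Counter(category_tokens), Counter(corpus_tokens)
        v1_cat = sum(1 for c in cat.values() if c == 1)     # V(1,C,N)
        v1_all = sum(1 for c in corpus.values() if c == 1)  # V(1,N)
        p1 = len(cat) / len(corpus)     # types of C among all types
        p2 = v1_cat / v1_all
        p3 = v1_cat / sum(cat.values())  # hapaxes over N(C) tokens
        return p1, p2, p3

For instance, feeding in the comparative tokens found in a given CC slot as category_tokens and all comparative tokens of the corpus as corpus_tokens yields slot-level scores of the kind reported in Table 2 below.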
Baayen's measures (hence p1, p2 and p3 respectively) are appealing since they are rigorously defined, easily extractable from a corpus (provided the process can be identified reliably in the data) and offer an essential reduction of the corpus-wide behavior of a process to a number between 0 and 1: an item producing no hapax legomena would score 0 on p2 and p3, and an item with 100% hapax legomena would score 1 on p3, even if it is overall rather insignificant for productivity in the corpus as a whole (as reflected in a low score for p2). The measure p3 is the most important one in the present context, since it allows us to reason conversely that, given that an item is novel and could belong to one of two processes, it is more likely to have come from whichever process is more productive, i.e. has a higher p3 score. Indeed the assumptions made in these measures do not necessarily fit syntactic productivity at first glance: that the process in question has a clearly defined form (e.g. a suffix such as -ness), that it accommodates one variable slot (the stem, e.g. good- in goodness), and that each different stem forms a distinct type. Applying these measures to syntactic constructions requires conceptual and mathematical adaptation, which will be discussed in the next section using the example of comparative correlative constructions.

Measuring Productivity in CCs

Comparative correlatives are a complex yet typologically well attested form of codependent clauses expressing a corresponding monotonous positive or negative change in degree between two properties (see den Dikken, 2005 for a cross-linguistic overview). For example, in the faster we go, the sooner we'll get there, speed is monotonously correlated with time of arrival. A main reason for syntactic interest in this type of sentence is a proposed 'mismatch' (see McCawley, 1988, Culicover and Jackendoff, 1999) between its syntax, which appears to include two identically constructed paratactic clauses, and its semantics, which imply possible hypotaxis of the first clause as a sort of 'conditional' (if and in so much as we go fast…). Two other noteworthy features of this construction in use (the following examples are from the BNC) are the frequent lack of a verb (the larger the leaf the better quality the tea) and even of a subject noun (the sooner the better) [5], and a tendency for the (at least partial) lexicalization of certain items. The verbless variant often houses these, e.g. the more the merrier, but also with verbs, e.g. the bigger they come the harder they fall. A context-free grammar might describe a simplified variant of such clauses in the following terms:

    Scc > the COMP (NP (VP))
    S > Scc Scc

where Scc is one of the comparative correlative clauses, COMP represents either English comparative allomorph (in -er like bigger, or analytic with more or less as in more/less important), and NP and VP are optional subjects and corresponding predicates for each clause. [6] However, like many CFG rules, these rules may be too general, since it is clearly the case that not all comparatives, nouns and verbs fit in this construction, if only because of semantic limitations, i.e. they must be plausibly capable of forming a pair of monotonously correlated properties.
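A toy executable version of these rules, assuming NLTK is available and using a handful of invented terminals, might look as follows; it only demonstrates the clause skeleton, not real comparative morphology.

    import nltk

    # The terminals and NP/VP expansions are placeholders for illustration;
    # only the clause structure follows the Scc rules in the text.
    grammar = nltk.CFG.fromstring("""
    S -> SCC SCC
    SCC -> 'the' COMP | 'the' COMP NP | 'the' COMP NP VP
    COMP -> 'faster' | 'sooner' | 'better'
    NP -> 'we'
    VP -> 'go'
    """)

    parser = nltk.ChartParser(grammar)
    for tree in parser.parse("the faster we go the sooner".split()):
        print(tree)
    # (S (SCC the (COMP faster) (NP we) (VP go)) (SCC the (COMP sooner)))

The parentheses in the original rules (NP may appear without VP but not vice versa) are unfolded here into the three SCC alternatives.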
Corpus data show that comparatives in CC clauses select quite different lexemes than comparatives at large, that the first and second slots (hence cc1 and cc2) have different preferences, and that the presence or absence of a VP, and possibly a subject NP, also interact with these choices. Table 1 shows comparatives in the BNC sorted by general frequency, along with their frequencies in cc1 and cc2. Some frequent comparatives do not or hardly appear in CCs given their overall frequency [7], while others prefer a certain slot exclusively (e.g. more likely in cc2) or substantially (e.g. higher in cc1). Columns Ø1 and Ø2 show bare comparatives (no subject or verb) in cc1 or cc2, and the next two columns show subsets of bare cc1 or cc2 given that the other clause is also bare. The last columns show CCs with only NPs and no verb, either in one clause or both. In bare CCs we find that better selects cc2 exclusively, in fact making up some 88% of cc2s in this construction (the COMP the better) in the BNC. A look at the list of lexemes typical of cc1 vs. cc2 shows that cc1 tends to express a dependent variable with spatiotemporal semantics (higher, older, longer), whereas cc2 typically shows an independent evaluative (better, more likely), though many common lexemes appear in both. [8] Although the results imply varying degrees of preference and lexicalization in different constructions, they do not yet tell us whether or not, or better how likely, we can expect to see new lexemes in each slot. This can be assessed using Baayen's measures, by treating each construction as a morphological process and the comparative slot as the lexical base forming the type (see Kiss, 2007 for a similar procedure). [9] The results in Table 2 show that all constructions are productive to some extent, though clearly some yield fewer new types. p1 and p2 show that CCs are responsible for very little of the productive potential of comparatives in the corpus. This is not only a function of the relative rarity of CCs: if we look at their rate of vocabulary growth (Figure 1), general comparatives gather new types more rapidly than CCs even for the same sample size. [10] Using a Finite Zipf-Mandelbrot Model (FZM, Evert, 2004), we can extrapolate from the observed data to predict that the gap will grow with sample size.

[Table 2, first row (comparatives overall): tokens 266703, types 5988, hapaxes 2616, p1 0.00772, p2 0.00651, p3 0.0098.]

[7] Occurrences of items which cannot serve attributively, such as more with no adjective and sooner, have been excluded, since they are not comparable to the other items. Most occurrences of the most frequent item, further, should arguably be excluded too, since it is mostly used as a lexicalized adverb and not a canonical comparative. However, comparative usage is also well attested, e.g. he was going much further than that. [8] I thank Livio Gaeta and an anonymous reviewer for commenting on this point. [9] In fact, one could also address the productivity of the construction as a whole by regarding each argument tuple as a type, e.g. <more ergonomic, better> could be a hapax legomenon despite better appearing quite often. Since each slot multiplies the chances a construction has to be unique, the nth root of the value of the measure would have to be taken in order to maintain comparability, thus the square root of pk for 2 slots, the cube root for 3 slots and so on. Another option, if one is interested in the chance that any particular slot will be unique, is to take the average of pk for all slots.
However, for the present purpose, the individual score of each slot is more relevant. [10] The comparative curve is taken from 2000 occurrences evenly distributed across the sections of the BNC, to correspond topically to the CCs, which cover the whole corpus.

[Table 2: Productivity scores for comparatives, CC clauses in general, and specifically for bare CCs.]

However, p3 shows the surprising result that CCs have more potential productivity than comparatives in general, with the bare cc1 slot leading, both general CC slots somewhat behind, and the bare cc2 last. This means our data does not begin to approach covering this category: the next CC is much likelier to be novel, given the data we've seen so far. With this established, the question arises whether a CFG rule like the one above should take account of the likelihood of each slot to contain novel vs. familiar members. For instance, if a PCFG parser correctly identifies a novel comparative and the input matches the rule, should it be more skeptical of an unseen bare cc1 than an unseen bare cc2 (keeping in mind that the latter have so far been better in 88% of cases)? To illustrate this, we may consider the output of a PCFG parser (in this case the Stanford Parser, Klein and Manning, 2003) for an ambiguous example. Since CCs are rather rare, PCFGs will tend to prefer most other parses of a sentence, if these are available. Where no other reading is available, we may get the expected two-clause structure, as in the example in Figure 2.
Other approaches supplement this information with hand-built or automatically acquired lexical resources and collocation databases to determine the relationship between the lexemes, or, for lexemes unattested in the tuples, for semantically similar ones (Stetina and Nagao, 1997;Pantel and Lin, 2000). Although the state of the art in lexically based systems actually approaches human performance, they lose their power when confronted with unfamiliar items. For example, what is the likeliest attachment for the following BNC example: I can always eat dim-sum with my dybbuk? It is safe to assume that the (originally Hebrew) loan-word dybbuk '(demonic) possession' does not appear in most training datasets, though dim-sum is attested more than once as an object of eat in the BNC. Crucially, the triple (eat, dim-sum, with) alone cannot reliably resolve the attachment site (consider soy-sauce vs. chopsticks as n2). It is thus worth examining how likely a novel item is in the relevant slot of each reading's construction. The rest of this section therefore examines productivity scores for the slots in eat NP with NP and their correlation with different readings as an example. Since these cases cannot be identified automatically in an unparsed text with any reliability, and since there is not enough hand-parsed data containing these constructions, a conservative proximity assumption was made (cf. Ratnaparkhi, 1998) and all occurrences of eat and related forms within ten words of with and with no intervening punctuation in the BNC were evaluated and tagged manually for this study. This also allowed for head-noun and anaphor resolution to identify the referent of a slot in the case of pronominal realization; thus all slot types in the data including pronouns are evaluated in terms of a single head noun. Results show that out of 131 hits, the largest group of PPs (59 tokens) were object noun modifiers, almost all comitatives 13 , justifying the prevalent heuristic to prefer low attachment. However verbal instrumentals and high comitatives (25 and 23 respectively) come at a very close second. The remaining 24 cases were adverbial modifications (e.g. with enthusiasm). Looking at hapax legomena in the respective slots we can calculate the measures in Table 3. The scores show that the verbal instrumental reading is the least likely to exhibit a novel head at the n2 slot, which is semantically plausible -the repertoire of eating instruments is rather conventionalized and slow to expand. The comitative reading is very likely to innovate in n2, but much less so in n1, fitting e.g. the "dim-sum with dybbuk"scenario. This fits the fact that one may eat together with many distinct persons etc., but when Table 3. p3 for the first and second head noun in nominal and three types of verbal PP attachment for eat n with n in the BNC. these are specified, the exact nature of the meal or food is often left unspecified 14 . The adverbial reading is likely to innovate in both slots, since many ways or circumstances of eating can be specified and these hardly restrict the choice of object for eat. Interestingly, the choice of object maintains a very stable productivity in all but the high comitative construction. n2 innovation in nominal modifiers is actually lower than for adverbials and comitatives, meaning low attachment may not be the preferred choice for unknown nouns. 
While these results imply what some reasonable expectations may be of finding a novel member of each slot in each reading, they do not take the identity of the lexemes into account. In order to combine the general information about the slot with knowledge of a known slot member, we may simultaneously attempt to score the productivity of the construction's components, namely the noun or verb in question, for PP modifiers. This raises the problem of what exactly should be counted. One may argue that high-attached comitatives and adverbials should be counted separately, since they are almost always optional regardless of the verb (one can equally well eat or do anything else with someone in some way), unlike instrumentals, which may be more closely linked to the verb. On the other hand, the exact constructional sense of such PPs is colored by the verb, e.g. eating a meal with someone has a rather particular meaning (as opposed to coincidentally performing the act of eating alongside another eater). If the decision is only between high and low attachment, then grouping all variants together may be sensible in any case. Depending on the argument and verb, it is possible to make fine distinctions, provided enough cases are found. For dim-sum, for example, no cases of NP-modifying with (novel or otherwise) are found, making the (correct) high comitative reading likely. By contrast, for the head noun fish, which is a common object of eat, 37 hits with with-PPs are found in the BNC, forming 32 prepositional object noun types, of which 28 are hapax legomena in this slot. All high readings of with-PPs with eat (including intransitive eat) form 92 tokens, 68 noun types and 44 hapax legomena. Thus fish + PP scores p3 = 0.756, while eat + PP scores 0.478, corresponding to less productivity. This means novel prepositional objects are substantially less probable for the high attachment given that the direct object is fish.
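The fish vs. eat comparison just quoted reduces to a one-line computation; the following sketch reproduces those figures from their counts (variable names are mine).

    from collections import Counter

    def p3(slot_fillers):
        """Potential productivity of a slot: hapax types over tokens."""
        counts = Counter(slot_fillers)
        return sum(1 for c in counts.values() if c == 1) / len(slot_fillers)

    # Reproducing the p3 values quoted above from the reported counts
    # (0.756 and 0.478, allowing for rounding):
    print(28 / 37)   # fish + with-PP slot (28 hapaxes / 37 tokens)
    print(44 / 92)   # eat + high with-PP slot (44 hapaxes / 92 tokens)

Given a novel n2, one would then prefer the reading whose slot scores higher on p3, here the low (noun-modifying) reading for fish, since its slot is the more hospitable to unseen fillers.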
Conclusion

The above results show that similar yet distinct constructions, which vary slightly in either constituent structure (high vs. low attachment), semantics (comitative or instrumental PPs), number of arguments (more and less bare CCs) or position (cc1 vs. cc2), show very different lexical behavior, exhibiting more or less variety in different slots and differing proportions of hapax legomena. The inference which should become apparent from the sharp contrasts in slot scores (especially in p3), given the size of the data, is that these differences are not coincidental, but are indicative of inherently different productivity rates for each slot in each construction. These properties need not be attributed to system-internal, linguistic reasons alone, but may also very well reflect world knowledge and pragmatic considerations. [15] However, from a construction grammar point of view, the entrenchment of these constructions in speakers, and therefore in data, is inextricably connected with interaction in the world, thus making syntactic productivity a plausible and relevant quantity both theoretically and potentially for NLP practice. It remains to be seen whether or not productivity scores can help automatically disambiguate structures with unseen arguments (e.g. PP attachment with unencountered n2), or even distinguish semantic classes such as comitatives, instrumentals etc. for novel nouns, for which a classification into helpful semantic categories (animate, human and so forth) is not available. A large-scale evaluation of this question will depend on how easily and reliably productivity scores can be extracted automatically from data for the relevant constructions.

[Figure 1: Vocabulary growth curves and FZM extrapolations for comparatives in cc1, cc2 and at large in the BNC.]
[Figure 2: Stanford Parser tree for The closer it gets, the more worried I become.]
[Figure 3: Stanford Parser tree for The less cloudy, the better views can be seen to the south.]

Notes: [2] Compounding represents an exception to this generalization, standing, at least for some languages, between syntax and word formation and often generating an unusually large number of items unlisted in lexica (cf. Bauer, 2001:36-7). [3] The British National Corpus (http://www.natcorp.ox.ac.uk/), with over 100 million tokens of British English. [4] This statement must be restricted somewhat: in items showing multiple processes, e.g. bullishness, the processes associated with the suffixes -ish and -ness are not statistically independent, creating a difficulty in using such cases for the comparison of these two processes (see Baayen, 2009). In syntax the extent of this problem is unclear, since even occurrences of NPs and VPs are not independent of each other. [5] The latter form has been analyzed as a case of ellipsis of the copula be (Culicover and Jackendoff, 1999:554; similarly for German: Zifonun et al., 1997:2338). It is my position that this is not the case, as the bare construction has distinct semantic properties as well as different productive behavior; see below. [6] These rules should be understood as agnostic with respect to the parataxis/hypotaxis question mentioned above. The parentheses mean NP may appear without VP, but not vice versa. The X nodes conform to the Penn Treebank II Bracketing Guidelines for CCs (Bies et al., 1995:178). [12] Though in some cases the distinction is not so tenable, e.g. we have not signed a settlement agreement with them (Manning and Schütze, 1999:286), where with them can arguably be attached low or high. Incidentally, the 'fish' examples are actually attested in the BNC in a linguistic context. [13] Only 4 hits were truly non-comitative noun modifiers, e.g. <eat, anything, with, preservatives>, where a comitative reading is clearly not intended. Since the group was so small, all noun modifiers have been treated here together. [14] In fact, the non-food-specific nouns breakfast, lunch, dinner, dish and meal cover 16 of the high comitative n1 tokens, almost 70%. [15] In this context it is worth mentioning that similar ongoing examinations of German CCs reveal different lexical preferences, implying that some of this behavior is language dependent and to some extent language-internally lexicalized.

References

Altenberg, Bengt and Mats Eeg-Olofsson. 1990. Phraseology in Spoken English. In: Jan Aarts and Willem Meijs, editors, Theory and Practice in Corpus Linguistics. Rodopi, Amsterdam: 1-26.
Atterer, Michaela and Hinrich Schütze. 2007. Prepositional Phrase Attachment without Oracles. Computational Linguistics, 33(4): 469-476.
Baayen, R. Harald. 2001. Word Frequency Distributions. (Text, Speech and Language Technologies 18.) Kluwer Academic Publishers, Dordrecht / Boston / London.
Baayen, R. Harald. 2009. Corpus Linguistics in Morphology: Morphological Productivity. In: Anke Lüdeling and Merja Kytö, editors, Corpus Linguistics. An International Handbook, vol. 2. Mouton de Gruyter, Berlin: 899-919.
Barðdal, Jóhanna. 2006. Predicting the Productivity of Argument Structure Constructions. In: The 32nd Annual Meeting of the Berkeley Linguistics Society. Berkeley Linguistics Society, Berkeley. Available at: http://ling.uib.no/barddal/BLS-32.barddal.pdf.
Bauer, Laurie. 2001. Morphological Productivity. (Cambridge Studies in Linguistics 95.) Cambridge University Press, Cambridge, UK.
Bies, Ann, Mark Ferguson, Karen Katz and Robert MacIntyre. 1995. Bracketing Guidelines for Treebank II Style Penn Treebank Project. Technical report, University of Pennsylvania.
Biber, Douglas, Susan Conrad and Viviana Cortes. 2004. If you look at…: Lexical Bundles in University Teaching and Textbooks. Applied Linguistics, 25(3): 371-405.
Biber, Douglas, Stig Johansson, Geoffrey Leech, Susan Conrad and Edward Finegan. 1999. The Longman Grammar of Spoken and Written English. Longman, London.
Choueka, Yaacov. 1988. Looking for Needles in a Haystack. In: Proceedings of RIAO '88. Cambridge, MA: 609-623.
Culicover, Peter W. and Ray Jackendoff. 1999. The View from the Periphery: The English Comparative Correlative. Linguistic Inquiry, 30(4): 543-571.
den Dikken, Marcel. 2005. Comparative Correlatives Comparatively. Linguistic Inquiry, 36(4): 497-532.
Evert, Stefan. 2004. A Simple LNRE Model for Random Character Sequences. In: Proceedings of JADT 2004: 411-422.
Evert, Stefan. 2005. The Statistics of Word Cooccurrences: Word Pairs and Collocations. PhD dissertation, University of Stuttgart.
Evert, Stefan. 2009. Corpora and Collocations. In: Anke Lüdeling and Merja Kytö, editors, Corpus Linguistics. An International Handbook, vol. 2. Mouton de Gruyter, Berlin: 1212-1248.
Goldberg, Adele E. 1995. Constructions: A Construction Grammar Approach to Argument Structure. University of Chicago Press, Chicago and London.
Goldberg, Adele E. 2006. Constructions at Work: The Nature of Generalization in Language. Oxford University Press, Oxford, UK.
Hindle, Donald and Mats Rooth. 1993. Structural Ambiguity and Lexical Relations. Computational Linguistics, 19(1): 103-130.
Kawahara, Daisuke and Sadao Kurohashi. 2005. PP-Attachment Disambiguation Boosted by a Gigantic Volume of Unambiguous Examples. In: Proceedings of the 2nd International Joint Conference on Natural Language Processing (IJCNLP-05): 188-198.
Kiss, Tibor. 2007. Produktivität und Idiomatizität von Präposition-Substantiv-Sequenzen. Zeitschrift für Sprachwissenschaft, 26(2): 317-345.
Klein, Dan and Christopher D. Manning. 2003. Accurate Unlexicalized Parsing. In: Proceedings of the 41st Meeting of the Association for Computational Linguistics: 423-430.
Manning, Christopher D. and Hinrich Schütze. 1999. Foundations of Statistical Natural Language Processing. MIT Press, Cambridge, MA.
McCawley, James D. 1988. The Comparative Conditional in English, German and Chinese. In: Proceedings of the Fourteenth Annual Meeting of the Berkeley Linguistics Society. Berkeley Linguistics Society, Berkeley: 176-187.
Pantel, Patrick and Dekang Lin. 2000. An Unsupervised Approach to Prepositional Phrase Attachment using Contextually Similar Words. In: Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics: 101-108.
Ratnaparkhi, Adwait. 1998. Statistical Models for Unsupervised Prepositional Phrase Attachment. In: Proceedings of COLING-ACL98, Montreal, Canada: 1079-1085.
Ratnaparkhi, Adwait, Jeff Reynar and Salim Roukos. 1994. A Maximum Entropy Model for Prepositional Phrase Attachment. In: Proceedings of the ARPA Human Language Technology Workshop. Plainsboro, NJ: 250-255.
Sag, Ivan, Timothy Baldwin, Francis Bond, Ann Copestake and Dan Flickinger. 2002. Multiword Expressions: A Pain in the Neck for NLP. In: Proceedings of the Third International Conference on Intelligent Text Processing and Computational Linguistics (CICLING 2002). Mexico City, Mexico: 1-15.
Salem, André. 1987. Pratique des segments répétés. Institut National de la Langue Française, Paris.
Stetina, Jiri and Makoto Nagao. 1997. Corpus Based PP Attachment Ambiguity Resolution with a Semantic Dictionary. In: Jou Zhao and Kenneth Church, editors, Proceedings of the Fifth Workshop on Very Large Corpora. Beijing and Hong Kong: 66-80.
Zifonun, Gisela, Ludger Hoffmann and Bruno Strecker, editors. 1997. Grammatik der deutschen Sprache, Bd. 3. (Schriften des Instituts für deutsche Sprache 7.) De Gruyter, Berlin / New York.
248,780,538
Flexible Visual Grounding
Existing visual grounding datasets are artificially made, where every query regarding an entity must be able to be grounded to a corresponding image region, i.e., answerable. However, in real-world multimedia data such as news articles and social media, many entities in the text cannot be grounded to the image, i.e., unanswerable, because the text does not necessarily directly describe the accompanying image. A robust visual grounding model should be able to flexibly deal with both answerable and unanswerable visual grounding. To study this flexible visual grounding problem, we construct a pseudo dataset and a social media dataset including both answerable and unanswerable queries. In order to handle unanswerable visual grounding, we propose a novel method that adds a pseudo image region corresponding to a query that cannot be grounded. The model is then trained to ground to ground-truth regions for answerable queries and to pseudo regions for unanswerable queries. In our experiments, we show that our model can flexibly process both answerable and unanswerable queries with high accuracy on our datasets. 1
[ 6308361, 52010710, 52967399 ]
Flexible Visual Grounding
Yongmin Kim yongmin@nlp.ist.i.kyoto-u.ac.jp Kyoto University, Kyoto, Japan
Chenhui Chu chu@nlp.ist.i.kyoto-u.ac.jp Kyoto University, Kyoto, Japan
Sadao Kurohashi Kyoto University, Kyoto, Japan
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, May 22-27, 2022

Existing visual grounding datasets are artificially made, where every query regarding an entity must be able to be grounded to a corresponding image region, i.e., answerable. However, in real-world multimedia data such as news articles and social media, many entities in the text cannot be grounded to the image, i.e., unanswerable, because the text does not necessarily directly describe the accompanying image. A robust visual grounding model should be able to flexibly deal with both answerable and unanswerable visual grounding. To study this flexible visual grounding problem, we construct a pseudo dataset and a social media dataset including both answerable and unanswerable queries. In order to handle unanswerable visual grounding, we propose a novel method that adds a pseudo image region corresponding to a query that cannot be grounded. The model is then trained to ground to ground-truth regions for answerable queries and to pseudo regions for unanswerable queries. In our experiments, we show that our model can flexibly process both answerable and unanswerable queries with high accuracy on our datasets. 1

Introduction
Starting from conventional vision-and-language tasks such as image captioning (Vinyals et al., 2015) and visual question answering (Wu et al., 2017), many studies have been conducted to promote joint vision-and-language understanding. Visual grounding, which aims to find a specific region in an image given a query regarding an entity, is a fundamental task for enhancing the performance of various joint vision-and-language tasks (Plummer et al., 2015). For instance, in image captioning, it is important to ground to the corresponding image region while generating words for that region; in VQA, it is crucial to understand to which image region the question is referring.

Figure 1: A comparison between previous visual grounding work and our flexible visual grounding work. In previous work, a query must be able to be grounded (see the left sub-figure), while our work can deal with both answerable and unanswerable visual grounding flexibly (in the right sub-figure, "two wonderful horses" can be grounded, while "my favorite picture," "a beautiful sunrise," and "a frosty day" cannot be grounded). The green bounding boxes are the ground-truth for answerable queries.

Because of the importance of visual grounding, many research efforts have been dedicated to improving its accuracy (Plummer et al., 2015; Wang et al., 2016a; Fukui et al., 2016; Wang et al., 2016b; Yeh et al., 2017; Plummer et al., 2017; Chen et al., 2017; Yu et al., 2018b; Yang et al., 2020a,b; Dong et al., 2021). Previous visual grounding work assumes that a query must always be able to be grounded to an image region, and many datasets have been created under this assumption, such as the Flickr30k entities (Plummer et al., 2015), RefClef (Kazemzadeh et al., 2014), RefCOCO, RefCOCO+ (Yu et al., 2016), RefCOCOg (Mao et al., 2016), and Visual7W datasets.
However, this assumption does not hold in real-world multimedia data such as news, TV dramas, and social media, where entities in the text cannot always be grounded to the visual data, because the text and the visual data do not necessarily directly correspond to each other. From here on, we call the case in which a query can be grounded to an image region answerable visual grounding, and the opposite case unanswerable visual grounding. Ignoring unanswerable visual grounding, as previous work does, can lead to problems in downstream tasks. For instance, in VQA, if the model cannot recognize that entities in the question cannot be grounded to the image, it cannot deal with the case in which a question cannot be answered given the image either. Therefore, a robust visual grounding model should be able to flexibly deal with both answerable and unanswerable visual grounding. In this work, we study this flexible visual grounding problem. Figure 1 compares our work with previous work.

To study flexible visual grounding, we construct two types of datasets. The first is a pseudo dataset, constructed by randomly selecting queries from other images and combining them with a target image in the RefCOCO+ dataset (Yu et al., 2016). The second is a social media dataset (SMD4FVG), which contains unanswerable real-world queries; we construct it by crawling tweets consisting of both images and text and annotating answerable and unanswerable queries via crowdsourcing. Previous visual grounding models cannot handle unanswerable visual grounding. To give a model the ability to flexibly identify whether the input query can be grounded or not, we propose a novel method for unanswerable visual grounding that adds a pseudo region corresponding to a query that cannot be grounded. The model is then trained to ground to ground-truth regions for answerable queries and to pseudo regions for unanswerable queries. Experiments conducted on both the pseudo and SMD4FVG datasets indicate that our model can flexibly process both answerable and unanswerable queries with high accuracy. In addition, we study whether the pseudo dataset can be used to improve accuracy on the SMD4FVG dataset. The contributions of this paper are threefold:

• We propose a flexible visual grounding task that includes unanswerable visual grounding, a problem that has not been studied before.
• We construct a pseudo dataset based on the RefCOCO+ dataset and a social media dataset based on tweets consisting of both images and text, annotated via crowdsourcing, for studying the flexible visual grounding task.
• We propose a flexible visual grounding model that can deal with both answerable and unanswerable queries and achieves high accuracy on our datasets.

Related Work
Previous visual grounding studies have been conducted on different datasets. In the Flickr30k entities dataset (Plummer et al., 2015), a query corresponds to a noun phrase (i.e., an entity) contained in a caption of an image. In the RefClef (Kazemzadeh et al., 2014), RefCOCO, RefCOCO+ (Yu et al., 2016), and RefCOCOg (Mao et al., 2016) datasets, a query is a phrase referring to an object in an image. In the Visual7W dataset (Zhu et al., 2016), a query corresponds to a question regarding an image region. However, none of these datasets considers unanswerable visual grounding. In contrast, we propose flexible visual grounding and construct a pseudo dataset and a social media dataset.
Regarding visual grounding models, Plummer et al. (2015) proposed a method based on canonical correlation analysis (Hardoon et al., 2004) that learns joint embeddings of phrases and image regions. Wang et al. (2016a) proposed a two-branch neural network for joint phrasal and visual embeddings. Fukui et al. (2016) used multimodal compact bilinear pooling to fuse phrasal and visual embeddings. Rohrbach et al. (2016) proposed a method to first detect a candidate region for a given phrase and then reconstruct the phrase using the detected region. Wang et al. (2016b) proposed an agreement-based method, which encourages semantic relations among phrases to agree with visual relations among regions. Yeh et al. (2017) proposed a framework that can search over all possible regions instead of a fixed number of region proposals. Plummer et al. (2017) used spatial relationships between pairs of phrases connected by verbs or prepositions. Chen et al. (2017) proposed a reinforcement learning-based model that rewards the grounding results with image-level context. Yu et al. (2018b) improved the region proposal network by training it on the Visual Genome dataset (Krishna et al., 2016) to increase the diversity of object classes and attribute labels. Sadhu et al. (2019) proposed to combine object detection and grounding models to deal with nouns unseen during training. Yang et al. (2020a) propagated relations among the noun phrases in a query based on its linguistic structure. Yang et al. (2020b) addressed long and complex queries by recursive sub-query construction. Dong et al. (2021) proposed a cross-lingual visual grounding task, which transfers knowledge from an English model to improve the performance of a French model.

Inspired by the success of pre-trained language models such as BERT (Devlin et al., 2019), vision-and-language pre-training on large image caption datasets such as the Conceptual Captions dataset (Sharma et al., 2018) has been proposed, with models such as ViLBERT (Lu et al., 2019), VL-BERT (Su et al., 2020), and UNITER (Chen et al., 2020). These models differ in their architectures. Vision-and-language pre-training is evaluated on tasks including visual grounding; however, as in previous studies, the visual grounding task does not consider unanswerable cases (Lu et al., 2019; Su et al., 2020; Chen et al., 2020). Our flexible visual grounding model is based on the multi-task ViLBERT model (Lu et al., 2020), which achieves state-of-the-art performance on visual grounding.

Dataset Construction
Because there are no existing visual grounding datasets that contain unanswerable queries, we present two ways to construct two types of datasets for studying the flexible visual grounding problem.

RefCOCO+ Pseudo Dataset
As the construction of a new large-scale dataset is costly and time-consuming, we first constructed a pseudo dataset based on the RefCOCO+ dataset (Yu et al., 2016) using the negative pair sampling method presented in (Yu et al., 2018a). To generate unanswerable data, we randomly select an image and a query of another image from the RefCOCO+ dataset and combine them as a pair of visual grounding data. Because the query is from a different image, we can assume that the query cannot be grounded to the selected image. However, there is still a possibility that the randomly selected query can be grounded to the image, which may lead to noise. We will discuss this problem in Section 6.1.
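This negative pair sampling can be sketched in a few lines. The snippet below is our own minimal illustration, not the authors' code; it assumes the RefCOCO+ examples are available as (image_id, query) pairs, and the function name and record format are ours.

    import random

    def make_pseudo_unanswerable(examples, ratio=0.5, seed=0):
        """Sketch of negative pair sampling in the spirit of (Yu et al., 2018a):
        pair an image with a query drawn from a different image and label the
        pair unanswerable. `examples` is a list of (image_id, query) pairs;
        ratio=0.5 yields a 1:2 unanswerable:answerable mixture."""
        rng = random.Random(seed)
        negatives = []
        n_neg = int(len(examples) * ratio)
        while len(negatives) < n_neg:
            (img, _), (other_img, other_query) = rng.sample(examples, 2)
            if img == other_img:
                continue  # the query must come from a different image
            # Note: as the paper points out, such a query may still happen to
            # be groundable in the target image, which introduces noise.
            negatives.append({"image_id": img, "query": other_query,
                              "answerable": False})
        return negatives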
Next, we combined the generated unanswerable data with the original RefCOCO+ dataset to make a pseudo dataset containing both answerable and pseudo unanswerable queries.

Social Media Dataset (SMD4FVG)
Unanswerable visual grounding exists in real-world multimedia data consisting of both text and visual information, such as news, TV dramas, and social media. Among these, social media is a typical case with much unanswerable visual grounding data, because the text and visual information posted by users are not necessarily closely related to each other. Due to this characteristic, in social media there could be more unanswerable visual grounding data than answerable ones. This might result in an unbalanced dataset, making training and evaluation difficult. In order to construct a balanced dataset, we propose the pipeline shown in Figure 2. We describe each step in detail in this section.

Data Crawling
To construct the SMD4FVG dataset, we first crawled image and text pairs from Twitter. We will follow the fair use policy of Twitter regarding copyright of the crawled data. 2 We used Twitter's official library tweepy 3 for this process. In order to inherit previous visual grounding studies, we decided to crawl data from the same domain as the RefCOCO+ dataset. To this end, we searched Twitter for hashtags that match the object classes in the RefCOCO+ dataset and only crawled the data that hit. As a result, 20,941 tweets of image and text pairs were crawled.

Image Filtering
In order to construct a visual grounding dataset balanced between answerable and unanswerable queries, we further filtered images from the crawled tweets. For the image filtering process, we used EfficientNet (Tan and Le, 2019) to classify images, YOLOv4 (Bochkovskiy et al., 2020) to detect objects, and CRAFT (Baek et al., 2019) to detect text in images. The EfficientNet model was pre-trained on the ImageNet dataset (Deng et al., 2009).

Figure 2: The pipeline for constructing the social media dataset. After crawling tweets containing both images and text, we first filter images that do not belong to the RefCOCO+ classes, contain less than two objects, or are dominated by text in the image, step by step. After that, we extract noun phrases as queries from the tweet text. Finally, we annotate answerable and unanswerable queries via crowdsourcing in two steps: in the first step, unanswerable queries are identified, and in the second step, bounding boxes are annotated for answerable queries.

With the same purpose of inheriting previous visual grounding studies, from the ImageNet classes output by EfficientNet, we only chose the classes similar to the RefCOCO+ classes and removed the others. When determining the similarities with the RefCOCO+ classes, we calculated the Wu & Palmer similarity (Wu and Palmer, 1994) and chose classes that surpassed a similarity score of 0.85. It measures similarity by considering the depths of the two synsets $(s_1, s_2)$ within the WordNet (Feinerer and Hornik, 2020) hierarchy, along with the depth of their least common subsumer (LCS), as:

$\mathrm{WuPalmer} = \frac{2 \cdot \mathrm{depth}(\mathrm{LCS}(s_1, s_2))}{\mathrm{depth}(s_1) + \mathrm{depth}(s_2)}$ (1)

As a result of the image classification-based filtering, the crawled 20,941 tweets decreased to 6,813 tweets.
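To make the Eq. (1) filter concrete, here is a minimal sketch using the WordNet interface from NLTK; the paper itself uses the wordnet R package, so the library choice and the label-to-synset lookup are our assumptions (a class name can map to several synsets, and we accept a class if any synset pair passes the threshold).

    from nltk.corpus import wordnet as wn

    def passes_class_filter(imagenet_label, refcoco_classes, threshold=0.85):
        """Keep an image whose predicted ImageNet class is similar enough
        (Wu & Palmer similarity >= threshold) to at least one RefCOCO+ class.
        wup_similarity implements 2*depth(LCS)/(depth(s1)+depth(s2))."""
        candidates = wn.synsets(imagenet_label.replace(" ", "_"), pos=wn.NOUN)
        for cls in refcoco_classes:
            for s2 in wn.synsets(cls.replace(" ", "_"), pos=wn.NOUN):
                for s1 in candidates:
                    sim = s1.wup_similarity(s2)
                    if sim is not None and sim >= threshold:
                        return True
        return False

    # Usage: passes_class_filter("sorrel", ["horse", "dog", "person"])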
For the next step, we filtered more tweets using the YOLOv4 object detection model, which was pre-trained on the Microsoft COCO dataset (Lin et al., 2014). We chose images that had two or more objects, because images with only a single object or only background are considered too easy for our task. As a result, 4,028 tweets were chosen from the 6,813 tweets.

In the crawled tweets, we found that many images consisted mostly of text and website information. As visual grounding is almost impossible for text/website-dominated images, we further filtered those images. To this end, we used the optical character recognition model of CRAFT. Based on its results, we calculated a text proportion ratio for each image and only kept images whose ratio was lower than 0.05 with respect to the entire image. As a result, 3,425 images were left.

Due to the limitations of the above image processing models, advertisement, inappropriate, and duplicate images were still left in the dataset after the filtering process. Therefore, we further manually checked the data and discarded such images. As a result, 988 tweets were finally left.
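A minimal sketch of the text-proportion filter above, under two simplifying assumptions of ours: CRAFT's quadrilateral detections are approximated by axis-aligned boxes, and overlapping boxes are counted twice.

    def text_area_ratio(text_boxes, image_w, image_h):
        """Fraction of the image covered by detected text regions.
        `text_boxes` holds (x_min, y_min, x_max, y_max) tuples derived
        from the text detector's output."""
        covered = sum((x2 - x1) * (y2 - y1) for x1, y1, x2, y2 in text_boxes)
        return covered / float(image_w * image_h)

    def keep_image(text_boxes, image_w, image_h, max_ratio=0.05):
        # Discard text/website-dominated images (the 0.05 cut-off from the paper).
        return text_area_ratio(text_boxes, image_w, image_h) < max_ratio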
Query Extraction
Tweets contain emoji, links, and mentions, which make query extraction difficult. Therefore, we pre-processed the data and eliminated those expressions. From the pre-processed text, we extracted sentences and used the chunking model of (Akbik et al., 2018) to chunk the noun phrases within the sentences. We did not use pronouns (such as he, her, she) or relative pronouns (such as which, who, that) as queries. As for complex noun phrases that contain other noun phrases within them, we split them and only used the single noun phrases as queries. As a result, we obtained 8,827 queries for the 988 images.

Crowdsourcing Annotation
From the 8,827 pairs of image and query obtained, we annotated image regions that can be grounded by queries and finally constructed the SMD4FVG dataset. For the annotation, we used Amazon Mechanical Turk; the compensation was 8-9 dollars per hour. The annotation process consists of two steps. 4

The first step is the "bounding box requirement" task. In this step, we asked workers if a query can be grounded, and if not, which of the following cases it belongs to: 1) What the query refers to cannot be seen in the image. 2) The query does not refer to something specific in the image but rather to the background. 3) The query is an abstract noun that might be confusing given the contents of the image. In case 1, the query refers to an entity, but the image does not contain that entity; for instance, in the right part of Figure 1, the entity of the query "my favorite picture" does not appear in the image. In case 2, if the query refers to the background of an image, different workers might annotate different regions, or, as there are many objects in the background, the definition of the background might be vague; for instance, in the right part of Figure 1, it is hard to clearly determine the region for the query "a beautiful sunrise," and many objects might end up in the annotation. Therefore, we asked workers to annotate this case as unanswerable. In case 3, if the query is an abstract noun, the judgment might differ among workers; for instance, for the query "sport," some workers might define "sport" as a person doing a sport and judge the query answerable given the contents of the image, while others might define "sport" as something invisible and judge the query unanswerable. Thus, we set this case as unanswerable as well.

As a result of the crowdsourcing annotation for this step, we obtained 6,941 unanswerable queries in total. The second step is the "drawing the bounding box" task. In this step, the annotation was done for data that were not annotated as unanswerable in the first step. Workers were asked to draw a bounding box for the image region corresponding to a query. The difficult part of this process arose when multiple instances corresponded to one query in an image. In this case, we instructed the workers to annotate multiple instances with one bounding box if the instances are not clearly separated; otherwise, they were annotated with individual bounding boxes. Besides that, queries in social media data can contain proper nouns, which are special compared to previous datasets and could be interesting to study; thus, we asked workers to indicate whether an answerable query is a proper noun. In total, 1,886 answerable queries were annotated, among which 576 queries are proper nouns.

Finally, we manually checked the results of the two steps. We checked 100 unanswerable pairs and found that 7 of them were wrongly labeled; most of these were simple misses where the entity that the query refers to does exist in the image, which we plan to improve in future work. In addition, we checked and corrected the bounding boxes that were mislabeled by workers for all answerable pairs. As a result, we obtained 8,827 annotated query and image pairs for our SMD4FVG dataset.

Flexible Visual Grounding Model
We propose to add a pseudo region to a visual grounding model to achieve flexible visual grounding for both answerable and unanswerable queries. An overview of our proposed model is shown in Figure 3. In this section, we first present our visual grounding model, followed by the way to add pseudo regions for unanswerable queries.

Visual Grounding Model
Our visual grounding model follows (Lu et al., 2020) and consists of two stages. In the first stage, we extract region proposals and feature vectors of all regions with an object detection model; we employ the Faster R-CNN (Ren et al., 2015) model. In the second stage, a similarity score between each region proposal and the input query is calculated; we utilize the multi-task ViLBERT for this calculation. Our model is trained to minimize a binary cross-entropy (BCE) loss between a label vector and a similarity score vector, similar to (Sadhu et al., 2019). At inference time, the input query is grounded to the region with the highest similarity score.

In detail, after extracting a feature vector $f_v \in \mathbb{R}^{d_v}$ for a region proposal with Faster R-CNN, a spatial vector $f_s \in \mathbb{R}^5$ is incorporated into it. The spatial vector is encoded as a 5-d vector from the normalized top-left and bottom-right coordinates as:

$f_s = \left[\frac{x_{tl}}{W}, \frac{y_{tl}}{H}, \frac{x_{br}}{W}, \frac{y_{br}}{H}, \frac{wh}{WH}\right]$, (2)

where $(x_{tl}, y_{tl})$ is the top-left coordinate, $(x_{br}, y_{br})$ is the bottom-right coordinate, $w$ and $h$ are the width and the height of the region, and $W$ and $H$ are the width and the height of the image, respectively.

Figure 3: The proposed flexible visual grounding model. For an unanswerable query, we add a pseudo region and train the model to ground the query to the pseudo region.

The spatial vector is then projected to match
the dimension of the visual feature by a learnable weight matrix $W_s \in \mathbb{R}^{5 \times d_v}$ and then added to $f_v$ to generate the final region feature vector $v_r$ as:

$v_r = f_v + W_s f_s$. (3)

The query, which is given in both training and inference, is denoted as $q$. Next, $v_r$ and $q$ are input to the multi-task ViLBERT model, which generates a representation $h_i \in \mathbb{R}^{d_i}$ for the $i$-th region and the query as:

$h_i = \mathrm{ViLBERT}(v_r, q)$. (4)

$h_i$ is then used to calculate a similarity score for the $i$-th region by:

$s_i = W_i h_i$, (5)

where $W_i \in \mathbb{R}^{d_i \times 1}$ is a learnable weight matrix. The ground-truth label score is set to 1 if the IoU between a region proposal and the ground-truth region is larger than 0.5; otherwise, it is set to 0. The similarity score $s_{ji}$ and the ground-truth label $l_{ji}$ for the $i$-th region in the $j$-th image are then used to minimize a BCE loss:

$\mathrm{BCE} = -\frac{1}{N} \sum_{j=1}^{N} \sum_{i=1}^{M} l_{ji} \log(s_{ji}) + (1 - l_{ji}) \log(1 - s_{ji})$, (6)

where $N$ is the number of image and query pairs in a dataset, and $M$ is the number of region proposals for an image.

Pseudo Region
To make our visual grounding model deal with unanswerable queries, we propose to incorporate a pseudo region corresponding to an unanswerable query into the region proposals. An example is shown in Figure 3: the input query "man is playing baseball" is not related to the input image, which shows feet and clocks, so the query cannot be grounded to the image. For this query, we add a pseudo region to the regions proposed by Faster R-CNN (Ren et al., 2015). The position of the pseudo region is set to the top-left of the input image, and all the x and y coordinate values of its spatial vector in Eq. (2) are set to 0. All components of the feature vector $f_v \in \mathbb{R}^{d_v}$ of the pseudo region are set to +1. Our visual grounding model calculates the similarity score between the pseudo-region-augmented region vectors and the query exactly as in Section 4.1, and is trained to give the highest similarity score to the pseudo region when the query cannot be grounded. During inference, the model outputs the region with the highest score as the prediction; in the example of Figure 3, the pseudo region will be chosen because the input query does not correspond to the input image.
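The pseudo-region mechanism and Eqs. (3) and (6) can be paraphrased in a few lines of PyTorch. This is our own sketch, not the released implementation: the ViLBERT scoring head is abstracted away, and we use the logits form of the BCE for numerical stability, whereas Eq. (6) is written over probabilities.

    import torch
    import torch.nn.functional as F

    def add_pseudo_region(f_v, f_s):
        """Append the pseudo region of Section 4.2 to the M proposals:
        visual feature of all +1s, spatial vector of all zeros (top-left).
        f_v: (M, d_v) Faster R-CNN features; f_s: (M, 5) spatial vectors."""
        pseudo_v = torch.ones(1, f_v.size(1))
        pseudo_s = torch.zeros(1, 5)
        return torch.cat([f_v, pseudo_v], 0), torch.cat([f_s, pseudo_s], 0)

    class RegionEncoder(torch.nn.Module):
        """Eq. (3): v_r = f_v + W_s f_s, with W_s a learnable 5 x d_v map."""
        def __init__(self, d_v=2048):
            super().__init__()
            self.W_s = torch.nn.Linear(5, d_v, bias=False)

        def forward(self, f_v, f_s):
            return f_v + self.W_s(f_s)

    def grounding_loss(scores, labels):
        """Eq. (6) for one image-query pair: `scores` are per-region logits
        of shape (M+1,); `labels` (float) put 1 on regions with IoU > 0.5
        for an answerable query, or 1 on the last (pseudo) region otherwise."""
        return F.binary_cross_entropy_with_logits(scores, labels)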
Experimental Settings
In our experiments, we verify the effectiveness of the proposed model on both the RefCOCO+ pseudo and SMD4FVG datasets. Here, we first describe the statistics and settings of each dataset, followed by the training details.

Settings on the RefCOCO+ Pseudo Dataset
For the pseudo dataset, we generated unanswerable data based on the RefCOCO+ dataset and combined them with the original dataset at a ratio of 1:2. The upper part of Table 1 shows the statistics of the pseudo dataset. For the pseudo dataset, we investigated the performance of our model with the following settings:

• RefCOCO+: A baseline that trains our visual grounding model of Section 4 on the original RefCOCO+ dataset to evaluate answerable visual grounding only; we compared its performance with (Lu et al., 2020).
• RefCOCO+Thres: A baseline based on the RefCOCO+ setting that sets a threshold on the similarity score (Eq. (5)) distribution for all queries during inference. Queries whose highest similarity score falls below the threshold are treated as unanswerable, otherwise as answerable. The threshold was tuned on the validation split of the pseudo dataset to achieve the highest accuracy over all queries.
• Pseudo: We directly trained and evaluated our model on the pseudo dataset.
• SM→Pseudo: We first trained our model on the training data of the SMD4FVG dataset and then fine-tuned it on the pseudo dataset, hoping that the annotated SMD4FVG dataset could boost the performance on the pseudo dataset.

Settings on the SMD4FVG Dataset
The lower part of Table 1 shows the statistics of the SMD4FVG dataset, where we split the annotated 8,827 query and image pairs into train/validation/test with a 69%:16%:15% distribution. We evaluated the performance on the SMD4FVG dataset with the following settings:

• RefCOCO+Thres: A baseline similar to the RefCOCO+Thres setting on the pseudo dataset, but with the threshold tuned on the validation split of the SMD4FVG dataset.
• Pseudo: Aiming to investigate the difference between the pseudo and SMD4FVG datasets, we trained our model on the training data of the pseudo dataset and evaluated it on the SMD4FVG dataset.
• SM: A straightforward setting that directly trains and evaluates our visual grounding model on the SMD4FVG dataset.
• Pseudo→SM: We first trained our model on the training data of the pseudo dataset and then fine-tuned it on the SMD4FVG dataset, hoping that the large scale of the pseudo dataset could boost the performance on the SMD4FVG dataset.

Training Details
Visual features and region proposals were extracted with the ResNeXt-152 Faster R-CNN model (Ren et al., 2015) trained on the Visual Genome dataset (Krishna et al., 2016) with an attribute loss; it was not fine-tuned during our training. We used the multi-task ViLBERT model to calculate the similarity score between region proposals and the query; it contains 6 and 12 layers of transformer blocks for the visual and linguistic streams, respectively, and was trained simultaneously on 4 vision-and-language tasks over 12 datasets. We set the region feature dimension $d_v$ to 2,048, the joint ViLBERT representation dimension $d_i$ to 1,024, and the number of region proposals per image to 100. We trained our model on 8 TitanX GPUs with a batch size of 256 for 20 epochs, using the AdamW optimizer with a linear warmup and linear decay learning rate scheduler, following (Lu et al., 2020), for all settings.
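For the RefCOCO+Thres baseline, the threshold search can be as simple as the sketch below. This reflects our reading of the setting: "accuracy" is taken to be the binary answerable/unanswerable decision, and the variable names are ours.

    import numpy as np

    def tune_threshold(val_scores, val_answerable):
        """Pick the threshold on the maximum similarity score that maximizes
        answerable/unanswerable accuracy on the validation split.
        val_scores: list of 1-D arrays of region scores, one per query;
        val_answerable: list of booleans (True = answerable)."""
        maxima = np.array([s.max() for s in val_scores])
        gold = np.array(val_answerable)
        best_t, best_acc = None, -1.0
        for t in np.unique(maxima):
            preds = maxima >= t  # predicted answerable
            acc = float(np.mean(preds == gold))
            if acc > best_acc:
                best_t, best_acc = t, acc
        return best_t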
Results
Results on the Pseudo Dataset
The upper part of Table 2 shows the accuracy of our model on the pseudo dataset. For the RefCOCO+ setting, our model achieves an accuracy of 73.3%, which is almost the same as the 73.2% we obtained when evaluating the original model of (Lu et al., 2020) with their code. This indicates that adding a pseudo region has little effect on the performance for answerable visual grounding; however, this setting cannot deal with unanswerable queries due to the absence of such data in the RefCOCO+ dataset. The RefCOCO+Thres setting works well for answerable queries but fails for unanswerable ones; the similarity score distribution is shown in Appendix B.

For the Pseudo setting, our model achieves accuracies of 69.7% and 91.2% for answerable and unanswerable queries, respectively. Our model can thus detect unanswerable queries with high accuracy, but it drops 2.6 points for answerable queries compared to the RefCOCO+ setting. We think the reason is the mixture of unanswerable queries into the original RefCOCO+ dataset, which makes the judgment of answerable visual grounding more complex. SM→Pseudo only slightly boosts the overall accuracy, due to the small scale of the SMD4FVG dataset. Some incorrect predictions for unanswerable queries are due to the randomness of the dataset; qualitative examples can be found in Appendix C.

Results on the SMD4FVG Dataset
The lower part of Table 2 shows the accuracy of our model on the SMD4FVG dataset. We can see that the RefCOCO+Thres setting forces all queries to be unanswerable; the similarity score distribution can be found in Appendix B. Among the other three settings, the Pseudo setting achieves the highest accuracy for answerable queries, 49.7%. We think the reason is that there are only a few answerable queries in the SMD4FVG dataset, while both the amount and the ratio of answerable queries are higher in the pseudo dataset, so the model learns answerable grounding well. However, its accuracy for unanswerable queries is only 65.6%, significantly worse than the other two settings that use the SMD4FVG dataset for training. We attribute this to the different characteristics of unanswerable queries in the two datasets: in the pseudo dataset the unanswerable queries are simply unrelated to the images, while in the SMD4FVG dataset they are more complex.

The SM setting achieves a high accuracy of 95.0% for unanswerable queries and the best accuracy of 81.7% over all queries; the reason can be that under this setting our model is optimized on the SMD4FVG dataset directly. However, the accuracy for answerable queries under the SM setting is the lowest, due to the small ratio of answerable queries and the complexity of answerable queries in the SMD4FVG dataset. The Pseudo→SM setting achieves a trade-off between the Pseudo and SM settings: it improves over the SM setting for answerable queries and improves substantially over the Pseudo setting for unanswerable queries. We think this is because Pseudo→SM balances the two settings by fine-tuning the model pre-trained on the pseudo dataset on the SMD4FVG dataset. We also observe a 1% drop in accuracy over all queries from SM to Pseudo→SM; we think this is caused by the large ratio of unanswerable queries in the SMD4FVG dataset, which biases the SM model toward unanswerable queries and thus yields better accuracy over all queries. Qualitative examples can be found in Appendix C.

For both the pseudo and SMD4FVG datasets, we observe better performance on unanswerable queries than on answerable queries, except for RefCOCO+Thres on the pseudo dataset. We think the reason could be that it is much easier for our models to learn that a query is unrelated to an image (i.e., unanswerable) than to find the exact region that a query refers to (i.e., answerable).

Conclusion
Previous studies on visual grounding ignored the case of unanswerable queries, which is common in real-world data such as social media. In this paper, we proposed flexible visual grounding to address both answerable and unanswerable visual grounding. To this end, we constructed a pseudo dataset based on the RefCOCO+ dataset and a social media dataset based on tweets consisting of both images and text, annotated via crowdsourcing. In addition, we proposed a flexible visual grounding model that can deal with both answerable and unanswerable queries. Experiments on our datasets indicated that our model can achieve high accuracy, especially for unanswerable queries, but there is still room for further improvement.
To make our social media dataset balanced, we constrained it to the RefCOCO+ classes, which may also limit the ability of our model on real-world data. In the future, we plan to construct a dataset without such constraints.

A Annotation Interfaces
Figure 4 shows a screenshot of the first step of crowdsourcing, the "bounding box requirement" task. We instruct workers to check whether the given query is answerable or not; for unanswerable queries, we further ask workers to check which unanswerable type the query belongs to. Figure 5 shows a screenshot of the second step of crowdsourcing, the "drawing the bounding box" task. For an answerable query, we instruct workers to draw bounding boxes for the regions to which the query refers.

B Similarity Score Distribution
Figure 6 shows the similarity score distribution of the RefCOCO+Thres setting on the test sets of the pseudo dataset and the SMD4FVG dataset, respectively. We can see that the similarity score and the grounding possibility have a very low correlation.

C Qualitative Examples
Figure 7 shows examples of our model with the RefCOCO+ setting on unanswerable queries in the pseudo dataset. We can see that the RefCOCO+ setting cannot identify unanswerable queries and gives wrong predictions for them. However, there are also some ambiguous queries, such as those in examples 1, 6, and 7, for which we cannot confidently claim that the predictions are wrong, due to the random combination characteristics of unanswerable queries in the pseudo dataset.

Figure 8 shows example outputs of our model with the Pseudo setting. Examples 1 and 2 in Figure 8 are two successful examples of answerable visual grounding; our model can ground queries both with and without modifiers. Examples 3 and 4 are two successful examples of unanswerable visual grounding; for queries that are unrelated to the images, our model correctly identifies that they cannot be grounded. Examples 5 and 6 are two unsuccessful examples of answerable visual grounding: our model fails on example 5, where the ground-truth is the other person with the number 160 on the vest; in example 6, the query "taller one" is itself ambiguous, and our model judges that it cannot be grounded, while the ground-truth in the RefCOCO+ dataset is the taller refrigerator. Although our model achieves 91.2% accuracy for unanswerable queries, it still makes some mistakes. Examples 7 and 8 in Figure 8 show two unsuccessful examples of unanswerable visual grounding: in example 7, the query "lady" actually can be grounded, but it is annotated as unanswerable in our pseudo dataset because the query was taken from another image at random and happens to be groundable by coincidence; the query in example 8 is again ambiguous, so it is difficult to claim that our model is wrong here.

Figure 9 shows example outputs of our model with the SM setting, which achieves the best overall accuracy among the three settings. Examples 1 and 2 in Figure 9 are two successful examples of answerable visual grounding; our model can ground both a single object (example 1) and multiple objects (example 2).
Examples 3 and 4 in Figure 9 are two successful examples of unanswerable visual grounding; our model correctly identifies that the abstract-noun query "sport" and the query "the east coast," which cannot be inferred from the image directly, cannot be grounded. Examples 5 and 6 are two unsuccessful examples of answerable visual grounding: in example 5, the query "airbus320ceo" is a proper noun, which is difficult to ground; in example 6, "coach" is difficult to infer from the image even though "bus" is clear. Examples 7 and 8 show two unsuccessful examples of unanswerable visual grounding: in example 7, due to a failure of our query extraction model, an adjective query "automotive" was generated, which should not be grounded; in example 8, the image shows a human dressed up as a bear rather than a real bear, and thus the query should not be grounded.

Figure 4: The bounding box requirement interface. This is the first step of crowdsourcing. In this step, we instruct workers to check whether the given query is answerable or not. If the query is unanswerable, we ask workers to further check which unanswerable type the query is.

Figure 5: The drawing bounding box interface. This is the second step of crowdsourcing. In this step, we instruct workers to draw bounding boxes to which the query refers. The annotation is done for query and image pairs that are classified as answerable in the first step.

Figure 6: The similarity score distribution of the RefCOCO+Thres setting on the test sets of the pseudo dataset and SMD4FVG dataset, respectively. The x-axis and y-axis denote the similarity/confidence score and density, respectively. The solid blue and orange curves represent answerable and unanswerable queries, respectively. The vertical dotted red lines denote the thresholds.

Figure 7: Examples of visual grounding for unanswerable queries in the pseudo dataset. The blue bounding boxes are the predictions of our model with the RefCOCO+ setting.

Figure 8: Examples of successful (top) and unsuccessful (bottom) visual grounding for answerable and unanswerable queries in the pseudo dataset. The green and blue bounding boxes are the ground-truth and the predictions of our model with the Pseudo setting, respectively.

Figure 9: Examples of successful (top) and unsuccessful (bottom) visual grounding for answerable and unanswerable queries in the SMD4FVG dataset. The green and blue bounding boxes are the ground-truth and the predictions of our model with the SM setting, respectively.
Box Requirement & Annotation Filtered Tweets Unfiltered Tweets Crawled Tweet Text & Image Tweets ⋯ Text & Image Tweets Text Pre-processing & Extraction of queries I think my favourite picture of the morning, two wonderful horses, Angel and Rannoch and a beautiful sunrise on a frosty day. The social media dataset is available at https:// github.com/ku-nlp/SMD4FVG. https://help.twitter.com/en/ rules-and-policies/fair-use-policy 3 https://www.tweepy.org/ The screenshot of the interfaces for these two steps can be found in Appendix A. AcknowledgementThis work was supported by ACT-I, JST. Contextual string embeddings for sequence labeling. Alan Akbik, Duncan Blythe, Roland Vollgraf, COLING 2018, 27th International Conference on Computational Linguistics. Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In COLING 2018, 27th International Conference on Computational Linguistics, pages 1638-1649. Character region awareness for text detection. Youngmin Baek, Bado Lee, Dongyoon Han, Sangdoo Yun, Hwalsuk Lee, abs/1904.01941CoRRYoungmin Baek, Bado Lee, Dongyoon Han, Sangdoo Yun, and Hwalsuk Lee. 2019. Character region awareness for text detection. CoRR, abs/1904.01941. YOLOv4: Optimal Speed and Accuracy of Object Detection. Alexey Bochkovskiy, Chien-Yao Wang, Hong-Yuan Mark Liao, arXiv:2004.10934arXiv e-printsAlexey Bochkovskiy, Chien-Yao Wang, and Hong- Yuan Mark Liao. 2020. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv e-prints, page arXiv:2004.10934. Query-guided regression network with context policy for phrase grounding. Kan Chen, Rama Kovvuri, Ram Nevatia, ICCV. Kan Chen, Rama Kovvuri, and Ram Nevatia. 2017. Query-guided regression network with context policy for phrase grounding. In ICCV, pages 824-832. Uniter: Universal image-text representation learning. Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, Jingjing Liu, ECCV. Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. Uniter: Universal image-text representation learning. In ECCV. Imagenet: A large-scale hierarchical image database. J Deng, W Dong, R Socher, L Li, Kai Li, Li Fei-Fei, 10.1109/CVPR.2009.52068482009 IEEE Conference on Computer Vision and Pattern Recognition. J. Deng, W. Dong, R. Socher, L. Li, Kai Li, and Li Fei- Fei. 2009. Imagenet: A large-scale hierarchical im- age database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255. BERT: Pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, NAACL: HLT. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In NAACL: HLT, pages 4171-4186. Cross-lingual visual grounding. Wenjian Dong, Mayu Otani, Noa Garcia, Yuta Nakashima, Chenhui Chu, 10.1109/ACCESS.2020.3046719IEEE Access. 9Wenjian Dong, Mayu Otani, Noa Garcia, Yuta Nakashima, and Chenhui Chu. 2021. Cross-lingual visual grounding. IEEE Access, 9:349-358. wordnet: Word-Net Interface. Ingo Feinerer, Kurt Hornik, R package version 0.1-15Ingo Feinerer and Kurt Hornik. 2020. wordnet: Word- Net Interface. R package version 0.1-15. Multimodal compact bilinear pooling for visual question answering and visual grounding. Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, Marcus Rohrbach, EMNLP. 
Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. Multimodal compact bilinear pooling for visual question answering and visual grounding. In EMNLP, pages 457-468.
David R. Hardoon, Sandor R. Szedmak, and John R. Shawe-Taylor. 2004. Canonical correlation analysis: An overview with application to learning methods. Neural Computation, 16(12):2639-2664.
Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. 2014. ReferItGame: Referring to objects in photographs of natural scenes. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 787-798, Doha, Qatar. Association for Computational Linguistics.
Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael Bernstein, and Li Fei-Fei. 2016. Visual Genome: Connecting language and vision using crowdsourced dense image annotations.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. Microsoft COCO: Common objects in context. In ECCV, pages 740-755.
Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. ViLBERT: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In NeurIPS, pages 13-23.
Jiasen Lu, Vedanuj Goswami, Marcus Rohrbach, Devi Parikh, and Stefan Lee. 2020. 12-in-1: Multi-task vision and language representation learning. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L. Yuille, and Kevin Murphy. 2016. Generation and comprehension of unambiguous object descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Bryan A. Plummer, Arun Mallya, Christopher M. Cervantes, Julia Hockenmaier, and Svetlana Lazebnik. 2017. Phrase localization and visual relationship detection with comprehensive image-language cues. In ICCV, pages 1928-1937.
Bryan A. Plummer, Liwei Wang, Chris M. Cervantes, Juan C. Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In ICCV, pages 2641-2649.
Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, pages 91-99.
Anna Rohrbach, Marcus Rohrbach, Ronghang Hu, Trevor Darrell, and Bernt Schiele. 2016. Grounding of textual phrases in images by reconstruction. In ECCV, pages 817-834.
Arka Sadhu, Kan Chen, and Ram Nevatia. 2019. Zero-shot grounding of objects from natural language queries. In The IEEE International Conference on Computer Vision (ICCV).
Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In ACL, pages 2556-2565.
Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2020. VL-BERT: Pre-training of generic visual-linguistic representations. In ICLR.
Mingxing Tan and Quoc V. Le. 2019. EfficientNet: Rethinking model scaling for convolutional neural networks. CoRR, abs/1905.11946.
Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In CVPR, pages 3156-3164.
Liwei Wang, Yin Li, and Svetlana Lazebnik. 2016a. Learning deep structure-preserving image-text embeddings. In CVPR, pages 5005-5013.
Mingzhe Wang, Mahmoud Azab, Noriyuki Kojima, Rada Mihalcea, and Jia Deng. 2016b. Structured matching for phrase localization. In ECCV, pages 696-711.
Qi Wu, Damien Teney, Peng Wang, Chunhua Shen, Anthony Dick, and Anton van den Hengel. 2017. Visual question answering: A survey of methods and datasets. CVIU, pages 1-20.
Zhibiao Wu and Martha Palmer. 1994. Verb semantics and lexical selection. CoRR, abs/cmp-lg/9406033.
Sibei Yang, Guanbin Li, and Yizhou Yu. 2020a. Propagating over phrase relations for one-stage visual grounding. In ECCV.
Zhengyuan Yang, Tianlang Chen, Liwei Wang, and Jiebo Luo. 2020b. Improving one-stage visual grounding by recursive sub-query construction. In ECCV.
Raymond Yeh, Jinjun Xiong, Wen-Mei W. Hwu, Minh Do, and Alexander G. Schwing. 2017. Interpretable and globally optimal prediction for textual grounding using image concepts. In NIPS, pages 1909-1919.
Licheng Yu, Zhe Lin, Xiaohui Shen, Jimei Yang, Xin Lu, Mohit Bansal, and Tamara L. Berg. 2018a. MAttNet: Modular attention network for referring expression comprehension. In CVPR.
Licheng Yu, Patrick Poirson, Shan Yang, Alexander C. Berg, and Tamara L. Berg. 2016. Modeling context in referring expressions. In ECCV.
Zhou Yu, Jun Yu, Chenchao Xiang, Zhou Zhao, Qi Tian, and Dacheng Tao. 2018b. Rethinking diversified and discriminative proposal generation for visual grounding. In IJCAI, pages 1114-1120.
Yuke Zhu, Oliver Groth, Michael Bernstein, and Li Fei-Fei. 2016. Visual7W: Grounded question answering in images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
170,785,117
[]
An Evaluation of the Impact of Text Types on the Topic Segmentation Task

TALN 2010, Montréal, July 19-23, 2010

Clémentine Adam adam@univ-tlse2.fr CLLE, Université de Toulouse
Philippe Muller muller@irit.fr IRIT, Université de Toulouse & Alpage / INRIA
Cécile Fabre cfabre@univ-tlse2.fr CLLE, Université de Toulouse

Keywords: Text segmentation, textual organisation, lexical cohesion, distributional neighbours

This study aims to contribute to the definition of the objectives of topic segmentation (TS), by arguing that the text-type parameter should be taken into consideration in this task. Our hypothesis is that, while TS is certainly relevant for processing certain texts whose organisation is genuinely thematic, it is not suited to handling other modes of organisation (temporal, rhetorical), and cannot be applied without precaution to arbitrary texts. By comparing the performance of a TS system on two corpora, with "strong" and "weak" thematic organisation respectively, we show that this task is indeed sensitive to the nature of the texts.

Abstract. This paper aims to contribute to a better definition of the requirements of the text segmentation task, by stressing the need for taking into account the types of texts that can be appropriately considered. Our hypothesis is that while TS is indeed relevant to analyse texts with a thematic organisation, this task is ill-fitted to deal with other modes of text organisation (temporal, rhetorical, etc.). By comparing the performance of a TS system on two corpora, with either a "strong" or a "weak" thematic organisation, we show that TS is sensitive to text types.

Introduction
Topic segmentation (TS), which consists in delimiting textual segments of homogeneous content on the basis of lexical break cues, has proven its feasibility and its usefulness in various NLP tasks (Hearst, 1997; Chen et al., 2009). Even though some work has sought to take into account break cues (cue-phrases) of various natures, for example (Litman & Passonneau, 1995), this task generally rests on the assumption that texts are organised mainly along a thematic plan, each topic standing out through the use of a vocabulary specific enough to distinguish it from the others. This assumption is, however, far from consensual in research on discourse, which shows on the contrary that thematic organisation is only one mode of text organisation among others (Péry-Woodley & Scott, 2006). It is not relevant for all types of texts, and, for a given text, it does not exclude other, alternative types of organisation. In particular, many studies have examined texts from the angle of their rhetorical organisation, which is articulated around segments identified as functional units determined by specific argumentative goals: 'argumentative moves' (Swales, 1990), 'argumentative zoning' (Teufel, 1999). These studies have shown that different segments can be characterised by bundles of linguistic features of an essentially grammatical nature, and they do not necessarily consider the distribution of vocabulary as a discriminating criterion (Biber et al., 2007).
In fact, it is by no means self-evident that texts organised thematically, rhetorically, temporally, or through a combination of these modes can be approached with the same segmentation methods, nor that lexical cues are always discriminating for placing breaks between textual segments. Tables 1 and 2 give examples of texts taken from Wikipédia that illustrate this diversity of modes of organisation. The list of first-level section titles gives a good overview of how each text is organised. Table 1 shows texts whose thematic organisation is manifest.

Table 1: First-level section titles of two thematically organised articles.

Le Malawi ("Malawi")          Le panda géant ("The giant panda")
- Histoire (History)          - Historique (Background)
- Politique (Politics)        - Légende (Legend)
- Géographie (Geography)      - Alimentation (Diet)
- Économie (Economy)          - Reproduction (Reproduction)
- Démographie (Demography)    - Protection (Protection)
- Culture (Culture)

2 Topic segmentation: an overview of methods
Topic segmentation aims at the linear division of a text into units that are coherent around a subject. The vast majority of TS approaches build on the method initiated by (Hearst, 1997): the text is divided into contiguous blocks corresponding to a unit fixed in advance (a number n of words, sentences or paragraphs), and a sliding window then traverses the text linearly, allowing a similarity score to be computed at each boundary between blocks in the text. A segmentation method then seeks the points where the similarity evolves sharply, interpreted as indications of a break in thematic continuity.

An alternative to this tiling approach is to posit underlying topics to be detected: each unit of the text under consideration is related to one or several topics, and segmenting consists in finding these topics (Chen et al., 2009; Ferret, 2007). The topics can be predicted by "topic models" (Chen et al., 2009), a form of Latent Dirichlet Allocation (LDA), which are associated with different lexical distributions, or by lexical associations computed from the texts, for example through upstream clustering (Ferret, 2007). Once the topics have been identified for each text unit, the segments correspond to blocks of contiguous units sharing the same topic.

In tiling approaches, the simplest similarity measure between blocks is based on the number of lexical repetitions (often only nouns), relative to the number of units present. Variants then play on smoothing the evolution of the similarity by taking different contexts into account (more neighbouring blocks, and more interactions between them), or on normalising the lexical cohesion links, for instance with local tf.idf measures (Malioutov & Barzilay, 2006). But similarity can also derive from other sources: collocations (Ferret, 2002), similarity in a lexical space of reduced dimension, for example through latent semantic analysis (Choi et al., 2001; Bestgen & Piérard, 2006), which is close to the LDA mentioned above, or similarity in the distribution of lexical units (Adam & Morlane-Hondère, 2009). All these methods are expected to provide a form of smoothing that prevents possible gaps in surface repetitions, insofar as they bring out a wide range of semantic proximity links.
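To make the tiling family of methods concrete, here is a minimal sketch of the simplest block similarity, a cosine between the bag-of-words vectors on each side of a candidate boundary, in the manner of (Hearst, 1997); restricting the vocabulary to nouns, or replacing raw counts with tf.idf weights, gives the variants mentioned above.

    import math
    from collections import Counter

    def block_cosine(left_words, right_words):
        """Cosine similarity between the two blocks around a candidate
        boundary; `left_words` and `right_words` are lists of lemmas."""
        a, b = Counter(left_words), Counter(right_words)
        dot = sum(a[w] * b[w] for w in a if w in b)
        norm_a = math.sqrt(sum(v * v for v in a.values()))
        norm_b = math.sqrt(sum(v * v for v in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0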
In the system we developed, described in the next section, we compare two types of similarity, using on the one hand simple repetitions and on the other hand a distributional similarity measure. We could have drawn on other approaches relying on richer similarities, but for our purpose (comparing the performance of a TS system across text types) it suffices that our system performs at a level comparable to the state of the art. Moreover, generative approaches (topic models) have the drawback of requiring training on texts representative of the topics to be found, which makes the technique somewhat circular for what we seek to study.[1]
Our TS system, developed within the VOILADIS project,[2] uses a linear approach in the manner of (Hearst, 1997), and relies on a database of distributional neighbours to compute lexical similarity scores. The distributional neighbour database was generated from a corpus made up of all the articles of the French-language version of Wikipédia, i.e. more than 470,000 articles and 194 million words. The distributional analysis program runs on the output of the SYNTEX parser (Bourigault, 2007). Syntactic triples <governor, relation, dependent> (e.g. <départ,POUR,destination>, 'departure, FOR, destination') provide the data used to relate pairs of neighbours with Lin's measure. Our processing chain is as follows:
- Neighbourhood links, optionally weighted by their Lin score, are projected onto the texts; repetitions are also taken into account and given a score of 1. A few neighbour-filtering parameters are tested: thresholds on the number of neighbours a word may have, and on the root mean square distance of these neighbours to the word's position (so that words with few neighbours, or whose neighbours are close to their position in the text, are favoured).
- The text is traversed by a sliding window in order to compute local cohesion scores (a sketch of this score is given after this list). The segmentation unit, as well as the window size in number of units, are parameters. The possible segmentation units are (i) the sentence and (ii) the fixed-size block of words. For example, if the chosen unit is the sentence and the window size is set to 6, a score is computed at the end of each sentence, based on the number of links between the group of three preceding sentences and the group of three following sentences (fig. 1); this number is normalised by the number of possible links (the product of the numbers of nouns, verbs and adjectives to the left and to the right of the window boundary). The vertical bars indicate the reference segmentation (that is, the positions of section titles).
- Valleys (dips in the curve) whose depth exceeds one standard deviation from the mean depth are taken to correspond to breaks in the text. These breaks, which, depending on the chosen unit, fall at best at a sentence boundary but may also occur in the middle of a sentence, are moved to the nearest paragraph boundary, which yields the final segmented text.
Many parameters of our system are thus adjustable; they are summarised in Table 3.
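A minimal sketch of the normalised, link-based cohesion score described in the second step above; the function and argument names are ours, and the Lin-score lookup is simplified to a dictionary:

```python
def cohesion_score(left_lemmas, right_lemmas, neighbour_score):
    """Link-based cohesion at a window boundary. left_lemmas/right_lemmas:
    nouns, verbs and adjectives on each side; neighbour_score: dict mapping
    a lemma pair to its Lin score (a repetition counts as a link of score 1)."""
    total = 0.0
    for a in left_lemmas:
        for b in right_lemmas:
            if a == b:
                total += 1.0                        # simple repetition
            else:
                total += neighbour_score.get((a, b), 0.0)
    possible = len(left_lemmas) * len(right_lemmas)  # normalisation term
    return total / possible if possible else 0.0
```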
Given this abundance of parameters, in the experimental phase of our study, presented in the next section, we use a development corpus to optimise them. We report two results for our system, depending on the types of lexical cohesion links taken into account: simple lemma repetitions, or distributional neighbourhood. We also generated two simplified baselines that give an idea of the differences one may observe on the Pk and WD measures, which are not necessarily easy to interpret. The first baseline (called "exact random" below) places breaks at random, but in a number matching the reference. It controls for how easy it is to get close to the true boundaries, given the average number of segments relative to text length. A second, related baseline ("noisy random") perturbs the exact number of breaks, varying it at random within an interval of 30% of the true number of breaks.
Table 7: Results for the THEM sub-corpus
Method | Pk | WD | edit | #seg/text
reference | 0 | 0 | 0 | 7.89
"noisy random" baseline | 0.3659 | 0.3738 | 1.6492 | 9.46
"exact random" baseline | 0.3417 | 0.3452 | 1.5789 | 7.89
repetitions | 0.3114 | 0.3144 | 1.5907 | 4.93
neighbours | 0.3091 | 0.3129 | 1.5837 | 5.09
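A minimal sketch of the two random baselines described above, with a paragraph-level representation of cut points that we assume for illustration:

```python
import random

def random_segmentation(n_paragraphs, n_ref_breaks, noise=0.0):
    """'Exact random' baseline with noise=0.0: as many breaks as the
    reference, placed at random. 'Noisy random' with noise=0.3: the
    number of breaks is perturbed within +/-30% of the reference."""
    n_breaks = n_ref_breaks
    if noise:
        lo = max(1, round(n_ref_breaks * (1 - noise)))
        hi = round(n_ref_breaks * (1 + noise))
        n_breaks = random.randint(lo, hi)
    # a break is possible after every paragraph except the last one
    candidates = range(1, n_paragraphs)
    return sorted(random.sample(candidates, min(n_breaks, n_paragraphs - 1)))
```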
Another indication of how representative these scores are can be sought in the literature, even though the variety of approaches, inputs and evaluations (real texts or artificial concatenations) calls for caution. The very recent (Chen et al., 2009) operates on a corpus similar to part of ours (city articles in the English Wikipedia): the previous state of the art, represented by (Eisenstein & Barzilay, 2008), reaches a Pk of 0.317 and a WD of 0.376 on that corpus, with the number of segments known; the approach of (Chen et al., 2009), based on topic models enriched with global constraints, reaches in its best configuration the very good scores of 0.28 for Pk and 0.25 for WD, without knowing the number of segments, but positing an upper bound on the number of topics present in the whole corpus (set to 10 or 20 topics), which somewhat limits generalisation.
In view of our results, the global hypothesis of a difference between the two text types THEM and NON-THEM is confirmed fairly clearly, whatever metric is considered, and the chosen segmentation algorithms perform better on the texts of the THEM sub-corpus, even though the variances (not reported in the table) are large. As for assessing differences between methods, (Chen et al., 2009) state that statistical significance tests for this task are not standardised, and report none; those who do run such tests use a t-test without specifying whether it is paired or not (Choi et al., 2001; Galley et al., 2003; Ferret, 2007). Table 9 (results by sub-category, by increasing Pk/WD) shows that the results carry over to the categories composing the two sub-corpora, with the exception of the concepts category, which obtains slightly better results than the cities category. Once again the variances are high. It turns out that our deliberately coarse a priori split (chosen so as not to bias the study too much) could be refined, provided the parameters of what we have so far called the strong or weak thematic character of texts are clearly laid out; but it appears valid.
In this study we have shown that text types have a significant impact on TS, and that this parameter should therefore not be neglected in this task. Nevertheless, while the experiment confirms the initial hypothesis, its outcome must be qualified in view of the actual results on the task. Even though the results are close to the state of the art on the THEM corpus (especially considering that the number of segments is never given), they show very high variances across texts, and could not confirm the contribution of a lexical similarity richer than simple form repetition. Observation of the corpus shows that the data we collected were in the end rather heterogeneous, with sections of very different lengths that posed serious problems to approaches based on an average level of variation. The varying degrees of completion of Wikipédia articles explain in particular the alternation of highly developed paragraphs and paragraphs reduced to a single sentence. As for the reference segmentation, subdivision by sections is not always the right mode of segmentation. Many sections are thematically heterogeneous because the thematic breakdown is done at the level of sub-sections (e.g. a section "Domains influenced by positivism" splits into the sub-sections "medicine", "philosophy", "education", "law", etc.). The point of taking the title structure as the reference segmentation was to obtain annotated data easily, and we were of course aware of the noise this would introduce with respect to the evaluated task. It provided a first analysis that encourages us to revisit these data and move towards a less artificial evaluation. But beyond mere cleaning, which runs the risk of being biased by the objective, one can also frame the problem differently and start from observing the places where the programs cut, in order to determine whether these places are "interpretable", rather than seeking alignment with a problematic reference segmentation. Finally, the question of how to differentiate the texts to be processed also arises. Choosing to compare texts belonging to a single textual genre (the encyclopaedia article) limits their diversity. The distinction we considered lies at the level of the subject matter. It is a first entry point, and not an entirely satisfactory one. The relation we posited beforehand between subject category and organisation type is not systematically verified: while articles about public figures are almost always organised temporally, some notions are nonetheless treated in an at least partially thematic way. A study based this time on a distinction by text genre would make it possible to establish a classification on more reliable criteria, and to contrast texts with more marked behaviour, no doubt reinforcing the contrast already observed.
Figure 1: Representation of the sliding window with -fen=3 and -unit=phrase.
- The score curve obtained is smoothed; we opted for Gaussian smoothing,[3] with two adjustable parameters: the number of iterations and the degree of smoothing. Figure 2 shows the raw and smoothed curves for the Wikipédia article Bulgarie.
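A minimal sketch of this smoothing step, under the assumption that the "degree" parameter controls the kernel width; names are ours:

```python
import numpy as np

def gaussian_smooth(scores, degree=3, iterations=2):
    """Kernel smoothing with a Gaussian kernel: each point is replaced by a
    Gaussian-weighted average of its neighbourhood, and the operation can
    be iterated (defaults mirror the -deg 3 / -it 2 configuration)."""
    scores = np.asarray(scores, dtype=float)
    offsets = np.arange(-degree, degree + 1)
    kernel = np.exp(-(offsets ** 2) / (2.0 * (degree / 2.0) ** 2))
    kernel /= kernel.sum()
    for _ in range(iterations):
        scores = np.convolve(scores, kernel, mode="same")
    return scores
```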
Table 1: Examples of thematic textual organisation.
Table 2 first shows an example of temporal organisation, typical of biographies, which closes with an assessment part. The two other texts (leadership and myth) illustrate a mode of rhetorical progression (on the principle of Swales' 'moves') which, in both cases, organises the presentation of a notion according to a similar argumentative scheme: first define the notion, then present a typology, and finally detail some of its instances.
Laurent Truguet: Youth up to the Revolution, Under the Revolution, The Empire, Under the Monarchy, Assessment
Leadership: Terminology, Types of leadership, Characteristics of leadership, De jure and de facto leadership, The paradigm of multiple leaderships
Myth: Definition, Aspects of myth, Typology and elements of myth, Legacy of the myth
Table 2: Examples of non-thematic textual organisation.
The impact of text types on the TS procedure has rarely been taken into consideration in work implementing this task, with the exception of (Ferret et al., 1998), to the point that, as (Bestgen & Piérard, 2006) deplore, the same algorithms are sometimes applied to a text segmentation task and to the delimitation of concatenated texts. TS experiments are generally carried out on types of texts that intuitively lend themselves to this approach, for instance the encyclopaedic articles about cities in (Chen et al., 2009) or (Adam & Morlane-Hondère, 2009), without any attempt to establish explicitly the nature of the texts that are suited to the task. Our goal is to integrate the text-type parameter into the segmentation task by comparing the performance of a TS system on two groups of texts, chosen according to their propensity to be organised thematically or to obey other presentation principles (rhetorical, temporal). We show in this paper that the TS task is indeed sensitive to the nature of texts, by showing that even a relatively naive approach to the notion of text type brings out significant differences in performance. After giving an overview of current methods in Section 2, we describe our TS system (Section 3), the procedure and evaluation (Section 4), and the results (Section 5).
[2] Project funded by the PRES de Toulouse.
[3] In fact a kernel estimation with a Gaussian kernel; here it amounts to an average over the neighbourhood of each point, weighted according to a Gaussian of the distance to the point.
Figure 2: Example of a curve with smoothing.
Table 3: Parameters of our TS system.
Parameter | Description | Values
-unit | segmentation unit | sentence / block
-bloc | block size | <number of words>
-fen | size of the sliding half-window | <number of blocks>
-it | number of smoothing iterations | <number>
-deg | degree of smoothing | <number>
-lin | weighting of links by Lin score | yes / no
-filtNb | threshold on the maximum number of distinct neighbours per item | none or <number of neighbours>
-filtPos | threshold on the mean distance of neighbours to the item's position | none or <number of tokens>
4 Procedure and evaluation
The hypothesis we wish to validate with this experiment is that resorting to TS is justified for texts whose structure is indeed perceived as thematic, but is not warranted for other modes of textual organisation. We thus want to encourage a better definition of what the object of the TS task can be, and caution against applying it indiscriminately to arbitrary texts.
Characterisation of the corpus used. For this experiment we built two sub-corpora from the April 2007 version of the online encyclopaedia Wikipédia (we chose to select texts that all belong to the corpus on which the lexical resource was built). Articles were extracted automatically, on the basis of selection criteria we fixed. The selection criteria are as follows: to be retained, an article must have at least 1,000 words, at least 4 section titles (which provide the reference segmentation), and at most 2 levels of title depth (a greater depth would have required delicate choices as to the titles retained for the reference segmentation); it must also belong to an established list of categories. We indeed took the level of the categories defined in the encyclopaedia as the criterion for distributing texts into the two sub-corpora. The sub-corpus with strong thematic organisation (THEM corpus) gathers texts devoted to the description of countries, cities and animals, which are known to generally lend themselves well to thematic organisation. The sub-corpus with weak thematic organisation (NON-THEM corpus) gathers biographies, whose organisation is typically temporal, and texts presenting abstract notions, concepts, for which we have shown (tab. 2) that the thematic approach is generally ill-suited. Human intervention is thus concentrated upstream of corpus construction, in the definition of the selection and distribution criteria. No processing is performed downstream (post-selection, cleaning, etc.). The characterisation of the resulting corpora is given in Table 4. For each sub-corpus we give the number of paragraphs, which corresponds to the number of potential segments for our TS system, and the number of first-level titles, i.e. the number of breaks in our reference segmentation.
Table 4: Characterisation of the THEM and NON-THEM corpora.
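A minimal sketch of the selection criteria described above, written as a filter; the names and the input representation are ours:

```python
def keep_article(text, section_titles, max_title_depth, categories, allowed_categories):
    """Return True if an article satisfies the selection criteria:
    >= 1000 words, >= 4 first-level section titles, at most 2 levels of
    title depth, and membership in the established category list."""
    return (len(text.split()) >= 1000
            and len(section_titles) >= 4
            and max_title_depth <= 2
            and any(c in allowed_categories for c in categories))
```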
Development corpus and parameter optimisation. The segmentation procedure described in Section 3 depends on many parameters whose consequences are not always predictable a priori. Many authors set similar parameters according to empirical criteria that are not always explicit. We chose to set aside part of the initial corpus to use it as an explicit development corpus, on which we varied a number of parameters in order to tune the segmentation. To this end, we randomly extracted slightly less than 10% of the corpus gathered initially, taking equal numbers (i.e. 21) of texts from the THEM and NON-THEM sub-corpora (recall that the initial corpus is not perfectly balanced; we balanced the development corpus so as not to favour the majority class).
The variations on the 8 parameters subject to optimisation generated more than 2,000 configurations. We kept the configuration that obtained the best results according to the classical WindowDiff index (noted WD) for comparing segmentations; it is given in Table 5.
Table 5: Selected parameter configuration.
-unit: block; -bloc: 10 words; -fen: 10 blocks; -it: 2; -deg: 3; -lin: no; -filtNb: 10 neighbours max.; -filtPos: 500 tokens.
Evaluation. To evaluate the segmentation results, we take as reference the positions of the first-level titles within the articles; to compare the output of the TS system with this reference, we apply the classical measures for this task: the Pk and WindowDiff indices. These measures are less strict about the positions of segment boundaries than precision and recall, which do not allow judging how close a prediction is to the actual boundary. Both Pk and WD "soften" the evaluation by estimating the average number of correct boundaries within a window of given size projected onto the text. We added a measure proposed by (Bestgen, 2009), which he calls the "generalised Hamming distance" and which is in fact an edit distance with specific coefficients for the insertion/deletion/move costs, normalised by the number of possible cut points. It is noted "edit" in our result tables. The edit distance is supposed to correct certain biases of WD, itself supposed to correct certain biases of Pk; we do not go into the details here, as the measures are fairly consistent with one another on our results.[4] As these measures are often difficult to interpret and compare, Table 6 gives the results for the worst configuration, the average configuration, and the best configuration, which is the one we apply to the rest of our corpus. Note that these are distance measures: the distance from the reference to itself is 0, and a lower score indicates greater proximity to the reference.
Table 6: Results on the development corpus with different parameter configurations.
5 Results and analysis
We applied our TS system, with the parameter configuration optimised on the development corpus, to the two sub-corpora THEM and NON-THEM. Tables 7 (given above) and 8 summarise the results.
Table 8: Results for the NON-THEM sub-corpus
Method | Pk | WD | edit | #seg/text
reference | 0 | 0 | 0 | 8.07
"noisy random" baseline | 0.3569 | 0.3616 | 1.8032 | 6.68
"exact random" baseline | 0.3149 | 0.3181 | 1.5645 | 8.07
repetitions | 0.3612 | 0.3662 | 1.8846 | 5.08
neighbours | 0.3613 | 0.3676 | 1.9291 | 5.16
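For concreteness, here is a minimal sketch of the WindowDiff index described in the Evaluation paragraph above; variable names are ours, and the window size defaults to half the mean reference segment length:

```python
def window_diff(reference, hypothesis, k=None):
    """WindowDiff between two segmentations. reference/hypothesis are
    boolean lists of equal length: True marks a break after that unit.
    Slides a window of size k and counts positions where the two
    segmentations disagree on the number of breaks inside the window."""
    n = len(reference)
    if k is None:
        k = max(1, round(n / (2 * (sum(reference) + 1))))
    errors = 0
    for i in range(n - k):
        if sum(reference[i:i + k]) != sum(hypothesis[i:i + k]):
            errors += 1
    return errors / (n - k)  # 0 means a perfect match with the reference
```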
[1] We will in any case be able to extend the scope of the comparison in a broader study, notably by transposing Ferret's approach to the lexical network induced by distributional similarity.
[4] See (Georgescul et al., 2006) for a discussion of the relevance of TS evaluation procedures.
References
Adam C. & Morlane-Hondère F. (2009). Détection de la cohésion lexicale par voisinage distributionnel : application à la segmentation thématique. In Actes du colloque RECITAL, Senlis, France.
Bestgen Y. (2009). Quel indice pour mesurer l'efficacité en segmentation de textes ? In Actes de TALN'09, Senlis, France.
Bestgen Y. & Piérard S. (2006). Comment évaluer les algorithmes de segmentation automatiques ? Essai de construction d'un matériel de référence. In Actes de TALN : Verbum ex machina, Louvain-la-Neuve, 6, 407-414.
Biber D., Connor U. & Upton T. (2007). Discourse on the move: Using corpus analysis to describe discourse structure. John Benjamins Publishing Co.
Bourigault D. (2007). Un analyseur syntaxique opérationnel : Syntex. CNRS & Université de Toulouse-Le Mirail.
Chen H., Branavan S., Barzilay R. & Karger D. (2009). Content Modeling Using Latent Permutations. Journal of Artificial Intelligence Research, 36, 129-163.
Choi F. Y. Y., Wiemer-Hastings P. & Moore J. (2001). Latent semantic analysis for text segmentation. In Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing, p. 109-117, Pittsburgh.
Eisenstein J. & Barzilay R. (2008). Bayesian unsupervised topic segmentation. In EMNLP '08: Proceedings of the Conference on Empirical Methods in Natural Language Processing, p. 334-343, Morristown, NJ, USA: Association for Computational Linguistics.
Ferret O. (2002). Segmenter et structurer thématiquement des textes par l'utilisation conjointe de collocations et de la récurrence lexicale. In TALN'02 : 9e conférence sur le Traitement Automatique des Langues Naturelles, p. 155-164, Nancy, France.
Ferret O. (2007). Finding document topics for improving topic segmentation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, p. 480-487, Prague, Czech Republic: Association for Computational Linguistics.
Ferret O., Grau B. & Masson N. (1998). Thematic segmentation of texts: two methods for two kinds of texts. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 1, p. 392-396: Association for Computational Linguistics.
Galley M., McKeown K. R., Fosler-Lussier E. & Jing H. (2003). Discourse segmentation of multi-party conversation. In E. Hinrichs & D. Roth, Eds., Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL-03), p. 562-569, Sapporo, Japan.
Georgescul M., Clark A. & Armstrong S. (2006). An Analysis of Quantitative Aspects in the Evaluation of Thematic Segmentation Algorithms. In Proceedings of the 7th SIGdial Workshop on Discourse and Dialogue, p. 144-151, Sydney, Australia: Association for Computational Linguistics.
Hearst M. A. (1997). TextTiling: segmenting text into multi-paragraph subtopic passages. Computational Linguistics, 23(1), 33-64.
Litman D. & Passonneau R. (1995). Combining multiple knowledge sources for discourse segmentation. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, p. 108-115: Association for Computational Linguistics.
Malioutov I. & Barzilay R. (2006). Minimum cut model for spoken lecture segmentation. In ACL-44: Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, p. 25-32, Morristown, NJ, USA: Association for Computational Linguistics.
Péry-Woodley M.-P. & Scott D. (2006). Discours et Document : traitements automatiques. Numéro thématique. Revue T.A.L., 47(2), 7-19.
Swales J. (1990). Genre analysis: English in academic and research settings. New York: Cambridge University Press.
Teufel S. (1999). Argumentative Zoning: Information Extraction from Scientific Text. PhD thesis, University of Edinburgh.
Phonetics of Negative Headshake in Russian Sign Language: A Small-Scale Corpus Study
We analyzed negative headshake found in the online corpus of Russian Sign Language. We found that negative headshake can co-occur with negative manual signs, although most of these signs are not accompanied by it. We applied OpenFace, a Computer Vision toolkit, to extract head rotation measurements from video recordings, and analyzed the headshake in terms of the number of peaks (turns), the amplitude of the turns, and their frequency. We find that such basic phonetic measurements of headshake can be extracted using a combination of manual annotation and Computer Vision, and can be further used in comparative research across constructions and sign languages.
CC-BY-NC 4.0
Phonetics of Negative Headshake in Russian Sign Language: A Small-Scale Corpus Study
Anastasia Chizhikova, HSE University, Moscow, Russia (apchizhikova@edu.hse.ru); Vadim Kimmelman, University of Bergen, Bergen, Norway (vadim.kimmelman@uib.no)
Proceedings of the 10th Workshop on the Representation and Processing of Sign Languages (sign-lang@LREC 2022), Marseille, June 2022
Keywords: negative headshake, nonmanual marking, Computer Vision
1 Introduction
While the importance of nonmanual markers in sign language grammar is well understood (Pfau and Quer, 2010; Wilbur, 2021; Lackner, 2021), only a small number of studies so far have focused on the phonetic properties of nonmanual movements (Baker-Shenk, 1983; De Vos et al., 2009; Weast, 2011; Dachkovsky et al., 2013; Puupponen et al., 2015; Tyrone and Mauk, 2016; Harmon, 2017). An important reason for the scarcity of phonetic investigation of nonmanuals has been methodological: manual annotation of nonmanuals is difficult, time-consuming and not very reliable, while more reliable methods such as Motion Capture are expensive and also very time-consuming in terms of data analysis (Puupponen et al., 2015). Recent advances in Deep Learning have led to significant breakthroughs in Computer Vision (CV): currently, multiple instruments exist that allow automatic detection and tracking of the human body in video recordings, OpenPose being probably the most famous to date (Wei et al., 2016; Cao et al., 2017; Cao et al., 2018). CV has been applied to sign language data especially in the context of automatic sign language recognition and translation (Ko et al., 2018; Koller et al., 2016; Saunders et al., 2020). However, only a few studies have used CV for linguistic analysis of sign language data, and especially for analyzing phonetic properties of nonmanuals (Kimmelman et al., 2020). At the moment, it is not well understood whether existing CV instruments are even suitable for linguistic analysis of sign languages, but it is already clear that extensive testing and adjusting of CV solutions is necessary before they can be applied to sign languages at scale. In this paper, we report the results of an initial investigation of the phonetics of the nonmanual headshake in Russian Sign Language (RSL). We use naturalistic corpus data from the online corpus of Russian Sign Language (Burkova, 2015). We attempt to identify all negative utterances in the corpus, and then manually select the utterances containing negative headshakes.
We then apply the CV tool OpenFace (Baltrusaitis et al., 2018) to extract information about head rotation in these video files, in order to further analyze the phonetic properties of these movements quantitatively. The aim of the study is thus two-fold. First, we describe basic phonetic properties of negative headshake in RSL, which can be a first step towards more detailed research on the phonetics of headshakes in this and other sign languages. Second, we test and discuss the applicability of CV tools for the phonetic analysis of headshake.
2 Negative Headshake in SLs
One of the most common linguistic nonmanuals cross-linguistically is the side-to-side negative headshake (Zeshan, 2006; Pfau, 2008; Oomen and Pfau, 2017).[1] In different sign languages, the headshake can accompany the negative sign alone or spread across parts of the sentence or the whole sentence; in some sign languages (often called non-manually dominant), the headshake alone can express negative polarity, without any manual negative sign. Recent studies based on corpus data have shown that, in naturalistic data, negative headshake can be frequent but by no means obligatory (Johnston, 2018; Kuder et al., 2018).
In a recent study (Rudnev and Kuznetsova, 2021), RSL has been classified as a manually-dominant sign language: negative sentences must contain a manual negative sign. The negative signs almost always occur in the clause-final position, as in (1). Negative headshake is also extensively used, and can also spread, as in (1).
(1) [neg] INDEX-1 THINK NOT
'I did not think.'
Our knowledge of the phonetic properties of negative headshake across sign languages is very limited.[2] In a recent small-scale study, Harmon (2017) described some aspects of the phonetics of headshake in American Sign Language (ASL). She argued that ASL has two main types of headshake: canonical nonmanual negation, which begins with a wide arc and continues with smaller and smaller arcs, and intense negation, which has the same general shape, but with shorter (by 30-50%) arcs of movement. Both types of nonmanual negation can spread, and are generally temporally aligned with sign and sentence boundaries. Despite employing quantitative and CV-related techniques for data extraction, the paper does not report any quantitative results concerning the phonetic properties of the headshake, so it is impossible to compare it to our findings below.
3 Methodology
In order to study the phonetic properties of negative headshake in RSL, we applied the following steps, which we describe in more detail below: (1) searching for negative signs and sentences in the online corpus of RSL (Burkova, 2015); (2) manual identification of segments containing negative headshake; (3) manual annotation of the boundaries of negative headshake and negative manual signs in ELAN (Crasborn and Sloetjes, 2008); (4) extraction of head rotation measurements using OpenFace (Baltrusaitis et al., 2018); (5) quantitative analysis of a subset of the measurements.
3.1 Corpus Data
The online corpus of RSL is a collection of over 230 video recordings produced by 43 RSL signers of different ages and from different regions, filmed mostly between 2010 and 2012 (Burkova, 2015). The total duration of the video recordings is approximately 4 hours 30 minutes, and the corpus contains around 20,000 sign tokens. The corpus is fully available online, but registration is required to access the data. For more details and a case study, see Bauer and Kyuseva (2022).
Most recordings in the corpus are narrative monologues, although some dialogues are also included. Each recording is annotated on 3 tiers: right hand glosses, left hand glosses, and sentence translation, in Russian. The annotations were created in ELAN, but are also accessible and searchable via the online interface of the corpus.
In order to identify negative structures in the data, we searched the ELAN annotation files for words that are used to express negation in Russian, including negative particles (most prominently ne 'not'), negative adverbs and negative pronouns. We then watched the found segments in order to identify (1) whether they were indeed negative structures and (2) whether they contained negative headshake.
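A minimal sketch of this search step, assuming the pympi library for reading ELAN (.eaf) files; the tier name and the word list are illustrative assumptions, not the authors' actual query:

```python
from pympi import Elan

# Hypothetical list of Russian negative words to search for.
NEGATIVE_WORDS = {"не", "нет", "никто", "ничего", "никогда", "нельзя"}

def find_negative_utterances(eaf_path, tier="translation"):
    """Return (start_ms, end_ms, text) for annotations on the given tier
    that contain one of the negative words."""
    eaf = Elan.Eaf(eaf_path)
    hits = []
    for start, end, value in eaf.get_annotation_data_for_tier(tier):
        if any(w in value.lower().split() for w in NEGATIVE_WORDS):
            hits.append((start, end, value))
    return hits
```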
3.2 Boundary Annotation
As mentioned above, the RSL corpus does not contain annotations of the nonmanual component. Because the horizontal position of the head and head movement along the horizontal plane are not exclusively associated with negation, we do not see an obvious way of automatically detecting negative headshake in the data. It might be possible to develop an ML solution, but we do not yet have sufficient data to train a model for automatic identification of headshake (see also the discussion in Section 5.3). Thus, we decided to manually annotate the boundaries of headshake in the segments that we had selected before proceeding to further analysis of the data.
We used the following criteria. We consider the onset of the headshake to occur on the first frame of a leftward or rightward turn of the head away from the position that was maintained in the previous context. We consider the offset of the headshake to occur on the last frame of a leftward or rightward turn before the head is maintained in some position afterwards. Note that, in both cases, the maintained position is not always forward-facing, as head turns can be used for functions not related to negation (see further discussion in Section 5.2). This procedure is subjective and based on laborious visual inspection of the data. In fact, in order to test reliability, the two authors independently annotated 65 instances of headshake, and found only 68% raw overlap between the annotations. However, if manual annotations are combined with visual inspection of the results of CV data extraction, it is possible to identify the boundaries more reliably (Section 5.2).
We also annotated the boundaries of the manual negative signs to explore their alignment with the boundaries of the headshake. We used commonly accepted criteria (as used, for example, in the corpus of Sign Language of the Netherlands): the sign starts in the frame where the (initial) handshape is fully formed and the initial location is reached, and ends in the frame where the hand starts moving away from the final location and/or the handshape starts to change from the (final) handshape.
3.3 Measurement Extraction and Analysis
We used a Python script to cut video fragments based on annotation boundaries extracted from ELAN annotation files. These fragments served as input to OpenFace, a toolkit for face landmark detection, head pose estimation, and facial action unit recognition (Baltrusaitis et al., 2018). Importantly, this software reconstructs a 3D model of the face from 2D video recordings, and estimates not only facial landmark locations but also head position along the 3 axes, in radians. Most relevant for us is the estimation of head rotation along the horizontal axis (also known as yaw), as negative headshake is rotation of the head on this axis.
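A minimal sketch of this extraction step, assuming OpenFace's FeatureExtraction binary is installed and on the PATH; file names are placeholders, and the column handling reflects the common CSV layout of OpenFace 2.x output:

```python
import subprocess
import pandas as pd

# Run OpenFace on one cut video fragment; results land in processed/.
subprocess.run(["FeatureExtraction", "-f", "fragment.mp4",
                "-out_dir", "processed"], check=True)

df = pd.read_csv("processed/fragment.csv")
df.columns = df.columns.str.strip()   # OpenFace pads some column names
yaw = df["pose_Ry"]                   # horizontal head rotation, in radians
time = df["timestamp"]                # seconds from the start of the clip
```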
We used the find_peaks function from the Python scipy module (Virtanen et al., 2020) to automatically detect peaks in the estimated horizontal rotation of the head. Because the data is noisy, and even minimal head movements clearly not classifiable as head turns were detected, we applied an empirically calibrated filter to ignore any peaks that differed from their neighbors by less than 0.01 radians (see Figure 1 for an illustration of the process). For each headshake interval, we calculated the following measures:
- number of peaks;
- frequency: (n_peaks - 1) / (duration between the first and last peaks);
- the maximal amplitude.
The amplitude was calculated as the difference between the maximal and minimal peak for the interval. This is illustrated as the red dotted line in Figure 1. The script used for cutting video fragments and extracting measurements from the data can be found here: https://github.com/nastyachizhikova/Negative_Headshake_Phonetics_RSL.
For the quantitative analysis, we only focused on the headshake that co-occurs with the three most frequent manual negative signs (see Section 4). We explore the distributions of the main phonetic measures above in these three types of constructions graphically and with basic descriptive statistics, using R and RStudio (R Core Team, 2019; RStudio Team, 2019).
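A minimal sketch of these three measures, using scipy's prominence criterion as a stand-in for the authors' neighbor-difference filter; function and variable names are ours:

```python
import numpy as np
from scipy.signal import find_peaks

def headshake_measures(yaw, time, min_prominence=0.01):
    """Compute (number of turns, maximal amplitude, frequency) for one
    headshake interval. Turns are extrema of the yaw signal in either
    direction; min_prominence (radians) filters out tiny fluctuations."""
    yaw, time = np.asarray(yaw, float), np.asarray(time, float)
    maxima, _ = find_peaks(yaw, prominence=min_prominence)
    minima, _ = find_peaks(-yaw, prominence=min_prominence)
    peaks = np.sort(np.concatenate([maxima, minima]))
    n = len(peaks)
    amplitude = float(yaw[peaks].max() - yaw[peaks].min()) if n else 0.0
    # frequency = (n_peaks - 1) / duration between first and last peak
    freq = (n - 1) / float(time[peaks[-1]] - time[peaks[0]]) if n > 1 else 0.0
    return n, amplitude, freq
```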
4 Results
4.1 Basic Properties of RSL Negation
Using the methods discussed above, we found 663 potential instances of negative signs in the RSL corpus. However, unexpectedly, a vast majority of these examples (476, 72%) did not contain visible headshake. This confirms earlier findings that RSL is a manually-dominant sign language, but it is still quite surprising that only a minority of negative sentences are also marked with headshake.[3]
Zooming in on the 187 examples that contain negative headshake, we can observe that a wide variety of manual negative markers are used in the data. The three most common types of manual negative signs are NEG, a side-to-side shaking of one or both palms used as the negative response sign 'no' or as a sentential negation (example (2), Figure 2, top line); NEG.EXIST, the negative existential, which can also be used as a sentential negation marker in combination with verbs (example (3), Figure 2, second line); and the class of irregular negative verbs (Zeshan, 2006), that is, verbs which have dedicated negative forms in RSL, such as NOT.KNOW and NOT.WANT (example (4), Figure 2, third line). Another frequent negative marker is the negative particle NE, which almost always expresses sentential negation and directly follows the verb, often cliticizing to it, as in example (5) (Figure 2, bottom line). It formally resembles the NEG sign, but contains only a single movement of the hand.
(5) [neg] NOBODY MEET NE
'Nobody is meeting me.'
As also discussed in earlier research, negative headshake can accompany the negative manual sign, but it can also optionally spread, as in (1). In our data, spreading of the headshake was quite rare: it occurred in only 13% of the analyzed cases. In the cases without spreading, we observed remarkably precise alignment between the headshake and the manual negative sign. If we look in detail at the alignment between the headshake and the phases of the manual sign (Kita et al., 1998), the most common pattern is the following. The onset of the headshake coincides with the onset of the preparation phase of the negative sign, that is, when the hands start a transitional movement from a resting position or a preceding sign towards the negative manual sign, and the offset of the headshake coincides with the end of the stroke of the negative manual sign. Consider Figure 3, which contains several screenshots from example (4). The first frame shows the last frame of the sign INDEX-1, with the head in the neutral position. The second frame shows the retraction phase of this sign, initiating the transitional movement towards the manual negative sign, while the head starts turning to the left. The third frame is in the middle of the transitional movement: the handshape of the negative sign NOT.KNOW is visible but not fully formed, and the initial location of the sign is not yet reached, while the head continues the turn. The fourth frame is the initial frame of the stroke of the negative sign, where the handshape and the initial location are fully formed, and the head starts a movement to the right. The fifth frame is the last frame of the stroke of the negative sign: the hands are still in the final location, and the head continues the turn of the headshake. Finally, in the sixth frame, the hands start moving towards the next sign, so this is again transitional movement, and the head starts another movement, a combination of turning and tilting, that is not part of the negative headshake. In some cases the onset of the headshake is synchronized with the onset of the stroke of the manual sign, but this is less common.
4.2 Phonetic Properties of Negative Headshake
For the quantitative analysis of the phonetic properties of negative headshake, we focused on the three most common types of manual negative signs demonstrated in (2)-(4) above. In total, we analyzed 68 sentences containing negative headshakes. The first measure that we considered is the number of peaks, that is, the number of turns of the head, where a movement towards one side is counted as a single turn. Most frequently, the negative signs were accompanied by 1 or 2 turns, although 3-5 turns were also quite common, and one instance contained 14 turns. Looking at the three types of manual negative signs, some tendencies can be observed.[4] Specifically, while both NEG and NEG.EXIST most often co-occurred with a single turn of the head, irregular negation most often co-occurred with two turns, and never with one. Concerning the amplitude of the turns, again, the three types were very similar. In general, the mean amplitude is 0.279 radians (16 degrees), and the median amplitude 0.23 radians (13.5 degrees), so the turns are relatively small. Irregular negation seems to be accompanied by headshake of a lower amplitude than the other groups, although the difference is not significant. The final measurement we looked at was the frequency of turns, measured as the number of turns per second. The mean frequency was 7.9 turns per second. While no significant differences between the groups were found, the average frequency for the headshake co-occurring with the NEG.EXIST sign was slightly higher than for the other two types. Finally, we visually explored the plots of the head position extracted from the video recordings. When looking at the cases with multiple peaks, we were interested in whether we can observe the pattern previously reported for ASL, namely that the headshake starts with a wide arc, and that the following arcs decrease in amplitude.
We indeed found many examples that conform to this pattern, as in Figure 4, upper panel. However, in some cases no decrease in amplitude was visible, and/or the first movement did not have the highest amplitude, as in Figure 4, lower panel.
5 Discussion and Outlook
5.1 Headshake in RSL
An important finding of this study is that headshake is a relatively infrequent marker of negation in RSL. Not only is headshake alone not enough to negate an utterance (a manual sign is required), but also under 30% of the negative structures in the corpus contain headshake. However, it is still important to be able to analyze the phonetic properties of headshake, which we attempt to do in this study.
We found that negative headshake in RSL most frequently contains only one or two turns of the head. This is also related to the fact that, in the majority of cases, the headshake does not spread from the negative manual sign. On average, the head turns 16 degrees to the side when performing the headshake; the frequency of head turns in negative headshake is around 8 per second. These measurements in isolation are not very useful. However, they open the perspective of comparative phonetic research. In a pilot follow-up, we looked at a small number of elicited RSL examples containing negation, and observed headshake with significantly larger amplitudes and numbers of peaks than in the naturalistic corpus data. This is not completely unexpected, but should be investigated further.
Furthermore, while we did not find significant differences in the phonetic properties of headshake accompanying the three types of negative signs, we observed some indications that there might be differences between them. For example, it seems that headshake with irregular negation typically has more peaks (at least two) and a smaller amplitude. It might be the case that different phonological types of negative headshake exist in RSL. Unfortunately, we do not have a dataset that is sufficiently large to investigate this further.
Finally, similar measurements of the phonetic properties of negative headshake can be conducted in the future for other sign languages with sufficiently large published corpora. It will thus be possible to test whether the phonetics of headshake varies cross-linguistically.
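As an illustration of how such group comparisons can be organized, here is a minimal sketch with invented example values (not the study's data), grouping the three measures by sign type:

```python
import pandas as pd

# Hypothetical per-headshake measurements; the values are illustrative only.
df = pd.DataFrame({
    "sign_type": ["NEG", "NEG", "NEG.EXIST", "IRREGULAR", "IRREGULAR"],
    "peaks":     [1, 2, 1, 2, 3],
    "amplitude": [0.31, 0.22, 0.27, 0.18, 0.20],   # radians
    "frequency": [7.5, 8.1, 9.0, 7.8, 7.2],        # turns per second
})

# Descriptive statistics per sign type, as in the comparisons above.
print(df.groupby("sign_type").agg(["mean", "median"]))
```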
5.2 Applicability of CV
An important goal of this study was to test the applicability of CV to the phonetic analysis of nonmanuals in sign languages, specifically to headshake. The measurements of head rotation extracted with OpenFace agree with our perception of head rotation in the recordings. In other words, whenever a head rotation is visible in the recording, it is also visible in the curve representing the horizontal rotation of the head extracted with OpenFace. Whenever there is a peak in the movement (the head reaches the maximal degree of turning and starts moving in the opposite direction), this peak is also visible in the graph. Thus, OpenFace measurements can be used to identify the number of peaks and to calculate the frequency of rotations.
The creators of OpenFace (Baltrusaitis et al., 2018) report that the absolute mean error for head rotation in their model is 2.4 degrees. It is useful to relate this to the mean headshake amplitude detected in our data, which is 16 degrees, and the standard deviation, which is 11.2. The mean error is thus around 0.2 SD of the headshake amplitude we found. This means that OpenFace measurements can be used to estimate the amplitude of headshake to this degree of certainty. However, if very small differences in amplitude are to be investigated, the measurement error can become an obstacle. We do not know of any research indicating that very minimal differences in headshake amplitude in sign languages can be meaningful, but the lack of such findings can also be due to the lack of research at that level of precision.
Finally, while OpenFace seems to produce good measurements of head rotation for video recordings, these measurements cannot easily be used to detect negative headshake in the data. As mentioned above, head position can be used for many different purposes in addition to expressing negation; thus, a non-neutral position, or even a sequence of non-neutral positions, does not necessarily indicate a headshake. This is illustrated in Figure 5, which shows a large amount of horizontal head movement, although only a small part of the utterance actually contains headshake: the initial part of the head movement is in fact due to the signer imitating a person looking for something. However, it appears that one can combine measurements extracted with OpenFace and manual inspection of the video recordings. Manual inspection can help identify roughly where headshake occurs, and OpenFace measurements can then be used to detect its boundaries more precisely and to measure the amplitude.
Figure 5: Example of head rotation in RSL. X-axis: time in seconds; y-axis: rotation in radians. Red lines: boundaries of the negative headshake.
5.3 Comparison to Other Types of Headshake
An issue related to the applicability of CV is comparing negative headshake to other types of headshake produced by RSL signers, and also comparing headshake produced by RSL signers to gestural headshake produced by, e.g., speakers of Russian, in terms of phonetic characteristics. Such a comparison is necessary for quantitatively testing the claim in the literature that negative headshake in sign languages is different from gestural headshake, and that it is more grammaticalized (Pfau, 2008). Some recent corpus-based studies in fact directly question this conclusion, arguing that headshake produced by signers can be formally and functionally similar to headshake produced by non-signers (Johnston, 2018).
For the current study, we did not have the resources to compare negative headshake in RSL to headshake with other functions, or to headshake produced by non-signers. However, we think that the general methodology of using OpenFace to extract measurements of head rotation is fully applicable to such a comparison in the future. Furthermore, it seems conceptually possible and realistic to use the output of OpenFace and Machine Learning to detect headshake in the data automatically, as the task of detecting headshake (vs. lack of headshake) is intuitively easier than distinguishing negative headshake (vs. other uses) based on measurements of head rotation alone. This automatic detection would likely need to be followed up by manual classification of the detected headshakes, but it can still increase the speed of data collection and therefore sample sizes in future studies.
Figure 1: Top: peak identification before filtration. Bottom: peak identification after filtration and amplitude calculation.
Figure 2: RSL signs NEG, NEG.EXIST, NOT.KNOW, NE from the examples.
Figure 3: Selected frames from example (4); see the text for details.
Figure 4: Example shapes of negative head movement in RSL. X-axis: time in seconds; y-axis: rotation in radians. Red lines: boundaries of the movement based on manual annotation.
Footnotes
[1] In some sign languages, backward head tilt is also used to mark negation, but the negative headshake is typically also attested (Zeshan, 2006). See also Coerts (1992) for some information on negative headshake in Sign Language of the Netherlands.
[2] Some research has also been done on formal aspects of negative headshake in co-speech gesture (Harrison, 2014).
[3] This is not to say that all the cases without negative headshake were unmarked nonmanually. Other nonmanuals associated with negation, such as furrowed eyebrows and lowered mouth corners, did occur, but we did not analyze them further.
[4] None of the comparisons discussed in this section are statistically significant, based on mixed effects regression models with signers as random effects. Given the very small size of the dataset this is not surprising; but it does mean that all the discussed tendencies are only indications for future research.
Bibliographical References
Baker-Shenk, C. L. (1983). A microanalysis of the non-manual components of questions in American Sign Language. Doctoral dissertation, University of California, Berkeley.
Baltrusaitis, T., Zadeh, A., Lim, Y. C., and Morency, L.-P. (2018). OpenFace 2.0: Facial behavior analysis toolkit. In 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), pages 59-66. IEEE.
Bauer, A. and Kyuseva, M. (2022). New Insights Into Mouthings: Evidence From a Corpus-Based Study of Russian Sign Language. Frontiers in Psychology, 12:779958. DOI: 10.3389/fpsyg.2021.779958.
Cao, Z., Simon, T., Wei, S.-E., and Sheikh, Y. (2017). Realtime multi-person 2D pose estimation using part affinity fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Cao, Z., Hidalgo, G., Simon, T., Wei, S.-E., and Sheikh, Y. (2018). OpenPose: realtime multi-person 2D pose estimation using Part Affinity Fields. arXiv preprint arXiv:1812.08008.
Coerts, J. (1992). Nonmanual Grammatical Markers. An Analysis of Interrogatives, Negation and Topicalisation in Sign Language of the Netherlands. Doctoral dissertation, University of Amsterdam, Amsterdam.
Crasborn, O. and Sloetjes, H. (2008). Enhanced ELAN functionality for sign language corpora. In Proceedings of the 3rd Workshop on the Representation and Processing of Sign Languages: Construction and Exploitation of Sign Language Corpora, pages 39-43. ELRA, Paris.
Crasborn, O., Zwitserlood, I., and Ros, J. (2008). Corpus NGT. An open access digital corpus of movies with annotations of Sign Language of the Netherlands.
Dachkovsky, S., Healy, C., and Sandler, W. (2013). Visual intonation in two sign languages. Phonology, 30(2):211-252. DOI: 10.1017/S0952675713000122.
De Vos, C., van der Kooij, E., and Crasborn, O. (2009). Mixed signals. Combining linguistic and affective functions of eye brows in questions in Sign Language of the Netherlands. Language and Speech, 52(2/3):315-339.
Harmon, J. (2017). Simultaneous articulation as a window into structure: nonmanuals in ASL. In Nee, J., et al., editors, Proceedings of the Forty-Third Annual Meeting of the Berkeley Linguistics Society, volume I, pages 121-144. Berkeley Linguistics Society, Berkeley.
Harrison, S. (2014). Head shakes: Variation in form, function, and cultural distribution of a head movement related to "no". In Müller, C., et al., editors, Handbücher zur Sprach- und Kommunikationswissenschaft / Handbooks of Linguistics and Communication Science (HSK) 38/2, pages 1496-1501. De Gruyter Mouton. DOI: 10.1515/9783110302028.1496.
Johnston, T. (2018). A corpus-based study of the role of headshaking in negation in Auslan (Australian Sign Language): Implications for signed language typology. Linguistic Typology, 22(2):185-231. DOI: 10.1515/lingty-2018-0008.
Kimmelman, V., Imashev, A., Mukushev, M., and Sandygulova, A. (2020). Eyebrow position in grammatical and emotional expressions in Kazakh-Russian Sign Language: A quantitative study. PLOS ONE, 15(6). DOI: 10.1371/journal.pone.0233731.
Kita, S., van Gijn, I., and van der Hulst, H. (1998). Movement phases in signs and co-speech gestures, and their transcription by human coders. In Wachsmuth, I., et al., editors, Gesture and Sign Language in Human-Computer Interaction, volume 1371, pages 23-35. Springer Berlin Heidelberg, Berlin, Heidelberg.
Ko, S.-K., Son, J. G., and Jung, H. (2018). Sign language recognition with recurrent neural network using human keypoint detection. In Proceedings of the 2018 Conference on Research in Adaptive and Convergent Systems, pages 326-328.
Koller, O., Ney, H., and Bowden, R. (2016). Deep hand: How to train a CNN on 1 million hand images when your data is continuous and weakly labelled. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3793-3802.
Kuder, A., Filipczak, J., Mostowski, P., Rutkowski, P., and Johnston, T. (2018). What Corpus-Based Research on Negation in Auslan and PJM Tells Us About Building and Using Sign Language Corpora. In Proceedings of the 8th Workshop on the Representation and Processing of Sign Languages: Involving the Language Community, pages 101-106. ELRA, Paris.
Kuznetsova, A., Imashev, A., Mukushev, M., Sandygulova, A., and Kimmelman, V. (2021). Using Computer Vision to Analyze Non-manual Marking of Questions in KRSL. In Proceedings of the 1st International Workshop on Automatic Translation for Signed and Spoken Languages (AT4SSL), pages 49-59, Virtual. Association for Machine Translation in the Americas.
Lackner, A. (2021). Nonmanuals in sign languages: a research desideratum. Grazer Linguistische Studien, pages 1-27. DOI: 10.25364/04.48:2021.93.1.
Oomen, M. and Pfau, R. (2017). Signing not (or not): A typological perspective on standard negation in Sign Language of the Netherlands. Linguistic Typology, 21(1):1-51. DOI: 10.1515/lingty-2017-0001.
Pfau, R. and Quer, J. (2010). Nonmanuals: their prosodic and grammatical roles. In Brentari, D., editor, Sign Languages, pages 381-402. Cambridge University Press, Cambridge.
Pfau, R. (2008). The grammar of headshake: a typological perspective on German Sign Language negation. Linguistics in Amsterdam, 1:37-74.
Puupponen, A., Wainio, T., Burger, B., and Jantunen, T. (2015). Head movements in Finnish Sign Language on the basis of Motion Capture data: A study of the form and function of nods, nodding, head thrusts, and head pulls. Sign Language & Linguistics, 18(1):41-89. DOI: 10.1075/sll.18.1.02puu.
R Core Team. (2019). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.
RStudio Team. (2019). RStudio: integrated development environment for R. RStudio, Inc., Boston, MA.
Rudnev, P. and Kuznetsova, A. (2021). Linearization constraints on sentential negation in Russian Sign Language are prosodic. Sign Language & Linguistics, 24(2):259-273. DOI: 10.1075/sll.20007.rud.
Saunders, B., Camgoz, N. C., and Bowden, R. (2020). Everybody Sign Now: Translating Spoken Language to Photo Realistic Sign Language Video.
Tyrone, M. E. and Mauk, C. E. (2016). The Phonetics of Head and Body Movement in the Realization of American Sign Language Signs. Phonetica, 73(2):120-140. DOI: 10.1159/000443836.
Virtanen, P., Gommers, R., Oliphant, T. E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., van der Walt, S. J., Brett, M., Wilson, J., Millman, K. J., Mayorov, N., Nelson, A. R. J., Jones, E., Kern, R., Larson, E., Carey, C. J., Polat, I., Feng, Y., Moore, E. W., VanderPlas, J., Laxalde, D., Perktold, J., Cimrman, R., Henriksen, I., Quintero, E. A., Harris, C. R., Archibald, A. M., Ribeiro, A. H., Pedregosa, F., van Mulbregt, P., and SciPy 1.0 Contributors (2020). SciPy 1.0: Fundamental algorithms for scientific computing in Python. Nature Methods. DOI: 10.1038/s41592-019-0686-2.
17SciPy 1.0: Fundamental algorithms for scientific computing in python. Nature Methods, 17:261±272. DOI: 10.1038/s41592-019-0686-2. American Sign Language Tone and Intonation: A Phonetic Analysis of Eyebrow Properties. T Weast, 10.1515/9781614510680.203Channon, R. et al.,De GruyterBerlin, Bostoneditors, Formational Units in Sign LanguagesWeast, T. (2011). American Sign Language Tone and Intonation: A Phonetic Analysis of Eye- brow Properties. In Channon, R. et al., edi- tors, Formational Units in Sign Languages, pages 203±226. De Gruyter, Berlin, Boston. DOI: 10.1515/9781614510680.203. Convolutional pose machines. S.-E Wei, V Ramakrishna, T Kanade, Y Sheikh, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)Wei, S.-E., Ramakrishna, V., Kanade, T., and Sheikh, Y. (2016). Convolutional pose machines. In Pro- ceedings of the IEEE Conference on Computer Vi- sion and Pattern Recognition (CVPR), pages 4724± 4732. editors, The Routledge Handbook of Theoretical and Experimental Sign Language Research. R Wilbur, Quer, J., et al.,Routledge, London; New YorkNon-manual markers: theoretical and experimental perspectivesWilbur, R. (2021). Non-manual markers: theoretical and experimental perspectives. In Quer, J., et al., ed- itors, The Routledge Handbook of Theoretical and Experimental Sign Language Research, pages 530± 565. Routledge, London; New York. Interrogative and negative constructions in sign languages. U Zeshan, Number 1 in Sign Language Typology Series. Nijmegen. 7Ishara PressLanguage Resource ReferencesZeshan, U., editor. (2006). Interrogative and negative constructions in sign languages. Number 1 in Sign Language Typology Series. Ishara Press, Nijmegen. 7. Language Resource References Russian Sign Language Corpus. S Burkova, Burkova, S. (2015). Russian Sign Language Corpus.
11,849,431
Probabilistic Lexical Generalization for French Dependency Parsing
This paper investigates the impact on French dependency parsing of lexical generalization methods beyond lemmatization and morphological analysis. A distributional thesaurus is created from a large text corpus and used for distributional clustering and WordNet automatic sense ranking. The standard approach for lexical generalization in parsing is to map a word to a single generalized class, either replacing the word with the class or adding a new feature for the class. We use a richer framework that allows for probabilistic generalization, with a word represented as a probability distribution over a space of generalized classes: lemmas, clusters, or synsets. Probabilistic lexical information is introduced into parser feature vectors by modifying the weights of lexical features. We obtain improvements in parsing accuracy with some lexical generalization configurations in experiments run on the French Treebank and two out-of-domain treebanks, with slightly better performance for the probabilistic lexical generalization approach compared to the standard single-mapping approach.
[ 5637889, 1044865, 7490434, 5957770, 15567883, 6684426, 59829005, 7456408, 383404, 13421101, 38744114, 4328886, 9904828, 41415, 1359050, 18297290, 10986188, 8375885, 39375316, 42198508, 12061046, 15698938 ]
Probabilistic Lexical Generalization for French Dependency Parsing

Enrique Henestroza Anguiano (enrique.henestrozaanguiano@inria.fr) and Marie Candito (marie.candito@linguist.jussieu.fr)
Alpage, Université Paris Diderot / INRIA, Paris, France

Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, Jeju, Republic of Korea, July 2012. Copyright 2012 Association for Computational Linguistics.

This paper investigates the impact on French dependency parsing of lexical generalization methods beyond lemmatization and morphological analysis. A distributional thesaurus is created from a large text corpus and used for distributional clustering and WordNet automatic sense ranking. The standard approach for lexical generalization in parsing is to map a word to a single generalized class, either replacing the word with the class or adding a new feature for the class. We use a richer framework that allows for probabilistic generalization, with a word represented as a probability distribution over a space of generalized classes: lemmas, clusters, or synsets. Probabilistic lexical information is introduced into parser feature vectors by modifying the weights of lexical features. We obtain improvements in parsing accuracy with some lexical generalization configurations in experiments run on the French Treebank and two out-of-domain treebanks, with slightly better performance for the probabilistic lexical generalization approach compared to the standard single-mapping approach.

Introduction
In statistical, data-driven approaches to natural language syntactic parsing, a central problem is that of accurately modeling lexical relationships from potentially sparse counts within a training corpus. Our particular interests are centered on reducing lexical data sparseness for linear classification approaches to dependency parsing. In these approaches, linear models operate over feature vectors that generally represent syntactic structure within a sentence, and feature templates are defined in part over the word forms of one or more tokens in a sentence. Because treebanks used for training are often small, lexical features may appear relatively infrequently during training, especially for languages with richer morphology than English. This may, in turn, impede the parsing model's ability to generalize well outside of its training set with respect to lexical features.

Past approaches for achieving lexical generalization in dependency parsing have used WordNet semantic senses in parsing experiments for English (Agirre et al., 2011), and word clustering over large corpora in parsing experiments for English (Koo et al., 2008) as well as for French (Candito et al., 2010b). These approaches map each word to a single corresponding generalized class (synset or cluster), and integrate generalized classes into parsing models in one of two ways: (i) the replacement strategy, where each word form is simply replaced with a corresponding generalized class; (ii) a strategy where an additional feature is created for the corresponding generalized class. Our contribution in this paper is applying probabilistic lexical generalization, a richer framework for lexical generalization, to dependency parsing.
Each word form is represented as a categorical distribution over a lexical target space of generalized classes, for which we consider the spaces of lemmas, synsets, and clusters. The standard single-mapping approach from previous work can be seen as a subcase: each categorical distribution assigns a probability of 1 to a single generalized class. The method we use for introducing probabilistic information into a feature vector is based on that used by Bunescu (2008), who tested the use of probabilistic part-of-speech (POS) tags through an NLP pipeline.

In this paper, we perform experiments for French that use the replacement strategy for integrating generalized classes into parsing models, comparing the single-mapping approach for lexical generalization with our probabilistic lexical generalization approach. In doing so, we provide first results on the application to French parsing of WordNet automatic sense ranking (ASR), using the method of McCarthy et al. (2004). For clustering we deviate from most previous work, which has integrated Brown clusters (Brown et al., 1992) into parsing models, and instead use distributional lexical semantics to create a distributional thesaurus (used for probabilistic generalization in the lemma space and for ASR calculation) and to perform hierarchical agglomerative clustering (HAC). Though unlexicalized syntactic HAC clustering has been used to improve English dependency parsing (Sagae and Gordon, 2009), we provide first results on using distributional lexical semantics for French parsing. We also include an out-of-domain evaluation on medical and parliamentary text in addition to an in-domain evaluation.

In Section 2 we describe the lexical target spaces used in this paper, as well as the method of integrating probabilistic lexical information into a feature vector for classification. In Section 3 we discuss dependency structure and transition-based parsing. In Section 4 we present the experimental setup, which includes our parser implementation, the construction of our probabilistic lexical resources, and evaluation settings. We report parsing results both in-domain and out-of-domain in Section 5, we provide a summary of related work in Section 6, and we conclude in Section 7.

Probabilistic Lexical Target Spaces
Using terms from probability theory, we define a lexical target space as a sample space Ω over which a categorical distribution is defined for each lexical item in a given source vocabulary. Because we are working with French, a language with relatively rich morphology, we use lemmas as the base lexical items in our source vocabulary. The outcomes contained in a sample space represent generalized classes in a target vocabulary. In this paper we consider three possible target vocabularies, with corresponding sample spaces: Ω_l for lemmas, Ω_s for synsets, and Ω_c for clusters.

Ω_l Lemma Space
In the case of the lemma space, the source and target vocabularies are the same. To define an appropriate categorical distribution for each lemma, one where the possible outcomes also correspond to lemmas, we use a distributional thesaurus that provides similarity scores for pairs of lemmas. Such a thesaurus can be viewed as a similarity function D(x, y), where x, y ∈ V and V is the vocabulary for both the source and target spaces.
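Concretely, such a thesaurus can be stored as a nested map from each lemma to its scored neighbors. The sketch below is our own illustration (in Python, with invented entries), not part of the paper:

```python
# A minimal sketch (ours, not the authors') of how the similarity function
# D(x, y) can be realized in code: the thesaurus is a nested dict mapping
# each lemma to its scored neighbors. The entries are invented examples.
thesaurus = {
    "avocat": {"juge": 0.31, "conseil": 0.27, "mangue": 0.12},
    "manger": {"devorer": 0.29, "avaler": 0.22, "boire": 0.18},
}

def D(x, y):
    """Distributional similarity of lemmas x and y; 0.0 for unknown pairs."""
    return thesaurus.get(x, {}).get(y, 0.0)
```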
The simplest way to define a categorical distribution over Ω_l, for a lemma x ∈ V, would be to use the following probability mass function p_x:

p_x(y) = D(x, y) / Σ_{y′ ∈ V} D(x, y′)    (1)

One complication is the identity similarity D(x, x): although it can be set equal to 1 (or the similarity given by the thesaurus, if one is provided), we choose to assign a pre-specified probability mass m to the identity lemma, with the remaining mass used for generalization across other lemmas. Additionally, in order to account for noise in the thesaurus, we restrict each categorical distribution to a lemma's k-nearest neighbors. The probability mass function p_x over the space Ω_l that we use in this paper is finally as follows:

p_x(y) = m                                                 if y = x
p_x(y) = (1 − m) · D(x, y) / Σ_{y′ ∈ N_x(k)} D(x, y′)      if y ∈ N_x(k)
p_x(y) = 0                                                 otherwise    (2)

Ω_s Synset Space
In the case of the synset space, the target vocabulary contains synsets from the Princeton WordNet sense hierarchy (Fellbaum, 1998). To define an appropriate categorical distribution over synsets for each lemma x in our source vocabulary, we first use the WordNet resource to identify the set S_x of different senses of x. We then use a distributional thesaurus to perform ASR, which determines the prevalence with respect to x of each sense s ∈ S_x, following the approach of McCarthy et al. (2004). Representing the thesaurus as a similarity function D(x, y), letting N_x(k) be the set of k-nearest neighbors for x, and letting W(s_1, s_2) be a similarity function over synsets in WordNet, we define a prevalence function R_x(s) as follows:

R_x(s) = Σ_{y ∈ N_x(k)} D(x, y) · ( max_{s′ ∈ S_y} W(s, s′) / Σ_{t ∈ S_x} max_{s′ ∈ S_y} W(t, s′) )    (3)

This function essentially weights the semantic contribution that each distributionally-similar neighbor adds to a given sense for x. With the prevalence scores of each sense for x having been calculated, we use the following probability mass function p_x over the space Ω_s, where S_x(k) is the set of k-most prevalent senses for x:

p_x(s) = R_x(s) / Σ_{s′ ∈ S_x(k)} R_x(s′)    if s ∈ S_x(k)
p_x(s) = 0                                   otherwise    (4)

Note that the first-sense ASR approach to using WordNet synsets for parsing, which has been previously explored in the literature (Agirre et al., 2011), corresponds to setting k=1 in Equation 4.

Ω_c Cluster Space
In the case of the cluster space, any approach to word clustering may be used to create a reduced target vocabulary of clusters. Defining a categorical distribution over clusters would be interesting in the case of soft clustering of lemmas, in which a lemma can participate in more than one cluster, but we have not yet explored this clustering approach. In this paper we limit ourselves to the simpler hard-clustering HAC method, which uses a distributional thesaurus and iteratively joins two clusters together based on the similarities between lemmas in each cluster. We end up with a simple probability mass function p_x over the space Ω_c for a lemma x with corresponding cluster c_x:

p_x(c) = 1    if c = c_x
p_x(c) = 0    otherwise    (5)
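The following sketch, our own rendering rather than the authors' code, shows how the distributions of Equations 2-4 could be computed. The thesaurus format, the senses table, and the synset-similarity function W are all assumed interfaces:

```python
# Sketch of Equations 2-4 (our rendering; interfaces are assumptions):
# `thesaurus` maps a lemma to {neighbor: similarity}, `senses` maps a lemma
# to a list of synsets, and `W(s1, s2)` is a synset similarity function.
def nearest_neighbors(x, thesaurus, k):
    sims = thesaurus.get(x, {})
    return sorted((y for y in sims if y != x), key=sims.get, reverse=True)[:k]

def lemma_distribution(x, thesaurus, k=4, m=0.5):
    """Equation 2: identity mass m on x, the rest over x's k nearest neighbors."""
    neighbors = nearest_neighbors(x, thesaurus, k)
    total = sum(thesaurus[x][y] for y in neighbors)
    if total == 0.0:
        return {x: 1.0}  # no usable neighbors: fall back to the identity lemma
    dist = {x: m}
    for y in neighbors:
        dist[y] = (1.0 - m) * thesaurus[x][y] / total
    return dist

def sense_distribution(x, senses, thesaurus, W, k=8):
    """Equations 3-4: ASR prevalence scores R_x(s), renormalized over senses.

    For brevity this normalizes over all senses of x rather than only the
    k most prevalent ones (the S_x(k) restriction of Equation 4).
    """
    prevalence = {}
    for s in senses[x]:
        score = 0.0
        for y in nearest_neighbors(x, thesaurus, k):
            if not senses.get(y):
                continue
            best = max(W(s, s2) for s2 in senses[y])
            norm = sum(max(W(t, s2) for s2 in senses[y]) for t in senses[x])
            if norm > 0.0:
                score += thesaurus[x][y] * best / norm
        prevalence[s] = score
    total = sum(prevalence.values())
    return {s: r / total for s, r in prevalence.items()} if total > 0 else {}
```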
Probabilistic Feature Generalization
In a typical classifier-based machine learning setting in NLP, feature vectors are constructed using indicator functions that encode categorical information, such as POS tags, word forms or lemmas. In this section we will use a running example where a and b are token positions of interest to a classifier, and for which feature vectors are created. If we let t stand for POS tag and l stand for lemma, a feature template for this pair of tokens might then be [t_a l_b]. Feature templates are instantiated as actual features in a vector space depending on the categorical values they can take on. One possible instantiation of the template [t_a l_b] would then be the feature [t_a=verb ∧ l_b=avocat], which indicates that a is a verb and b is the lemma avocat ("avocado" or "lawyer"), with the following indicator function:

f = 1    if t_a=verb ∧ l_b=avocat
f = 0    otherwise    (6)

To perform probabilistic feature generalization, we replace the indicator function, which represents a single original feature, with a collection of weighted functions representing a set of derived features. Suppose the French lemma avocat is in our source vocabulary and has multiple senses in Ω_s (s_1 for the "avocado" sense, s_2 for the "lawyer" sense, etc.), as well as a probability mass function p_av. We discard the old feature [t_a=verb ∧ l_b=avocat] and add, for each s_i, a derived feature of the form [t_a=verb ∧ x_b=s_i], where x represents a target-space generalized class, with the following weighted indicator function:

f^(i) = p_av(s_i)    if t_a=verb ∧ l_b=avocat
f^(i) = 0            otherwise    (7)

This process extends easily to generalizing multiple categorical variables. Consider the bilexical feature [l_a=manger ∧ l_b=avocat], which indicates that a is the lemma manger ("eat") and b is the lemma avocat. If both lemmas manger and avocat appear in our source vocabulary and have multiple senses in Ω_s, with probability mass functions p_ma and p_av, then for each pair i, j we derive a feature of the form [x_a=s_i ∧ x_b=s_j], with the following weighted indicator function:

f^(i,j) = p_ma(s_i) · p_av(s_j)    if l_a=manger ∧ l_b=avocat
f^(i,j) = 0                        otherwise    (8)

Dependency Parsing
Dependency syntax involves the representation of syntactic information for a sentence in the form of a directed graph, whose edges encode word-to-word relationships. An edge from a governor to a dependent indicates, roughly, that the presence of the dependent is syntactically legitimated by the governor. An important property of dependency syntax is that each word, except for the root of the sentence, has exactly one governor; dependency syntax is thus represented by trees. Figure 1 shows an example of an unlabeled dependency tree. 1

Figure 1: An unlabeled dependency tree for "Elle ouvrit la porte avec la clé" ("She opened the door with the key").

For languages like English or French, most sentences can be represented with a projective dependency tree: for any edge from word g to word d, g dominates any intervening word between g and d. Dependency trees are appealing syntactic representations, closer than constituency trees to the semantic representations useful for NLP applications. This is true even with the projectivity requirement, which occasionally creates syntax-semantics mismatches. Dependency trees have recently seen a surge of interest, particularly with the introduction of supervised models for dependency parsing using linear classifiers.
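Before turning to the parser, the feature expansion of Section 2.4 (Equations 7-8) can be made concrete. The sketch below is our illustration under an assumed sparse feature representation; names such as expand_feature are invented:

```python
# Our illustration of the feature expansion in Equations 7-8 (not the
# authors' implementation). Each lexical slot of a feature is expanded into
# the classes of its lemma's categorical distribution; weights multiply.
from itertools import product

def expand_feature(fixed, lexical, distributions):
    """fixed: (name, value) pairs kept as-is, e.g. [("t_a", "verb")];
    lexical: (name, lemma) pairs to generalize;
    distributions: lemma -> {class: probability}.
    Returns a dict mapping each derived feature to its weight."""
    slots = [[(name, cls, p) for cls, p in distributions[lemma].items()]
             for name, lemma in lexical]
    derived = {}
    for combo in product(*slots):
        key = tuple(fixed) + tuple((name, cls) for name, cls, _ in combo)
        weight = 1.0
        for _, _, p in combo:
            weight *= p  # product of class probabilities, as in Equation 8
        derived[key] = weight
    return derived

# The bilexical feature [l_a=manger, l_b=avocat] under two sense
# distributions yields one weighted feature per pair of senses:
dists = {"manger": {"eat.v.01": 1.0},
         "avocat": {"avocado.n.01": 0.7, "lawyer.n.01": 0.3}}
print(expand_feature([], [("x_a", "manger"), ("x_b", "avocat")], dists))
```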
Transition-Based Parsing
In this paper we focus on transition-based parsing, whose seminal works are that of Yamada and Matsumoto (2003) and Nivre (2003). The parsing process applies a sequence of incremental actions, which typically manipulate a buffer position in the sentence and a stack for built sub-structures. In the arc-eager approach introduced by Nivre et al. (2006), the possible actions are as follows, with s_0 being the token on top of the stack and n_0 being the next token in the buffer:

− SHIFT: Push n_0 onto the stack.
− REDUCE: Pop s_0 from the stack.
− RIGHT-ARC(r): Add an arc labeled r from s_0 to n_0; push n_0 onto the stack.
− LEFT-ARC(r): Add an arc labeled r from n_0 to s_0; pop s_0 from the stack.

The parser uses a greedy approach, where the action selected at each step is the best-scoring action according to a classifier, which is trained on a dependency treebank converted into sequences of actions. The major strength of this framework is its O(n) time complexity, which allows for very fast parsing when compared to more complex global optimization approaches.
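A schematic version of this greedy transition loop might look as follows. This is a simplified sketch of an arc-eager parser, not the authors' implementation, and classify stands in for the trained classifier:

```python
# Schematic arc-eager parsing loop (our simplification). `classify(config)`
# stands in for the trained classifier and is assumed to return candidate
# actions best-first, e.g. [("RIGHT-ARC", "obj"), ("SHIFT", None), ...].
def parse(tokens, classify):
    stack, buf, arcs = [], list(range(len(tokens))), []
    has_head = set()
    while buf:
        n0 = buf[0]
        for action, label in classify((stack, buf, arcs)):
            if action == "SHIFT":
                stack.append(buf.pop(0))
            elif action == "REDUCE" and stack and stack[-1] in has_head:
                stack.pop()
            elif action == "RIGHT-ARC" and stack:
                arcs.append((stack[-1], label, n0))    # arc from s0 to n0
                has_head.add(n0)
                stack.append(buf.pop(0))
            elif action == "LEFT-ARC" and stack and stack[-1] not in has_head:
                arcs.append((n0, label, stack.pop()))  # arc from n0 to s0
            else:
                continue  # action illegal in this configuration; try next one
            break         # apply one action per step (greedy)
    return arcs
```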
Experimental Setup
We now discuss the treebanks used for training and evaluation, the parser implementation and baseline settings, the construction of the probabilistic lexical resources, and the parameter tuning and evaluation settings.

Treebanks
The treebank we use for training and in-domain evaluation is the French Treebank (FTB) (Abeillé and Barrier, 2004), consisting of 12,351 sentences from the Le Monde newspaper, converted to projective 2 dependency trees (Candito et al., 2010a). For our experiments we use the usual split of 9,881 training, 1,235 development, and 1,235 test sentences. Moving beyond the journalistic domain, we use two additional treebank resources for out-of-domain parsing evaluations. These treebanks are part of the Sequoia corpus (Candito and Seddah, 2012) and consist of text from two non-journalistic domains annotated using the FTB annotation scheme: a medical domain treebank containing 574 development and 544 test sentences of public assessment reports of medicine from the European Medicines Agency (EMEA), originally collected in the OPUS project (Tiedemann, 2009), and a parliamentary domain treebank containing 561 test sentences from the Europarl 3 corpus.

Parser and Baseline Settings
We use our own Python implementation of the arc-eager algorithm for transition-based parsing, based on the arc-eager setting of MaltParser (Nivre et al., 2007), and we train using the standard FTB training set. Our baseline feature templates and general settings correspond to those obtained in a benchmarking of parsers for French (Candito et al., 2010b), under the setting which combined lemmas and morphological features. 4 Automatic POS-tagging is performed using MElt (Denis and Sagot, 2009), and lemmatization and morphological analysis are performed using the Lefff lexicon (Sagot, 2010). Table 1 lists our baseline parser's feature templates.

Table 1: Arc-eager parser feature templates.
Unigram: tn_0; ln_0; cn_0; wn_0; ts_0; ls_0; cs_0; ws_0; ds_0; tn_1; ln_1; tn_2; tn_3; ts_1; ts_2; tn_0l; ln_0l; dn_0l; ds_0l; ds_0r; ls_0h; {m_i n_0 : i ∈ |M|}; {m_i s_0 : i ∈ |M|}
Bigram: ts_0 tn_0; ts_0 ln_0; ls_0 ln_0; ln_0 tn_1; tn_0 tn_0l; tn_0 dn_0l; {m_i s_0 m_j n_0 : i, j ∈ |M|}; {tn_0 m_i n_0 : i ∈ |M|}; {ts_0 m_i s_0 : i ∈ |M|}
Trigram: ts_2 ts_1 ts_0; ts_1 ts_0 tn_0; ts_0 tn_0 tn_1; tn_0 tn_1 tn_2; tn_1 tn_2 tn_3; ts_0 ds_0l ds_0r
Legend: c = coarse POS tag, t = fine POS tag, w = inflected word form, l = lemma, d = dependency label, m_i = morphological feature from set M. For tokens, n_i = i-th token in the buffer, s_i = i-th token on the stack. The token subscripts l, r, and h denote the partially-constructed syntactic left-most dependent, right-most dependent, and head, respectively.

Lexical Resource Construction
We now describe the construction of our probabilistic lexical target space resources, whose prerequisites include the automatic parsing of a large corpus, the construction of a distributional thesaurus, the use of ASR on WordNet synsets, and the use of HAC clustering.

Automatically-Parsed Corpus
The text corpus we use consists of 125 million words from the L'Est Republicain newspaper 5, 125 million words of dispatches from the Agence France-Presse, and 225 million words from a French Wikipedia backup dump 6. The corpus is preprocessed using the Bonsai tool 7 and parsed using our baseline parser.

Distributional Thesaurus
We build separate distributional thesauri for nouns and for verbs, 8 using straightforward methods in distributional lexical semantics based primarily on work by Lin (1998) and Curran (2004). We use the FreDist tool (Henestroza Anguiano and Denis, 2011) for thesaurus creation. First, syntactic contexts for each lemma are extracted from the corpus. We use all syntactic dependencies in which the secondary token has an open-class POS tag, with labels included in the contexts and two-edge dependencies used in the case of prepositional-phrase attachment and coordination. Example contexts are shown in Figure 2. For verb lemmas we limit contexts to dependencies in which the verb is governor, and we add unlexicalized versions of contexts to account for subcategorization. For noun lemmas, we use all dependencies in which the noun participates, and all contexts are lexicalized. The vocabulary is limited to lemmas with at least 1,000 context occurrences, resulting in 8,171 nouns and 2,865 verbs.

Figure 2: Example dependency contexts for the verb lemma manger.
· One-Edge Context: -obj→ N|avocat
· One-Edge Context: -obj→ N (unlexicalized)
· Two-Edge Context: -mod→ P|avec -obj→ N|avocat
· Two-Edge Context: -mod→ P|avec -obj→ N (unlexicalized)
The one-edge contexts correspond to the phrase "manger un avocat" ("eat an avocado"), and the two-edge contexts correspond to the phrase "manger avec un avocat" ("eat with a lawyer").

Each pair of lemma x and context c is subsequently weighted by mutual informativeness using the point-wise mutual information metric, with probabilities estimated using frequency counts:

I(x, c) = log ( p(x, c) / (p(x) · p(c)) )    (9)
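As an illustration, the PMI weighting of Equation 9 can be computed directly from raw (lemma, context) co-occurrence counts. The sketch below is ours, assuming counts are collected in a Counter:

```python
# Sketch of the PMI weighting in Equation 9 from raw (lemma, context)
# co-occurrence counts (our illustration; the input format is assumed).
import math
from collections import Counter

def pmi_weights(pair_counts):
    """pair_counts: Counter over (lemma, context) pairs -> {(x, c): I(x, c)}."""
    total = sum(pair_counts.values())
    lemma_counts, ctx_counts = Counter(), Counter()
    for (x, c), n in pair_counts.items():
        lemma_counts[x] += n
        ctx_counts[c] += n
    return {(x, c): math.log((n / total) /
                             ((lemma_counts[x] / total) * (ctx_counts[c] / total)))
            for (x, c), n in pair_counts.items()}
```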
Resource Coverage The coverage of our lexical resources over the FTB and two out-of-domain evaluation sets, at the level of token occurrences of verbs and common nouns, is described in Table 2. We can see that the FTB training set vocabulary provides better coverage than the FREWN for both nouns and verbs, while the coverage of the thesauri (and derived clusters) is the highest overall. Tuning and Evaluation We evaluate four lexical target space configurations against the baseline of lemmatization, tuning parameters using ten-fold cross-validation on the FTB training set. The feature templates are the same as those in Table 1, with the difference that features involving lemmas are modified by the probabilistic feature generalization technique described in Section 2.4, using the appropriate categorical distributions. In all configurations, we exclude the French auxiliary verbsêtre and avoir from participation in lexical generalization, and we replace proper nouns with a special lemma 13 . Below we describe the tuned parameters for each configuration. − RC: Replacement with cluster in Ω c For clusters and the parameter z (cf. Section 4.3.4), we settled on relative cluster vocabulary size z=0.6 for nouns and z=0.7 for verbs. We also generalized lemmas not appearing in the distributional thesaurus into a single unknown class. − PKNL: Probabilistic k-nearest lemmas in Ω l For the parameters k and m (cf. Section 2.1), we settled on k=4 and m=0.5 for both nouns and verbs. We also use the unknown class for low-frequency lemmas, as in the RC configuration. − RS: Replacement with first-sense (k=1) in Ω s Since the FREWN has a lower-coverage vocabulary, we did not use an unknown class for out-of-vocabulary lemmas; instead, we mapped them to unique senses. In addition, we did not perform lexical generalization for verbs, due to low cross-validation performance. − PKPS: Probabilistic k-prevalent senses in Ω s For this setting we decided to not place any limit on k, due to the large variation in the number of senses for different lemmas. As in the RS configuration, we mapped out-ofvocabulary lemmas to unique senses and did not perform lexical generalization for verbs. Table 3 shows labeled attachment score (LAS) results for our baseline parser (Lemmas) and four lexical generalization configurations. For comparison, we also include results for a setting that only uses word forms (Forms), which was the baseline for previous work on French dependency parsing (Candito et al., 2010b). Punctuation tokens are not scored, and significance is calculated using Dan Bikel's randomized parsing evaluation comparator 14 , at significance level p=0.05. Results In-Domain Results Our in-domain evaluation yields slight improvements in LAS for some lexical generalization configurations, with PKNL performing the best. However, the improvements are not statistically significant. A potential explanation for this disappointing result is that the FTB training set vocabulary covers the FTB test set at high rates for both nouns (95.25%) and verbs (96.54%), meaning that lexical data sparseness is perhaps not a big problem for in-domain dependency parsing. While WordNet synsets could be expected to provide the added benefit of taking word sense into account, sense ambiguity is not really treated due to ASR not providing word sense disambiguation in context. Out-Of-Domain Results Our evaluation on the medical domain yields statistically significant improvements in LAS, particularly for the two probabilistic target space approaches. 
PKNL and PKPS improve parsing for both the EMEA dev and test sets, while RC improves parsing for only the EMEA test set and RS does not significantly improve parsing for either set. As in our in-domain evaluation, PKNL performs the best overall, though not significantly better than other lexical generalization settings. One explanation for the improvement in the medical domain is the substantial increase in coverage of nouns in EMEA afforded into a single class. by the distributional thesaurus (+26%) and FREWN (+16%) over the base coverage afforded by the FTB training set. Our evaluation on the parliamentary domain yields no improvement in LAS across the different lexical generalization configurations. Interestingly, Candito and Seddah (2012) note that while Europarl is rather different from FTB in its syntax, its vocabulary is surprisingly similar. From Table 2 we can see that the FTB training set vocabulary has about the same high level of coverage over Europarl (94.69% for nouns and 97.76% for verbs) as it does over the FTB evaluation sets (95.35% for nouns and 96.54% for verbs). Thus, we can use the same reasoning as in our in-domain evaluation to explain the lack of improvement for lexical generalization methods in the parliamentary domain. Lexical Feature Use During Parsing Since lexical generalization modifies the lexical feature space in different ways, we also provide an analysis of the extent to which each parsing model's lexical features are used during in-domain and out-ofdomain parsing. Table 4 describes, for each configuration, the number of lexical features stored in the parsing model along with the average lexical feature use (ALFU) of classification instances (each instance represents a parse transition) during training and parsing. 15 Lexical feature use naturally decreases when moving from the training set to the evaluation sets, due to holes in lexical coverage outside of a parsing model's training set. The single-mapping configurations (RC, RS) do not increase the number of lexical features in a classification instance, which explains the fact that their ALFU on the FTB training set (6.0) is the same as that of the baseline. However, the decrease in ALFU when parsing the evaluation sets is less severe for these configurations than for the baseline: when parsing EMEA Dev with the RC configuration, where we obtain a significant LAS improvement over the baseline, the reduction in ALFU is only 13% compared to 22% for the baseline parser. For the probabilistic generalization configurations, we also see decreases in ALFU when parsing the 15 We define the lexical feature use of a classification instance to be the number of lexical features in the parsing model that receive non-zero values in the instance's feature vector. evaluation sets, though their higher absolute ALFU may help explain the strong medical domain parsing performance for these configurations. Impact on Running Time Another factor to note when evaluating lexical generalization is the effect that it has on running time. Compared to the baseline, the single-mapping configurations (RC, RS) speed up feature extraction and prediction time, due to reduced dimensionality of the feature space. On the other hand, the probabilistic generalization configurations (PKNL, PKPS) slow down feature extraction and prediction time, due to an increased dimensionality of the feature space and a higher ALFU. Running time is therefore a factor that favors the single-mapping approach over our proposed probabilistic approach. 
Taking a larger view on our findings, we hypothesize that in order for lexical generalization to improve parsing, an approach needs to achieve two objectives: (i) generalize sufficiently to ensure that lemmas not appearing in the training set are nonetheless associated with lexical features in the learned parsing model; (ii) substantially increase lexical coverage over what the training set can provide. The first of these objectives seems to be fulfilled through our lexical generalization methods, as indicated in Table 4. The second objective, however, seems difficult to attain when parsing text indomain, or even out-of-domain if the domains have a high lexical overlap (as is the case for Europarl). Only for our parsing experiments in the medical domain do both objectives appear to be fulfilled, as evidenced by our LAS improvements when parsing EMEA with lexical generalization. Related Work We now discuss previous work concerning the use of lexical generalization for parsing, both in the classic in-domain setting and in the more recently popular out-of-domain setting. Results in Constituency-Based Parsing The use of word classes for parsing dates back to the first works on generative constituency-based parsing, whether using semantic classes obtained from hand-built resources or less-informed classes created automatically. Bikel (2000) tried incorporating WordNet-based word sense disambiguation into a parser, but failed to obtain an improvement. Xiong et al. (2005) generalized bilexical dependencies in a generative parsing model using Chinese semantic resources (CiLin and HowNet), obtaining improvements for Chinese parsing. More recently, Agirre et al. (2008) show that replacing words with Word-Net semantic classes improves English generative parsing. Lin et al. (2009) use the HowNet resource within the split-merge PCFG framework (Petrov et al., 2006) for Chinese parsing: they use the firstsense heuristic to append the most general hypernym to the POS of a token, obtaining a semanticallyinformed symbol refinement, and then guide further symbol splits using the HowNet hierarchy. Other work has used less-informed classes, notably unsupervised word clusters. Candito and Crabbé (2009) use Brown clusters to replace words in a generative PCFG-LA framework, obtaining substantial parsing improvements for French. Results in Dependency Parsing In dependency parsing, word classes are integrated as features in underlying linear models. In a seminal work, Koo et al. (2008) use Brown clusters as features in a graph-based parser, improving parsing for both English and Czech. However, attempts to use this technique for French have lead to no improvement when compared to the use of lemmatization and morphological analysis (Candito et al., 2010b). Sagae and Gordon (2009) augment a transitionbased English parser with clusters using unlexicalized syntactic distributional similarity: each word is represented as a vector of counts of emanating unlexicalized syntactic paths, with counts taken from a corpus of auto-parsed phrase-structure trees, and HAC clustering is performed using cosine similarity. For semantic word classes, (Agirre et al., 2011) integrate WordNet senses into a transition-based parser for English, reporting small but significant improvements in LAS (+0.26% with synsets and +0.36% with semantic files) on the full Penn Treebank with first-sense information from Semcor. 
We build on previous work by attempting to reproduce, for French, past improvements for indomain English dependency parsing with generalized lexical classes. Unfortunately, our results for French do not replicate the improvements for English using semantic sense information (Agirre et al., 2011) or word clustering (Sagae and Gordon, 2009). The primary difference between our paper and previous work, though, is our evaluation of a novel probabilistic approach for lexical generalization. Out-Of-Domain Parsing Concerning techniques for improving out-ofdomain parsing, a related approach has been to use self-training with auto-parsed out-of-domain data, as McClosky and Charniak (2008) do for English constituency parsing, though in that approach lexical generalization is not explicitly performed. Candito et al. (2011) use word clustering for domain adaptation of a PCFG-LA parser for French, deriving clusters from a corpus containing text from both the source and target domains, and they obtain parsing improvements in both domains. We are not aware of previous work on the use of lexical generalization for improving out-of-domain dependency parsing. Conclusion We have investigated the use of probabilistic lexical target spaces for reducing lexical data sparseness in a transition-based dependency parser for French. We built a distributional thesaurus from an automatically-parsed large text corpus, using it to generate word clusters and perform WordNet ASR. We tested a standard approach to lexical generalization for parsing that has been previously explored, where a word is mapped to a single cluster or synset. We also introduced a novel probabilistic lexical generalization approach, where a lemma is represented by a categorical distribution over the space of lemmas, clusters, or synsets. Probabilities for the lemma space were calculated using the distributional thesaurus, and probabilities for the Word-Net synset space were calculated using ASR sense prevalence scores, with probabilistic clusters left for future work. Our experiments with an arc-eager transitionbased dependency parser resulted in modest but significant improvements in LAS over the baseline when parsing out-of-domain medical text. However, we did not see statistically significant improvements over the baseline when parsing in-domain text or out-of-domain parliamentary text. An explanation for this result is that the French Treebank training set vocabulary has a very high lexical coverage over the evaluation sets in these domains, suggesting that lexical generalization does not provide much additional benefit. Comparing the standard single-mapping approach to the probabilistic generalization approach, we found a slightly (though not significantly) better performance for probabilistic generalization across different parsing configurations and evaluation sets. However, the probabilistic approach also has the downside of a slower running time. Based on the findings in this paper, our focus for future work on lexical generalization for dependency parsing is to continue improving parsing performance on out-of-domain text, specifically for those domains where lexical variation is high with respect to the training set. One possibility is to experiment with building a distributional thesaurus that uses text from both the source and target domains, similar to what Candito et al. (2011) did with Brown clustering, which may lead to a stronger bridging effect across domains for probabilistic lexical generalization methods. 
Table 2: Lexical occurrence coverage (%) of source vocabularies over evaluation sets. FTB Eval contains both the FTB development and test sets, while EMEA Eval contains both the EMEA development and test sets. Proper nouns are excluded from the analysis.

        Source      FTB Eval   EMEA Eval   Europarl
Nouns   FTB train   95.35      62.87       94.69
        Thesaurus   96.25      79.00       97.83
        FREWN       80.51      73.09       87.06
Verbs   FTB train   96.54      94.56       97.76
        Thesaurus   98.33      97.82       99.54
        FREWN       88.32      91.48       91.98

Table 3: Labeled attachment score (LAS) on in-domain (FTB) and out-of-domain (EMEA, Europarl) evaluation sets for the baseline (Lemmas) and four lexical generalization configurations (RC, PKNL, RS, PKPS). Significant improvements over the baseline are starred. For comparison, we also include a simpler setting (Forms), which does not use lemmas or morphological features.

Table 4: Parsing model lexical features (rounded to nearest thousand) and average lexical feature use in classification instances across different training and evaluation sets, for the baseline (Lemmas) and four lexical generalization configurations (PKNL, RC, PKPS, and RS).

Footnotes
1. Our experiments involve labeled parsing, with edges additionally labeled with the surface grammatical function that the dependent bears with respect to its governor.
2. The projectivity constraint is linguistically valid for most French parses: the authors report < 2% non-projective edges in a hand-corrected subset of the converted FTB.
3. http://www.statmt.org/europarl/
4. That work tested the use of Brown clusters, but obtained no improvement compared to a setting without clusters. Thus, we do not evaluate Brown clustering in this paper.
5. http://www.cnrtl.fr/corpus/estrepublicain/
6. http://dumps.wikimedia.org/
7. http://alpage.inria.fr/statgram/frdep/fr_stat_dep_parsing.html
8. We additionally considered adjectives and adverbs, but our initial tests yielded no parsing improvements.
9. http://www.illc.uva.nl/EuroWordNet/
10. http://nlp.lsi.upc.edu/tools/download-map.php
11. http://www.natcorp.ox.ac.uk/
12. http://glaros.dtc.umn.edu/gkhome/cluto/cluto/download
13. Proper nouns tend to have sparse counts, but for computational reasons we did not include them in our distributional thesaurus construction. We thus chose to simply generalize them into a single class.
14. http://www.cis.upenn.edu/~dbikel/software.html
15. We define the lexical feature use of a classification instance to be the number of lexical features in the parsing model that receive non-zero values in the instance's feature vector.

Acknowledgments
This work was funded in part by the ANR project Sequoia ANR-08-EMER-013.

References
Abeillé, A. and Barrier, N. (2004). Enriching a French treebank. In Proceedings of the 4th International Conference on Language Resources and Evaluation, Lisbon, Portugal, May.
Agirre, E., Baldwin, T., and Martinez, D. (2008). Improving parsing and PP attachment performance with sense information. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, pages 317-325, Columbus, Ohio, June.
Agirre, E., Bengoetxea, K., Gojenola, K., and Nivre, J. (2011). Improving dependency parsing with semantic classes. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 699-703, Portland, Oregon, June.
Bikel, D. M. (2000). A statistical model for parsing and word-sense disambiguation. In Proceedings of EMNLP/VLC-2000, pages 155-163, Hong Kong, October.
Bird, S., Loper, E., and Klein, E. (2009). Natural Language Processing with Python. O'Reilly Media Inc.
Brown, P. F., Desouza, P. V., Mercer, R. L., Pietra, V. J. D., and Lai, J. C. (1992). Class-based n-gram models of natural language. Computational Linguistics, 18(4):467-479.
Bunescu, R. C. (2008). Learning with probabilistic features for improved pipeline models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 670-679, Honolulu, Hawaii, October.
Candito, M. and Crabbé, B. (2009). Improving generative statistical parsing with semi-supervised word clustering. In Proceedings of the 11th International Conference on Parsing Technologies, pages 138-141, Paris, France, October.
Candito, M. and Seddah, D. (2012). Le corpus Sequoia : annotation syntaxique et exploitation pour l'adaptation d'analyseur par pont lexical. In Actes de la 19ème conférence sur le traitement automatique des langues naturelles, Grenoble, France, June. To appear.
Candito, M., Crabbé, B., and Denis, P. (2010a). Statistical French dependency parsing: Treebank conversion and first results. In Proceedings of the 7th International Conference on Language Resources and Evaluation, Valetta, Malta, May.
Candito, M., Nivre, J., Denis, P., and Henestroza Anguiano, E. (2010b). Benchmarking of statistical dependency parsers for French. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 108-116, Beijing, China, August.
Candito, M., Henestroza Anguiano, E., Seddah, D., et al. (2011). A word clustering approach to domain adaptation: Effective parsing of biomedical texts. In Proceedings of the 12th International Conference on Parsing Technologies, Dublin, Ireland, October.
Curran, J. R. (2004). From distributional to semantic similarity. Ph.D. thesis, University of Edinburgh.
Denis, P. and Sagot, B. (2009). Coupling an annotated corpus and a morphosyntactic lexicon for state-of-the-art POS tagging with less human effort. In Proceedings of the 23rd Pacific Asia Conference on Language, Information and Computation, Hong Kong, China, December.
Fellbaum, C., editor (1998). WordNet: An Electronic Lexical Database. MIT Press, Cambridge, MA.
Henestroza Anguiano, E. and Denis, P. (2011). FreDist: Automatic construction of distributional thesauri for French. In Actes de la 18ème conférence sur le traitement automatique des langues naturelles, pages 119-124, Montpellier, France, June.
Jiang, J. J. and Conrath, D. W. (1997). Semantic similarity based on corpus statistics and lexical taxonomy. In International Conference on Research in Computational Linguistics, Taiwan.
Koo, T., Carreras, X., and Collins, M. (2008). Simple semi-supervised dependency parsing. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, pages 595-603, Columbus, Ohio, June.
Lin, X., Fan, Y., Zhang, M., Wu, X., and Chi, H. (2009). Refining grammars for parsing with hierarchical semantic knowledge. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1298-1307, Singapore, August.
Lin, D. (1998). Automatic retrieval and clustering of similar words. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 2, pages 768-774, Montreal, Quebec, August.
McCarthy, D., Koeling, R., Weeds, J., and Carroll, J. (2004). Finding predominant word senses in untagged text. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics, pages 279-286, Barcelona, Spain, July.
McClosky, D. and Charniak, E. (2008). Self-training for biomedical parsing. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, pages 101-104, Columbus, Ohio, June.
Nivre, J., Hall, J., Nilsson, J., Eryiğit, G., and Marinov, S. (2006). Labeled pseudo-projective dependency parsing with support vector machines. In Proceedings of the Tenth Conference on Computational Natural Language Learning, pages 221-225, New York City, NY, June.
Nivre, J., Hall, J., Nilsson, J., Chanev, A., Eryigit, G., Kübler, S., Marinov, S., and Marsi, E. (2007). MaltParser: A language-independent system for data-driven dependency parsing. Natural Language Engineering, 13(02):95-135.
Nivre, J. (2003). An efficient algorithm for projective dependency parsing. In Proceedings of the 8th International Workshop on Parsing Technologies, pages 149-160, Nancy, France, April.
Petrov, S., Barrett, L., Thibaux, R., and Klein, D. (2006). Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, pages 433-440, Sydney, Australia, July.
Sagae, K. and Gordon, A. (2009). Clustering words by syntactic similarity improves dependency parsing of predicate-argument structures. In Proceedings of the 11th International Conference on Parsing Technologies, pages 192-201, Paris, France, October.
Sagot, B. (2010). The Lefff, a freely available, accurate and large-coverage lexicon for French. In Proceedings of the 7th International Conference on Language Resources and Evaluation, Valetta, Malta, May.
Tiedemann, J. (2009). News from OPUS - A collection of multilingual parallel corpora with tools and interfaces. In Recent Advances in Natural Language Processing, volume 5, pages 237-248. John Benjamins, Amsterdam.
Xiong, D., Li, S., Liu, Q., Lin, S., and Qian, Y. (2005). Parsing the Penn Chinese Treebank with semantic knowledge. In Proceedings of the International Joint Conference on Natural Language Processing, pages 70-81, Jeju Island, Korea, October.
Yamada, H. and Matsumoto, Y. (2003). Statistical dependency analysis with support vector machines. In Proceedings of the 8th International Workshop on Parsing Technologies, pages 195-206, Nancy, France, April.
11,034,156
FBK-HLT: A New Framework for Semantic Textual Similarity
This paper describes our system, FBK-HLT, and its performance in the SemEval 2015 Task #2 "Semantic Textual Similarity", English subtask. We submitted three runs built on different hypotheses about combining typical features (lexical similarity, string similarity, word n-grams, etc.) with syntactic structure features, resulting in different feature sets. The results on both the STS 2014 and 2015 datasets support our hypothesis that an STS system should take syntactic information into consideration. We outperform the best system on the STS 2014 datasets and achieve results very competitive with the best system on the STS 2015 datasets.
[ 12549805, 16695859, 15215411, 7164502, 6964767 ]
FBK-HLT: A New Framework for Semantic Textual Similarity. Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), Denver, Colorado, June 4-5, 2015. Ngoc Phuoc An Vo (Fondazione Bruno Kessler / University of Trento, Trento, Italy), Simone Magnolini (magnolini@fbk.eu; University of Brescia / Fondazione Bruno Kessler, Trento, Italy), Octavian Popescu (o.popescu@us.ibm.com; IBM Research T.J. Watson, Yorktown, US). This paper describes our system, FBK-HLT, and its performance in the SemEval 2015 Task #2 "Semantic Textual Similarity", English subtask. We submitted three runs built on different hypotheses about combining typical features (lexical similarity, string similarity, word n-grams, etc.) with syntactic structure features, resulting in different feature sets. The results on both the STS 2014 and 2015 datasets support our hypothesis that an STS system should take syntactic information into consideration. We outperform the best system on the STS 2014 datasets and achieve results very competitive with the best system on the STS 2015 datasets. Introduction Semantics-related tasks have been a notable trend in the Natural Language Processing (NLP) community. In particular, the Semantic Textual Similarity (STS) task has captured huge attention in the NLP community despite having been introduced only recently, at SemEval 2012 (Agirre et al., 2012). Basically, the task requires building systems that can compute the similarity degree between two given sentences. The similarity degree is scaled as a real score from 0 (no relevance) to 5 (semantic equivalence). The evaluation is done by computing the correlation between human judgment scores and system scores by means of the Pearson correlation method. At SemEval 2015, Task #2 "Semantic Textual Similarity (STS)", the English STS subtask (Agirre et al., 2015) evaluates participating systems on five test datasets: image description (image), news headlines (headlines), student answers paired with reference answers (answers-students), answers to questions posted in stack exchange forums (answers-forum), and English discussion forum data exhibiting committed belief (belief). Inspired by the UKP system (Bär et al., 2012), which was the best system in STS 2012, we build a supervised system on top of it. Our system adopts some word and string similarity features from UKP, such as string similarity, character/word n-grams, and pairwise similarity; however, we also add other distinct features, such as syntactic structure information, word alignment and semantic word similarity. As a result, our team, FBK-HLT, submitted three runs and achieved very competitive results among the top-tier systems of the task. The remainder of this paper is organized as follows: Section 2 presents the System Description, Section 3 describes our Experiment Settings, and Section 4 reports the Evaluations of our system. Finally, Section 5 gives Conclusions and Future Work. System Description We describe our system, which is built from different linguistic features. We construct a pipeline system, in which each component produces different features independently and, at the end, all features are consolidated by a machine learning tool, which learns a regression model for predicting the similarity scores of given sentence pairs.
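To make the pipeline concrete, here is a minimal sketch of one feature family that such a component could produce: word n-gram overlap between the two sentences, scored with the Jaccard coefficient and the containment measure (both part of the word and string similarity features described below). The function names and the choice of n-gram orders are ours, for illustration only; this is not the FBK-HLT code.

```python
# Minimal sketch of a word n-gram overlap feature component
# (illustrative names; not the FBK-HLT implementation).

def word_ngrams(tokens, n):
    """Set of word n-grams of a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def jaccard(a, b):
    """|A intersect B| / |A union B|; 0.0 when both sets are empty."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def containment(a, b):
    """|A intersect B| / |A|: how much of the first sentence is covered."""
    return len(a & b) / len(a) if a else 0.0

def ngram_features(sent1, sent2, orders=(1, 2, 3, 4)):
    """One Jaccard and one containment score per n-gram order."""
    t1, t2 = sent1.lower().split(), sent2.lower().split()
    feats = {}
    for n in orders:
        g1, g2 = word_ngrams(t1, n), word_ngrams(t2, n)
        feats["jaccard_%d" % n] = jaccard(g1, g2)
        feats["containment_%d" % n] = containment(g1, g2)
    return feats

print(ngram_features("a man is playing a guitar", "a man plays a guitar"))
```

Each such component contributes a handful of numeric features per sentence pair; the regression model at the end of the pipeline consumes them all.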
On top of this, the system is expandable and scalable, so that more useful features can be adopted to improve accuracy. The System Overview in Figure 1 shows the logic and design process in which the different components connect and work together. Data Preprocessing The input data undergoes data preprocessing, in which we use Tree Tagger (Schmid, 1994) to perform tokenization, lemmatization, and Part-of-Speech (POS) tagging. In addition, we use the Stanford Parser (Klein and Manning, 2003) to obtain the dependency parses of given sentences. Word and String Similarity Features We adopt some word and string similarity features from the UKP system (Bär et al., 2012), which are briefly described as follows: • String Similarity: we use the Longest Common Substring (Gusfield, 1997), Longest Common Subsequence (Allison and Dix, 1986) and Greedy String Tiling (Wise, 1996) measures. • Character/Word n-grams: we compare character n-grams (Barrón-Cedeno et al., 2010) with n=2, 3, ..., 15. In addition, we compare word n-grams using the Jaccard coefficient as done by Lyon (Lyon et al., 2001) and the containment measure (Broder, 1997), with n=1, 2, 3, and 4. • Semantic Word Similarity: we use the pairwise similarity algorithm by Resnik (Resnik, 1995) on WordNet (Fellbaum, 1998), and the vector space model Explicit Semantic Analysis (ESA) (Gabrilovich and Markovitch, 2007), which is constructed from two lexical semantic resources, Wikipedia 1 and Wiktionary 2. Syntactic Structure Features We exploit syntactic structure information by means of three different toolkits: Syntactic Tree Kernel, Distributed Tree Kernel and Syntactic Generalization. We describe how each toolkit is used to learn and extract syntactic structure information from texts for use in our STS system. Syntactic Tree Kernel The Syntactic Tree Kernel (Moschitti, 2006) is a tree kernel approach to learning syntactic structure from syntactic parsing information; in particular, the Partial Tree (PT) kernel is proposed as a new convolution kernel to fully exploit dependency trees. We use the open-source toolkit "Tree Kernel in SVM-Light" 3 to learn this syntactic information. Having assumed that paraphrased pairs share the same content and similar syntactic structures, we chose the Microsoft Research Paraphrasing Corpus (Dolan et al., 2005), which contains 5,800 sentence pairs extracted from news sources on the web, along with human annotations indicating whether each pair captures a paraphrase/semantic-equivalence relationship. This corpus is split into a Training set (4,076 pairs) and a Testing set (1,725 pairs). We use the Stanford Parser (Klein and Manning, 2003) to obtain the dependency parses of sentence pairs. Then we use the machine learning tool svm-light-tk 1.2, which uses the Tree Kernel approach to learn the similarity of syntactic structure, to build a binary classification model on the Train dataset. The output predictions are probability confidence scores in [-1,1], corresponding to the probability of the label being positive. Following the assumption above, we label paraphrased pairs as 1, and -1 otherwise. We obtain an Accuracy of 69.16% on the Test set. Distributed Tree Kernel The Distributed Tree Kernel (DTK) (Zanzotto and Dell'Arciprete, 2012) is a tree kernel method using a linear-complexity algorithm to compute vectors for trees by embedding feature spaces of tree fragments in low-dimensional spaces.
Then a recursive algorithm with linear complexity computes reduced vectors for trees. The dot product among reduced vectors is used to approximate the original tree kernel when a vector composition function with specific ideal properties is used. First, we use the Stanford Parser (PCFG Parser) trained on the Penn TreeBank (Klein and Manning, 2003) to obtain the dependency parses of sentences, and feed them to the software "distributed-tree-kernels" 4 to produce the distributed trees. Then, we compute the Cosine similarity between the vectors of distributed trees of each sentence pair. This cosine similarity score is converted to the scale of STS and SR for evaluation. Syntactic Generalization Given a pair of parse trees, Syntactic Generalization (SG) (Galitsky, 2013) finds a set of maximal common subtrees. The toolkit "relevance-based-on-parse-trees" 5 is an open-source project which evaluates text relevance using a syntactic parse-tree-based similarity measure. Given a pair of parse trees, it measures the similarity between two sentences by finding a set of maximal common subtrees, using a representation of constituency parse trees via chunking. Each type of phrase (NP, VP, PRP etc.) is aligned and subject to generalization. It uses the OpenNLP system to derive dependency trees for generalization (chunker and parser). 6 This tool is offered as a text-relevance tool which can be used as a black box; no understanding of computational linguistics or machine learning is required. We apply the tool to the STS datasets to compute the similarity of the syntactic structure of sentence pairs. 4 https://code.google.com/p/distributed-tree-kernels 5 https://code.google.com/p/relevance-based-on-parse-trees Further Features We also deploy other features which may help in identifying the semantic similarity degree between two given sentences, such as the word alignment of a machine translation evaluation metric and the vector space model Weighted Matrix Factorization (WMF) for pairwise similarity. Machine Translation Evaluation Metric - METEOR METEOR (Metric for Evaluation of Translation with Explicit ORdering) (Banerjee and Lavie, 2005) is an automatic metric for machine translation evaluation, which consists of two major components: a flexible monolingual word aligner and a scorer. For machine translation evaluation, hypothesis sentences are aligned to reference sentences. Alignments are then scored to produce sentence- and corpus-level scores. We use this word alignment feature to learn the similarity between words and phrases in two given texts even when they appear in different orders. Weighted Matrix Factorization (WMF) WMF (Guo and Diab, 2012) is a dimension reduction model to extract nuanced and robust latent vectors for short texts/sentences. To overcome the sparsity problem in short texts/sentences (e.g. 10 words on average), the missing words, a feature that LSA/LDA typically overlooks, are explicitly modeled. We use the pipeline to compute the similarity score between texts. Experiment Settings We generate and select 25 optimal features, ranging from the lexical level to the string level and the syntactic level. We deploy the machine learning toolkit WEKA (Hall et al., 2009) to learn a regression model (GaussianProcesses) for predicting the similarity scores.
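A hedged sketch of this final consolidation step follows: each pipeline component contributes one or more numeric features per sentence pair, and a regression model maps the feature vector to a similarity score on the 0-5 STS scale. The paper uses WEKA's GaussianProcesses; scikit-learn's GaussianProcessRegressor stands in here, and the feature values and gold scores are invented toy data.

```python
# Sketch of the consolidation step: features in, similarity score out.
# scikit-learn stands in for WEKA; all data below is invented.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Rows: one sentence pair each; columns: feature values (e.g. n-gram
# overlap, tree-kernel score, METEOR, WMF similarity, ...).
X_train = np.array([
    [0.90, 0.85, 0.80, 0.88],   # near-paraphrase pair
    [0.40, 0.35, 0.30, 0.45],   # related pair
    [0.05, 0.10, 0.02, 0.08],   # unrelated pair
])
y_train = np.array([4.8, 2.5, 0.2])  # gold similarity scores (0..5)

model = GaussianProcessRegressor().fit(X_train, y_train)

X_test = np.array([[0.70, 0.60, 0.65, 0.72]])
pred = float(model.predict(X_test)[0])
print(round(min(max(pred, 0.0), 5.0), 2))  # clamp to the STS scale
```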
We build three models based on three sets of features to verify our hypothesis that computing the semantic similarity degree is not only about lexical and string similarity, but should also take into consideration the deeper level of syntactic structure, where more semantic information is embedded. In the system development process, we train our system on the given datasets of STS 2012 and 2013 and use the STS 2014 datasets for evaluating the system. In Table 1, we also examine the contribution of different features to the overall accuracy of the system, and show that syntactic structure information has a real impact on the performance of our system. Our model using all the features described above outperforms the best system, DLS@CU, in the STS 2014 evaluation. We submitted three runs with different sets of features, as below: - Run1: All features described in Section 2 are used. - Run2: The feature obtained by the Distributed Tree Kernel approach is excluded, as it sometimes returns negative correlation. - Run3: No syntactic features are included. Evaluations In Table 2 we report the performance of our three runs on the STS 2015 test datasets. Among the three submitted runs, Run1 has the best score, which confirms that exploiting syntactic structure information benefits the overall performance of our system. Besides, although the features extracted by the Distributed Tree Kernel approach occasionally return a negative result, they still contribute a small positive portion to the final result, as shown by Run2. In contrast, Run3, which excludes all syntactic structure features, eventually scores 4% lower than the other two runs. Overall, our system achieves a very competitive result compared to the best ranked system, DLS@CU-S1. Specifically, the difference between our Run1 and DLS@CU-S1 on each test dataset of STS 2015 varies slightly, by 1%-2%. However, this difference is not statistically significant, as each system may perform slightly differently on different evaluation datasets. Generally, taking into account the results of our system and DLS@CU on both the STS 2014 and 2015 evaluation datasets, we can consider the two almost equivalent in performance. Conclusions and Future Work In this paper, we describe the pipeline system FBK-HLT participating in the SemEval 2015, Task #2 "Semantic Textual Similarity", English subtask. We present a supervised system which considers multiple linguistic features from low to high language levels: lexical, string and syntactic. We also argue that looking into the syntactic structure of text will benefit the capability of predicting semantic similarity. Among our three submitted runs, our performance is well above the baseline and very competitive with the best system; we are ranked in the top tier (12th, 13th, and 23rd) out of 73 systems in total. For the time being, we can see that the contribution of syntactic features to the overall performance is still limited (about 4%). However, this does not deny the significance of syntactic information in semantics-related tasks, especially this STS task. Hence, we expect to study how to exploit more useful features from syntactic information, which, intuitively, is supposed to play a significant role in semantic reasoning. Figure 1: System Overview. Table 2: Evaluation results on STS 2015 datasets.
1 http://en.wikipedia.org/wiki/Main_Page 2 http://en.wiktionary.org 3 http://disi.unitn.it/moschitti/Tree-Kernel.htm 6 https://opennlp.apache.org Eneko Agirre, Mona Diab, Daniel Cer, and Aitor Gonzalez-Agirre. 2012. SemEval-2012 task 6: A pilot on semantic textual similarity. In Proceedings of the First Joint Conference on Lexical and Computational Semantics - Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 385-393. Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Inigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, German Rigau, Larraitz Uria, and Janyce Wiebe. 2015. SemEval-2015 Task 2: Semantic Textual Similarity, English, Spanish and Pilot on Interpretability. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), Denver, CO, June. Lloyd Allison and Trevor I. Dix. 1986. A bit-string longest-common-subsequence algorithm. Information Processing Letters, 23(5):305-310. Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72. Daniel Bär, Chris Biemann, Iryna Gurevych, and Torsten Zesch. 2012. UKP: Computing semantic textual similarity by combining multiple content similarity measures. In Proceedings of the First Joint Conference on Lexical and Computational Semantics - Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 435-440.
Alberto Barrón-Cedeno, Paolo Rosso, Eneko Agirre, and Gorka Labaka. 2010. Plagiarism detection across distant language pairs. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 37-45. Andrei Z. Broder. 1997. On the resemblance and containment of documents. In Compression and Complexity of Sequences 1997, pages 21-29. IEEE. Bill Dolan, Chris Brockett, and Chris Quirk. 2005. Microsoft research paraphrase corpus. Retrieved March 29, 2008. Christiane Fellbaum. 1998. WordNet. Wiley Online Library. Evgeniy Gabrilovich and Shaul Markovitch. 2007. Computing semantic relatedness using Wikipedia-based explicit semantic analysis. In IJCAI, volume 7, pages 1606-1611. Boris Galitsky. 2013. Machine learning of syntactic parse trees for search and classification of text. Engineering Applications of Artificial Intelligence, 26(3):1072-1091. Weiwei Guo and Mona Diab. 2012. Modeling sentences in the latent space. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1, pages 864-872. Dan Gusfield. 1997. Algorithms on strings, trees and sequences: computer science and computational biology. Cambridge University Press. Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The WEKA data mining software: an update. ACM SIGKDD Explorations Newsletter, 11(1):10-18. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1, pages 423-430. Caroline Lyon, James Malcolm, and Bob Dickerson. 2001. Detecting short passages of similar text in large document collections.
In Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing, pages 118-125. Alessandro Moschitti. 2006. Efficient convolution kernels for dependency and constituent syntactic trees. In Machine Learning: ECML 2006, pages 318-329. Philip Resnik. 1995. Using information content to evaluate semantic similarity in a taxonomy. In Proceedings of the 14th International Joint Conference on Artificial Intelligence - Volume 1, IJCAI'95, pages 448-453. Helmut Schmid. 1994. Probabilistic part-of-speech tagging using decision trees. In Proceedings of the International Conference on New Methods in Language Processing, volume 12, pages 44-49, Manchester, UK. Michael J. Wise. 1996. YAP3: Improved detection of similarities in computer program and other texts. In ACM SIGCSE Bulletin, volume 28, pages 130-134. ACM. Fabio Massimo Zanzotto and Lorenzo Dell'Arciprete. 2012. Distributed tree kernels. In Proceedings of the 29th International Conference on Machine Learning.
8,024,419
A Japanese Learning Support System Matching Individual Abilities
With the growing popularity of Japanese learning, a large number of learning support tools and systems have been developed to help Japanese learners in various situations. We have particularly noticed the increasing necessity of systems developed as web applications, most of which are free and easily accessed, and hence regarded as the most significant resources for Japanese learners. However, none of the existing studies has considered the difference in language ability among Japanese learners. The learning contents and instructional methods in these systems usually remain unchanged at all times, without taking account of individual variations, while in some cases they are supposed to vary with the real language ability of each Japanese learner. In this paper, we develop a web application to provide appropriate suggestions and different learning materials for each Japanese learner based on their individual Japanese abilities. Specifically, we divide language ability into several elements, propose different methods to quantify each element, and generate feedback or training questions for the Japanese learners. Experimental results have partially shown the effectiveness of our methods.
[ 17041842 ]
A Japanese Learning Support System Matching Individual Abilities. Takahiro Ohno, Zyunitiro Edani, Ayato Inoue, and Dongli Han (han@chs.nihon-u.ac.jp); Graduate School of Integrated Basic Sciences and Department of Information Science, College of Humanities and Sciences, Nihon University, Tokyo, Japan. With the growing popularity of Japanese learning, a large number of learning support tools and systems have been developed to help Japanese learners in various situations. We have particularly noticed the increasing necessity of systems developed as web applications, most of which are free and easily accessed, and hence regarded as the most significant resources for Japanese learners. However, none of the existing studies has considered the difference in language ability among Japanese learners. The learning contents and instructional methods in these systems usually remain unchanged at all times, without taking account of individual variations, while in some cases they are supposed to vary with the real language ability of each Japanese learner. In this paper, we develop a web application to provide appropriate suggestions and different learning materials for each Japanese learner based on their individual Japanese abilities. Specifically, we divide language ability into several elements, propose different methods to quantify each element, and generate feedback or training questions for the Japanese learners. Experimental results have partially shown the effectiveness of our methods. Introduction More and more people are learning Japanese as a second or foreign language. According to a report issued by the Japan Foundation, Japanese learners have increased 9.1% all over the world since 2009 1. With the growing popularity of Japanese learning, a large number of learning support tools or systems have been developed to help Japanese learners in various situations (Liu et al., 1999; Fujita, 2001; Suwa, 2006; Zhang, 2006; Gao, 2005; Kakegawa, 2000; Nakano and Tomiura, 2011). We have particularly noticed the increasing necessity of systems developed as web applications, most of which are free and easily accessed, and hence regarded as the most significant resources for Japanese learners. Here are some examples. Asunaro 2 presents the dependency relations between phrases in a given Japanese sentence, Obi 3 classifies the difficulty of a given text into 13 levels, Reading Tutor 4 analyzes a given text and shows the difficulty level of each morpheme in it, and Chantokun 5 discovers the misuse of a case particle in a user's input and shows the potential alternatives as well. However, none of the existing studies has considered the difference in language ability among Japanese learners. The learning contents and instructional methods in these systems usually remain unchanged at all times, without taking account of individual variations, while in some cases they are supposed to vary with the real language ability of each Japanese learner. Capturing the personal features of a learner's language ability and providing her with the most appropriate learning contents in the most proper way will definitely make the learning procedure more efficient.
Our final goal in this work is to develop a web application that provides appropriate suggestions and different learning materials for each Japanese learner based on their individual Japanese abilities. Specifically, we divide language ability into several elements, propose different methods to quantify each element, and generate feedback or training questions for the Japanese learners. Here in this paper, we describe the basic idea in Section 2, and describe a few modules we have developed as the first step of the whole system in Sections 3, 4, and 5. Finally, we end this paper with a conclusion in Section 6. The Basic Idea The general framework is composed of two main parts: the interactive interface and the background processing platform. When the learner inputs some words, the system will carry out two kinds of analysis in turn: morphological analysis and syntactic parsing. Here, we use the free Japanese analyzing tools, Cabocha 6 and Knp 7, to carry out the analytical tasks. Then the system tries to figure out the linguistic ability of the current user. The linguistic ability structure is divided into several elements: Kanji characters, vocabulary, case particles, sentence patterns, inflection, and honorific expressions. So far, we have developed two modules, for case particles and sentence patterns respectively. Finally, based on the analytical results, the system generates different feedback or practice questions for each Japanese learner, trying to provide her with the most appropriate learning contents in the most proper way, which might make the learning procedure more efficient. Usage of Case Particles We have mentioned Chantokun, a previous web application, in Section 1, where wrong usages of case particles could be discovered and corrected. Case particles are the most important components in Japanese sentences. It is impossible to generate a grammatically correct sentence without using any case particles. We in this work consider case particles as one of the most critical factors for analyzing the linguistic ability of Japanese learners, and propose a method to conduct a profound analysis of their usage of case particles. Here, similar to Chantokun, we also use 3-gram data from the Google N-gram Corpus 8 to discover and modify wrong usages of case particles. The 3-gram corpus is extracted mainly from web pages containing a large number of three-continuous-word fragments in the form of "W1 CP W2". Here, CP indicates a case particle, and W1 and W2 represent the two words surrounding it. However, the difference between our work and Chantokun lies in that we incorporate dependency relation analysis into the error checking task, as shown in Figure 1. Besides the error check and correction, we have developed another function involving case particles. From the correct use cases of case particles in the user's input texts, we try to estimate the user's level of dealing with case particles. Here we define two kinds of measurements, GUR (General Understanding Rate) and GER (General Error Rate), as shown below: $GUR = \frac{\sum_i x_i}{G_{max} \times M}$ and $GER = \frac{\sum_i y_i}{G_{max} \times N}$. Here, $x_i$ and $y_i$ stand for the occurrence frequencies in the 3-gram corpus of the correctly used 3-grams and the modified 3-grams, respectively. $M$ is the number of correctly used case particles in the user's input texts, and $N$ represents the number of case particles that have been modified. $G_{max}$ is the highest occurrence frequency in the 3-gram corpus.
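Under our reading of the two formulas above (the average corpus frequency of the user's correct, respectively corrected, 3-grams, normalized by the highest frequency in the corpus), the computation could be sketched as follows; all names and numbers are illustrative, not the system's.

```python
# Sketch of GUR/GER under our reading of the formulas above.
# correct_freqs: corpus frequencies x_i of correctly used 3-grams (M of them).
# corrected_freqs: corpus frequencies y_i of corrected 3-grams (N of them).

def gur(correct_freqs, g_max):
    """General Understanding Rate: sum(x_i) / (g_max * M)."""
    m = len(correct_freqs)
    return sum(correct_freqs) / (g_max * m) if m else 0.0

def ger(corrected_freqs, g_max):
    """General Error Rate: sum(y_i) / (g_max * N)."""
    n = len(corrected_freqs)
    return sum(corrected_freqs) / (g_max * n) if n else 0.0

g_max = 120_000                       # highest 3-gram frequency (invented)
print(gur([80_000, 55_000], g_max))   # two correctly used particles
print(ger([4_000], g_max))            # one corrected particle
```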
We try to reflect the user's understanding ability of the frequently used case particles, and the tendency to make mistakes, with these formulas. In the experiments on wrong-usage detection of case particles with 100 sentences extracted from Lang8 9, we get the results shown in Table 1 under different experimental arguments. Here, "Abs" indicates the absolute threshold value. For example, "Abs(0)" means the case where a wrongly used case particle is detected without considering the difference between the wrong usage and the most frequent usage in the corpus. On the other hand, "Rel" indicates the cases where a specific magnitude relationship between the wrong usage and the most frequent usage has to be taken into consideration. Generally, "Rel(10)" is the most effective one among all the argument sets. Usage of Sentence Patterns A sentence pattern indicates some specific usage of certain words to express some particular context or meaning (Han and Song, 2011). Here is a very simple example: "~あげく", meaning "in the end". The signal "~" acts as a placeholder with certain strict conditions. In this sentence pattern, only two kinds of expressions can be used to replace "~" in front of "あげく": past tenses of verbs, or a particular formal noun in Japanese, "の". Whether a Japanese learner is able to use a sentence pattern correctly is considered another significant indicator of her real Japanese linguistic ability. To the best of our knowledge, Reading Tutor is the only web system which has made contributions to learning sentence patterns. Reading Tutor analyzes the input sentence, recognizes the sentence patterns used in it, and elaborates the usage of each sentence pattern found. However, Reading Tutor is not able to recognize wrong sentence-pattern usages. In other words, even if an expression other than the past tense of a verb or the particular formal noun "の" appears in front of "あげく", Reading Tutor is not able to indicate the mistake. 9 http://lang-8.com/ During the practical sentence-pattern learning process, compared with the simple and outward sentence-pattern searching function, it is usually more important to tell the user whether the sentence she has just composed using a particular sentence pattern is correct, and where the problem lies if the answer is no. Our study differs from Reading Tutor in this respect. Here, the signal "~" is a placeholder as described above, and each signal except "~" indicates a partial expression of the whole sentence pattern. During the analytical procedure, we use Cabocha to obtain the conjugated form for each "~". Meanwhile, we create a huge table containing all the combining rules in advance, based on a sentence-pattern dictionary (Ask Shuppan, 2008), and develop a module to discover wrong usages of sentence patterns and provide feedback on correct usage based on the combination-rule table. Specifically, we follow the steps below to accomplish this task, taking "~あげく" as the specific case here. Step 1. Search the input sentence for "あげく". Step 2. Obtain the part-of-speech (POS) and conjugation information of "~", the expression in front of "あげく", using Cabocha. Step 3. Compare the POS of "~" and that in the combination-rule table. Step 4. Exit the process and present the user with the message "POS Error" if they do not match. Step 5. Compare the conjugation information of "~" and that in the combination-rule table. Step 6. Exit the process and present the user with the message "Conjugation Error" if they do not match. Figure 2 shows the seven main structures found in all sentence patterns: 1.
~○ 2. ~○~ 3. ~~○ 4. ○~△ 5. ~○~△ 6. ~○~△~□ 7. ~○~△~□~◎ The above process will be iterated for all the signals, including "○", "△", "□", and "◎", for all the other patterns in Figure 2. We have conducted a simple experiment to examine the effectiveness of our sentence-pattern processing module. Here, we extract 200 correct sample sentences, each containing at least one sentence pattern, from another Japanese sentence-pattern dictionary (Ask Shuppan, 2007). Table 2 shows the experimental results. Cases of failure have been observed, for the following reasons: 1. A delicate difference lies between the sentence-pattern dictionary and the morphological analyzer. 2. Oral expressions are used instead of the formal ones in "○", "△", "□", and "◎". 3. The sentence-pattern dictionary is non-exhaustive. 4. Normal usages are incorrectly equated to certain sentence patterns. The first three issues come from the inadequacy of the sentence-pattern dictionary, and can be addressed completely or partially by incorporating other dictionaries and complementing the current one simultaneously. The last issue indicates the case where a normal expression containing one of the four special signals ("○", "△", "□", and "◎") is misattributed to a sentence pattern. Here is an example. Input: 私は大学を卒業するまでそこで過ごしました。 (I lived there until I graduated from the college) Feedback: 「~て」を接続しなければいけません (The「する」connection must be replaced by the「~て」connection) According to the Feedback, the input sentence should be modified as "私は大学を卒業してまでそこで過ごしました", meaning I graduated from the college to live there. The modified sentence has a completely different nuance from the input sentence, which is also correct. Our future task includes figuring out strategies to address this kind of problem. Practice-question Generation Another significant difference between our system and previous studies lies in the function of providing practice questions and feedback based on the user's linguistic ability and self-assessment. Specifically, practice questions are provided to help learners improve their abilities to use a certain case particle or sentence pattern. On the other hand, feedback is given to learners to indicate their scores and what they should pay particular attention to during the practicing process. Determination of Question Form Some existing studies have mentioned the relation between the learning effect and the learning method or feedback during the process of foreign language learning. Yokoyama analyzed the effectiveness of negative feedback (NF) and presented some perceptions on the difference between explicit and implicit NF (Yokoyama, 1996). In another study, Nishitani and Matsuda explored the possibility of managing the language-anxiety level of learners (Nishitani and Matsuda, 2008). A profound survey of the above studies leads us to the following ideas: 1. Feedback is generally effective for foreign language learning. 2. Expositions tailored to a particular learner are necessary. 3. Different question forms should be provided to learners of different levels. 4. The language-anxiety element might be taken into consideration to select the most appropriate learning method. Based on the above considerations, we have developed three modules for our practice-question generation function: Character Judgement, Question-form Determination, and Feedback Generation. Character Judgement conducts a questionnaire, with each learner filling out an assessment page in the system.
Questions contained in the assessment page come from Motoda's study (Motoda, 2000), and are used to assess the user's language anxiety and feelings of self-esteem. Figure 3 shows a screen shot of the questionnaire in our web system. Average assessments from the questionnaire are used to estimate the user's character and self-perception, which will be used in the Question-form Determination module. In our system, four forms are used to provide practice questions: multiple-choice questions, fill-in-the-blank questions, true-false questions, and error-correction questions. Following the idea suggested by Yokoyama, we assign difficulty levels from 1 to 4 to each of the four forms. For example, multiple-choice questions are comparatively simple, while error-correction questions are usually difficult compared with the others. In the Question-form Determination module, the judgement on question form is carried out based mainly on the user's total accuracy so far. For example, if the learner has achieved a total accuracy of 90%, she will be given the chance to step up to the more difficult level. Conversely, the user will be forced to reduce her difficulty level to an easier question form. This is the basic policy to adjust the question form for each learner. However, there are situations where we must consider users' characters as well. For instance, if the user's language anxiety is comparatively high, we will set a stricter condition for her to raise the difficulty level. The most appropriate form will be selected for a particular user in accordance with her character and self-perception. The third module, Feedback Generation, applies the opinions of Nishitani and Matsuda on the effects of feedback, and outputs a feedback sentence according to the user's character. Extraction of Question Source As described in Section 3, we use the Google 3-gram Corpus to discover and modify wrong usages of case particles. Here we extract 3-grams from the same corpus as the source of practice questions. When the system decides to generate a practice question regarding a particular case particle according to the result of a first-time ability test, the context of the particular case particle is also employed. For example, if the user messes up the 3-gram "W1 + CP + W2", the user will receive a set of 3-grams as practice questions with similar contexts. Specifically, 3-grams of the following form are randomly extracted from the Google Corpus and used to generate practice questions for "W1 + CP + W2": W1SP/W1SS + CP + W2SP/W2SS. Here, WNSP indicates the words holding the same POS as WN, and WNSS indicates the words holding the same semantic feature as WN. We use Juman 10 to extract semantic features for nouns, and Japanese Wordnet 11 to extract semantic features for verbs. On the other hand, we generate practice questions for sentence patterns from a news corpus 12. Specifically, we take the following steps to accomplish this task. Step 1. Extract the body text from the corpus. Step 2. Segment the body text into sentences. Step 3. Clip the sentences containing at least one sentence pattern. Step 4. Examine the correctness of the sentence-pattern usage with the program described in Section 4. Step 5. Change the inflected form of the verb around the special signals in a sentence pattern to another one. Step 6. Present the whole sentence, containing a blank or a wrongly inflected verb form, to the user as a practice question.
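The sketch below illustrates how the two pieces described in this section might fit together: the question form is chosen from the learner's running accuracy and anxiety level (only the 90% promotion rule is stated above; the other thresholds are our assumptions), and an extracted sentence is then rendered in that form.

```python
# Hedged sketch of question-form selection and rendering. Thresholds
# other than the 90% promotion rule are assumptions, not the system's.

FORMS = ["multiple-choice", "fill-in-the-blank", "true-false", "error-correction"]

def choose_level(level, accuracy, anxiety):
    """Promote at >= 90% total accuracy (95% for anxious learners),
    demote below an assumed 60% floor, otherwise stay."""
    promote_at = 0.95 if anxiety > 0.5 else 0.90
    if accuracy >= promote_at and level < len(FORMS) - 1:
        return level + 1
    if accuracy < 0.60 and level > 0:
        return level - 1
    return level

def render(tokens, idx, form):
    """Render a token list as a practice question on the particle at idx."""
    out = list(tokens)
    if form == "fill-in-the-blank":
        out[idx] = "___"                               # learner fills the slot
    elif form in ("true-false", "error-correction"):
        out[idx] = "を" if out[idx] == "が" else "が"   # plant a wrong particle
    return "".join(out)                                 # multiple-choice: as-is

level = choose_level(level=0, accuracy=0.92, anxiety=0.3)        # -> 1
print(FORMS[level], render(["私", "が", "行く"], 1, FORMS[level]))
```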
Compared with web text, news articles are more formal, which makes it easier to find appropriate sample sentences, while facing the risk that the extracted sentences tend to be long and thus comparatively difficult for entrance-level users. Conclusion This paper describes some work we have been doing towards the development of a Japanese learning system. The principal difference between this work and previous studies lies in the linguistic ability structure we have defined, and in the idea that each learner is able to obtain his or her own linguistic-ability evaluation and customized learning contents. We have implemented three modules so far to help users with their usage of case particles and sentence grammars. Some evaluations have shown the effectiveness of our strategies. Figure 6 is a screen shot of our web system. However, as elaborated in Sections 4 and 5, we still have ways to improve the method and obtain better results. Also, some ongoing modules, including those for Kanji characters, vocabulary and honorific expressions, are to be finished as soon as possible. What matters most of all is a questionnaire targeted toward JSL learners to examine the learning effectiveness for them with the help of our web application. Fig. 1. The dependency relation analysis. Fig. 2. Main structures of sentence patterns (seven kinds in all). Fig. 3. Screen shot of the questionnaire. Fig. 4. Screen shot of the multiple-choice practice questions. Fig. 5. Screen shot of the true-false questions. Fig. 6. Screen shot of our web interface. Table 1. Experimental results for case particles. Table 2. Experimental results for sentence patterns: recognized sentence patterns 328 (100%), correctly recognized 279 (85%), wrongly recognized 49 (15%). 6 http://code.google.com/p/cabocha/ 7 http://nlp.ist.i.kyoto-u.ac.jp/index.php?KNP 8 http://www.gsk.or.jp/catalog/GSK2007-C/GSK2007C_README.utf8.txt 11 http://nlpwww.nict.go.jp/wn-ja/ 12 http://www.nichigai.co.jp/sales/mainichi/mainichidata.html Acknowledgments This research is supported in part by JSPS Grant-in-Aid for Young Scientists (B) Grant Number 24700914. Ask Shuppan. 2008. "Ikita Reibun De Manabu Nihongo Hyougen Bunkei Jiten". Japan. (in Japanese) Ask Shuppan. 2007. "Donna Toki Dou Tsukawu Nihongo Hyougen Bunkei Jiten". Japan. (in Japanese) Fujita, S., Lin, C., and Narita, S. 2001. "An Instruction System of Hand-writing Chinese Character for Non-Japanese". Journal of Japan Society for Educational Technology, Vol. 25, No. 2, pp. 129-138. (in Japanese) Gao, J., Takahashi, I., Kuroiwa, J., Odaka, T., and Ogura, H. 2005. "The Feature Extracted for Evaluating Japanese-Learners' Composition in China". IEICE Trans., Vol. J88-D-I, No. 4, pp. 882-890. (in Japanese)
Han, D., and Song, X. 2011. "Japanese Sentence Pattern Learning with the Use of Illustrative Examples Extracted from the Web". IEEJ Transactions on Electrical and Electronic Engineering, Vol. 6, No. 5, pp. 490-496. Kakegawa, D., Kanda, H., Fujioka, E., Itami, M., and Itoh, K. 2000. "Diagnostic Processing of Japanese for Computer-Assisted Second Language Learning". IEICE Trans., Vol. J83-D-I, No. 6, pp. 693-701. (in Japanese) Liu, Y., Ogata, H., Ochi, Y., and Yano, Y. 1999. "Anckle: Agent-Based Communicative Kanji Learning Environment Focusing on the Difference between Japanese and Chinese Kanji Meaning". IEICE Trans., Vol. J82-D-II, No. 10, pp. 1645-1654. (in Japanese) Motoda, S. 2000. "Measurement of Second Language Anxiety in the Target Language Environment: the Japanese Language Anxiety Scale - Test construction, Reliability, and Validity". Japanese Journal of Educational Psychology, Vol. 48, pp. 422-432. (in Japanese) Nakano, T., and Tomiura, Y. 2011. "Relationship between Errors and Corrections in Verb Selection: Basic Research for Composition Support". Journal of Natural Language Processing, Vol. 18, No. 1, pp. 3-29. (in Japanese) Nishitani, M., and Matsuda, T. 2008. "Providing feedback to manage foreign language learners' anxiety level". Center for Student Exchange Journal (Hitotsubashi University), 11, pp. 35-46. (in Japanese) Suwa, I., Takahashi, I., Kuroiwa, J., Odaka, T., and Ogura, H. 2006. "A Support System of Understanding Katakana Loan Words for Learners of Japanese". IEICE Trans., Vol. J89-D, No. 4, pp. 797-806. (in Japanese) Yokoyama. 1996. "Daini Gengo Gakushu Ni Okeru Negative Feedback No Yakuwari: Gaikan". http://teapot.lib.ocha.ac.jp/ocha/bitstream/10083/5020 (in Japanese) Zhang, X., Takahashi, I., Kuroiwa, J., Odaka, T., and Ogura, H. 2006. "A System of Supporting Japanese Input and Japanese Learning for Foreign Students". IEICE Trans., Vol. J89-D, No. 12, pp. 2734-2743. (in Japanese)
6,402,709
SEGMENTING A SENTENCE INTO MORPHEMES USING STATISTIC INFORMATION BETWEEN WORDS
This paper is on dividing non-separated language sentences (whose words are not separated from each other with a space or other separators) into morphemes using statistical information, not the grammatical information which is often used in NLP. In this paper we describe our method and experimental results on Japanese and Chinese sentences. As will be seen in the body of this paper, the results show that this system is efficient for most of the sentences.
[ 5923203 ]
SEGMENTING A SENTENCE INTO MORPHEMES USING STATISTIC INFORMATION BETWEEN WORDS. Shiho Nobesawa, Junya Tsutsumi, Tomoaki Nitta, Kotaro Ono, Sun Da Jiang, Masakazu Nakanishi (Nakanishi Lab, Faculty of Science and Technology, Keio University). This paper is on dividing non-separated language sentences (whose words are not separated from each other with a space or other separators) into morphemes using statistical information, not the grammatical information which is often used in NLP. In this paper we describe our method and experimental results on Japanese and Chinese sentences. As will be seen in the body of this paper, the results show that this system is efficient for most of the sentences. 1 INTRODUCTION AND MOTIVATION An English sentence has several words and those words are separated with a space, so it is easy to divide an English sentence into words. However, a Japanese sentence needs parsing if you want to pick up the words in the sentence. This paper is on dividing non-separated language sentences into words (morphemes) without using any grammatical information. Instead, this system uses statistic information between morphemes to select the best ways of segmenting sentences in non-separated languages. Thinking about segmenting a sentence into pieces, it is not very hard to divide a sentence using a certain dictionary for that. The problem is how to decide which 'segmentation' the best answer is. For example, there must be several ways of segmenting a Japanese sentence written in Hiragana (Japanese alphabet). Maybe a lot more than 'several'. So, to make the segmenting system useful, we have to consider how to pick up the right segmented sentences from all the possible seems-like-segmented sentences. This system uses statistical information between morphemes to see how 'sentence-like' (how 'likely' to happen as a sentence) the segmented string is. To get the statistical association between words, mutual information (MI) comes to be one of the most interesting methods. In this paper MI is used to calculate the relationship between words found in the given sentence. A corpus of sentences is used to gain the MI. To implement this method, we implemented a system MSS (Morphological Segmentation using Statistical information). What MSS does is to find the best way of segmenting a non-separated language sentence into morphemes without depending on grammatical information. We can apply this system to many languages. 2 MORPHOLOGICAL ANALYSIS What a Morphological Analysis Is A morpheme is the smallest unit of a string of characters which has a certain linguistic meaning itself. It includes both content words and function words. In this paper the definition of a morpheme is a string of characters which is looked up in the dictionary. Morphological analysis is to: 1) recognize the smallest units making up the given sentence; if the sentence is of a non-separated language, divide the sentence into morphemes (automatic segmentation); and 2) check the morphemes to see whether they are the right units to make up the sentence.
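As a small illustration of the automatic segmentation step defined above, the sketch below enumerates every way to split an unsegmented string into dictionary entries. The toy dictionary is ours; a real system would also attach the Kanji candidates to each Hiragana heading.

```python
# Sketch: enumerate every split of an unsegmented string into dictionary
# entries (toy dictionary; illustrative, not the MSS code).

def segmentations(s, dictionary):
    """Yield every split of s into a sequence of dictionary words."""
    if not s:
        yield []
        return
    for i in range(1, len(s) + 1):
        head = s[:i]
        if head in dictionary:
            for rest in segmentations(s[i:], dictionary):
                yield [head] + rest

for seg in segmentations("isheold", {"i", "is", "he", "she", "old"}):
    print(seg)   # ['i', 'she', 'old'] and ['is', 'he', 'old']
```

The two outputs show the ambiguity problem discussed below: both splits are made only of dictionary words, and the system still has to decide which one is the 'correct' sentence.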
Segmenting Methods We have some ways to segment a non-separated sentence into meaningful morphemes. The three methods explained below are the most popular ones to segment Japanese sentences. • The longest-segment method: Read the given sentence from left to right and cut it with the longest possible segment. For example, if we get 'isheold' we first look for segments which use the first few letters in it, 'i' and 'is'. It is obvious that 'is' is longer than 'i', so the system takes 'is' as the segment. Then it tries the same method to find the segments in 'heold' and finds 'he' and 'old'. • The least-bunsetsu segmenting method: Get all the possible segmentations of the input sentence and choose the segmentation(s) which has the least bunsetsu in it. This method is for segmenting Japanese sentences, which have content words and function words together in one bunsetsu most of the time. This method helps not to cut a sentence into too small meaningless pieces. • The letter-type segmenting method: In the Japanese language we have three kinds of letters, called Hiragana, Katakana and Kanji. This method divides a Japanese sentence into meaningful segments by checking the type of letters. The Necessity of Morphological Analysis When we translate an English sentence into another language, the easiest way is to change the words in the sentence into the corresponding words in the target language. It is not a very hard job. All we have to do is to look up the words in the dictionary. However, when it comes to a non-separated language, it is not as simple. A non-separated language does not show the segments included in a sentence. For example, a Japanese sentence does not have any space between words. A Japanese-speaking person can divide a Japanese sentence into words very easily; however, without any knowledge of Japanese it is impossible. When we want a machine to translate a non-separated language into another language, first we need to segment the given sentence into words. Japanese is not the only language which needs morphological segmentation. For example, Chinese and Korean are non-separated too. We can apply this MSS system to those languages too, with very simple preparation. We do not have to change the system, just prepare the corpus for the purpose. Problems of Morphological Analysis The biggest problems in the segmentation of a non-separated language sentence are ambiguity and unknown words. Those sentences are all made of the same strings but the included morphemes are different. With different segments a sentence can have several meanings. Japanese has three types of letters: Hiragana, Katakana and Kanji. Hiragana and Katakana are both phonetic symbols, and each Kanji letter has its own meanings. We can put several Kanji letters to one Hiragana word. This makes morphological analysis of a Japanese sentence very difficult. A Japanese sentence can have more than one morphological segmentation and it is not easy to figure out which one makes sense. Even two or more segmentations can be 'correct' for one sentence. To get the right segmentation of a sentence one may need not only morphological analysis but also semantic analysis or grammatical parsing. In this paper no grammatical information is used, and MI between morphemes becomes the key to solve this problem. To deal with unknown words is a big problem in natural language processing (NLP) too.
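A sketch of the longest-segment method on the 'isheold' example above (dictionary and names are illustrative):

```python
# Sketch of the greedy longest-segment method described above.

def longest_segment(s, dictionary):
    """Greedy left-to-right segmentation; returns None on a dead end."""
    result, pos = [], 0
    while pos < len(s):
        match = None
        for end in range(len(s), pos, -1):    # try the longest cut first
            if s[pos:end] in dictionary:
                match = s[pos:end]
                break
        if match is None:
            return None                        # unknown substring
        result.append(match)
        pos += len(match)
    return result

print(longest_segment("isheold", {"i", "is", "he", "she", "old"}))
# -> ['is', 'he', 'old']: 'is' beats 'i', exactly as in the example above
```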
3 CALCULATING THE SCORES OF SENTENCES
3.1 Scores of Sentences
When the system searches for ways to divide a sentence into morphemes, more than one segmentation comes out most of the time. What we want is one (or more) 'correct' segmentation and we do not need any other possibilities. If there are many ways of segmenting, we need to select the best one of them. For that purpose the system introduces the 'scores of sentences'.
3.2 Mutual Information
Mutual information (MI) [1][2][3] is the information of the association of several things. When it comes to NLP, MI is used to see the relationship between two (or more) certain words. The expression below shows the definition of the MI for NLP:

MI(w1; w2) = log( P(w1, w2) / (P(w1) P(w2)) )   (1)

wi : a word
P(wi) : the probability that wi appears in a corpus
P(w1, w2) : the probability that w1 and w2 come out together in a corpus

This expression means that when w1 and w2 have a strong association between them, P(w1)P(w2) << P(w1, w2), i.e. MI(w1, w2) >> 0. When w1 and w2 do not have any special association, P(w1)P(w2) ≈ P(w1, w2), i.e. MI(w1, w2) ≈ 0. And when w1 and w2 come out together very rarely, P(w1)P(w2) >> P(w1, w2), i.e. MI(w1, w2) << 0.
3.3 Calculating the Score of a Sentence
Using the words in the given dictionary, it is easy to make up a 'sentence'. However, it is hard to judge whether the 'sentence' is a correct one or not. The meaning of 'correct sentence' is a sentence which makes sense. For example, 'I am Tom.' can make sense; however, 'Green the adzabak are the a ran four.' is hardly taken as a meaningful sentence. The score is to show how 'sentence-like' the given string of morphemes is. Segmenting a non-separated language sentence, we often get a lot of meaningless strings of morphemes. To pick up seems-like-meaningful strings from the segmentations, we use MI. Actually what we use in the calculation is not the real MI described in section 3.2. The MI expression in section 3.2 introduced bigrams. A bigram is the possibility of having two certain words together in a corpus, as you see in expression (1). Instead of the bigram we use a new method named d-bigram here in this paper [3].
D-bigram
The ideas of bigrams and trigrams are often used in studies on NLP. A bigram is the information of the association between two certain words and a trigram is the information among three. We use a new idea named d-bigram in this paper [3]. A d-bigram is the possibility that two words w1 and w2 come out together at a distance of d words in a corpus. For example, if we get 'he is Tom' as an input sentence, we have three d-bigram data: ('he' 'is' 1), ('is' 'Tom' 1), ('he' 'Tom' 2). ('he' 'is' 1) means the information of the association of the two words 'he' and 'is' appearing at a distance of 1 word in the corpus.
3.4 Calculation
The expression to calculate the score between two words is [3]:

MI_d(w1, w2, d) = log( P(w1, w2, d) / (P(w1) P(w2)) )   (2)

wi : a word
d : the distance of the two words w1 and w2
P(wi) : the probability that the word wi appears in the corpus
P(w1, w2, d) : the probability that w1 and w2 come out d words away from each other in the corpus

As the value of MI_d gets bigger, the stronger the association of those words is. The score of a sentence is calculated with these MI_d data. The definition of the sentence score is [1]:

S(W) = Σ_{i=0..n} Σ_{d=1..m} MI_d(w_i, w_{i+d}, d) / d²   (3)

W : a sentence
w_i : the i-th morpheme in the sentence W
n : the number of words in the sentence W
m : the distance limit

This expression (3) calculates the score with the algorithm below: 1) Calculate MI_d of every pair of words included in the given sentence. 2) Give a certain weight according to the distance d to all those MI_d. 3) Sum up those MI_d. The sum is the score of the sentence. Church and Hanks said in their paper [1] that the information between two remote words has less meaning in a sentence when it comes to semantic analysis. According to this idea we put d² in the expression so that a nearer pair can be more effective in calculating the score of the sentence. (A minimal code sketch of this scoring is given below.)
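The following is a minimal Python sketch of expressions (2) and (3), assuming counts collected from a segmented corpus; the probability estimates, the fixed minus score for unseen pairs (per the treatment of unknown associations in section 5.3), and all names are illustrative assumptions rather than the original MSS implementation.

import math
from collections import Counter

def train(corpus, m=5):
    # corpus: list of segmented sentences (lists of morphemes)
    uni, dbi, total = Counter(), Counter(), 0
    for sent in corpus:
        total += len(sent)
        uni.update(sent)
        for i, w1 in enumerate(sent):
            for d in range(1, m + 1):
                if i + d < len(sent):
                    dbi[(w1, sent[i + d], d)] += 1
    return uni, dbi, total

def mi_d(w1, w2, d, uni, dbi, total, floor=-10.0):
    # Expression (2); unseen d-bigrams get a fixed minus score.
    if dbi[(w1, w2, d)] == 0 or uni[w1] == 0 or uni[w2] == 0:
        return floor
    p12 = dbi[(w1, w2, d)] / total
    return math.log(p12 / ((uni[w1] / total) * (uni[w2] / total)))

def score(sentence, uni, dbi, total, m=5):
    # Expression (3): sum MI_d over all pairs, weighted by 1/d^2
    # so that nearer pairs contribute more.
    s = 0.0
    for i, w1 in enumerate(sentence):
        for d in range(1, m + 1):
            if i + d < len(sentence):
                s += mi_d(w1, sentence[i + d], d, uni, dbi, total) / d**2
    return s

Under this scheme a candidate segmentation whose morphemes co-occur in the corpus accumulates large positive terms, while a segmentation full of unseen pairs accumulates the minus floor, one term per pair, which favours segmentations with fewer segments.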
4 THE SYSTEM MSS
4.1 Overview
MSS takes a Hiragana sentence as its input. First, MSS picks up the morphemes found in the given sentence by checking the dictionary. The system reads the sentence from left to right, cutting out every possibility. Each segment of the sentence is looked up in the dictionary and if it is found in the dictionary the system recognizes the segment as a morpheme. Those morphemes are replaced by their corresponding Kanji (or Hiragana, Katakana or mixed) morpheme(s). As told in section 2.4, a Hiragana morpheme can have several corresponding Kanji (or other lettered) morphemes. In that case all the segments corresponding to the found Hiragana morpheme are memorized as morphemes found in the sentence. All the found morphemes are numbered by their position in the sentence. After picking up all the morphemes in the sentence, the system tries to put them together and brings them back up to sentences (Table 1). Then the system compares those sentences made up with the found morphemes and sees which one is the most 'sentence-like'. For that purpose this system calculates the score of likelihood of each sentence (section 3.4).
4.2 The Corpus
A corpus is a set of sentences. These sentences are of the target language. For example, when we apply this system to Japanese morphological analysis we need a corpus of Japanese sentences which are already segmented. The corpus prepared for this paper is the translation of English textbooks for Japanese junior high school students. The reason why we selected junior high school textbooks is that the sentences in the textbooks are simple and do not include too many words. This is a good environment for evaluating this system.
4.3 The Dictionary
The dictionary for MSS is made of two parts. One is the heading words and the other is the morphemes corresponding to the headings. There may be more than one morpheme attached to one heading word. The second part, which has the morphemes, is of type list, so that it can hold several morphemes.
5 RESULTS
We implement MSS on all input sentences and get the score of each segmentation. After getting the list of segmentations, we look for the 'correct' segmented sentence and see where in the list the right one is. The data shows the scores the 'correct' segmentations got (Table 2).
5.1 Experiment in Japanese
According to the experimental results (Table 2), it is obvious that MSS is very useful. Table 2 shows that most of the sentences, no matter whether the sentences are in the corpus or not, are segmented correctly. We find the right segmentation getting the best score in the list of possible segmentations. α is the data when the input sentences are in the corpus. That is, all the 'correct' morphemes have associations between each other. That has a strong effect in calculating the scores of sentences. The condition is almost the same for β and γ. Though the sentence has one word replaced, all other words in the sentence have relationships between them. The sentences in γ include one word which is not in the corpus, but still the 'correct' sentence can get the best score among the possibilities. We can say that the data α, β and γ are very successful. However, we should remember that not all the sentences in the given corpus would get the best score through the list.
MSS does not check the corpus itself when it calculates the score; it just uses the MI_d, the essential information of the corpus. That is, whether the input sentence is written in the corpus or not does not directly affect the calculation of scores. However, since MSS uses MI_d to calculate the scores, the fact that every two morphemes in the sentence have a connection between them raises the score. When it comes to sentences which are not in the corpus themselves, the ratio at which the 'correct' sentence gets the best score goes down (see Table 2, data δ, ε). The sentences of δ and ε are not found in the corpus. Even some sentences which are of spoken language and not grammatically correct are included in the input sentences. It can be said that those δ and ε sentences are nearer to the real world of the Japanese language. For δ sentences we used only morphemes which are in the corpus. That means that all the morphemes used in the δ sentences have their own MI_d. And ε sentences have both morphemes in the corpus and ones not in the corpus. The morphemes which are not in the corpus do not have any MI_d. Table 2 shows that MSS gets quite a good result even though the input sentences are not in the corpus. MSS does not take the necessary information directly from the corpus; it uses the MI_d instead. This method makes the information generalized, and this is the reason why δ and ε can get good results too. MI_d comes to be the key to using the effect of the MI between morphemes indirectly, so that we can put the information of the association between morphemes to practical use. This is what we expected, and MSS works successfully on this point.
5.2 The Corpus
In this paper we used the translation of English textbooks for Japanese junior high school students. Primary textbooks are kind of a closed world which has limited words in it, and the included sentences are mostly in some fixed styles, in good grammar. The corpus we used in this paper has about 630 sentences which have the three types of Japanese letters all mixed. This corpus is too small to take as a model of the real world; however, for this paper it is big enough. Actually, the results of this paper show that this system works efficiently even though the corpus is small. The dictionary and the statistical information are obtained from the given corpus. So, the experimental result totally depends on the corpus. That is, by selecting which corpus to use, we can apply this system to many purposes (section 5.5).
5.3 Comparison with Other Methods
It is not easy to compare this system with other segmenting methods. We compare it with the least-bunsetsu method here in this paper. The least-bunsetsu method segments the given sentences into morphemes and finds the segmentations with the least bunsetsu. This method makes all the segmentations first and selects the seems-like-best segmentations. This is the same way MSS works. The difference is that the least-bunsetsu method checks the number of bunsetsu instead of calculating the scores of sentences. Let us think about implementing a sentence whose morphemes are not in the corpus (though registered in the dictionary). That means that the morphemes do not have any statistical information between them. In this situation MSS cannot use statistical information to get the scores. Of course MSS calculates the scores of sentences according to the statistical information between the given morphemes; however, all the MI_d say that there is no association between the morphemes.
When there is no possibility that the two morphemes appear together in the corpus, we give a minus score as the MI_d value; so, as a result, with more morphemes the score of the sentence gets lower. That is, the segmentation which has fewer segments in it gets better scores. Now compare this with the least-bunsetsu method. Using MSS, the least-morpheme segmentations are selected as the good answer. That is the same way the least-bunsetsu method selects the best one. This means that MSS and the least-bunsetsu method have the same efficiency when it comes to sentences whose morphemes are not in the corpus. It is obvious that when the sentence has morphemes in the corpus, the efficiency of this system gets much higher (Table 2). Now it is shown that MSS is, at least, as efficient as the least-bunsetsu method, no matter what sentence it takes. We show data which describes this (Table 3). Table 3 is a good example of the case when the input sentence has few morphemes which are in the corpus. This data shows that in this situation there is an outstanding relation between the number of morphemes and the scores of the segmented sentences. This example (Table 3) has an ambiguity in how to segment the sentence using the registered morphemes, and all the morphemes which cause the ambiguity are not in the given corpus. Those morphemes not in the corpus do not have any statistical information between them, and we have no way to select which is better. So, the scores of sentences are up to the length of the segmented sentence, that is, the number of morphemes the sentence has. The segmented sentence which has the least segments gets the best score, since MSS gives a minus score for unknown associations between morphemes. That means that with more segments in the sentence the score gets lower. This situation resembles the way the least-bunsetsu method selects the answer.
5.4 Experiment in Chinese
The theme of this paper is to segment non-separated language sentences into morphemes. In this paper we described the segmentation of Japanese non-segmented sentences only, but we are working on Chinese sentences too. This MSS is not for Japanese only. It can be used for other non-separated languages too. To implement it for other languages, we just need to prepare the corpus for that and make up the dictionary from it. Here is an example of implementing MSS for the Chinese language (Table 4). The input is a string of characters which shows the pronunciations of a Chinese sentence. MSS changes it into Chinese character sentences, segmenting the given string.
5.5 Changing the Corpus
To implement this MSS system, we only need a corpus. The dictionary is made from the corpus. This gives the MSS system a lot of usages and possibilities. Most NLP systems need grammatical information, and it is very hard to make up a certain grammatical rule set to use in an NLP system. The corpus MSS needs is very easy to get. As described in the previous section, a corpus is a set of real sentences. We can use MSS for other languages or for other purposes just by getting a certain corpus for that and making up a dictionary from the corpus. That is, MSS is available for many purposes with very simple, easy preparation.
6 CONCLUSION
This paper shows that this automatic segmenting system MSS is quite efficient for the segmentation of non-separated language sentences. MSS does not use any grammatical information to divide input sentences.
Instead, MSS uses MI between the morphemes included in the input sentence to select the best segmentation(s) from all the possibilities. According to the results of the experiments, MSS can segment almost all the sentences 'correctly'. This is a remarkable result. When it comes to sentences which are not in the corpus, the ratio of selecting the right segmentation as the best answer gets a little bit lower; however, the result is still considerably good. The result shows that using MI_d between morphemes is a very effective method of selecting 'correct' sentences, and this means a lot in NLP.

Table 1: MSS example.

Table 2: Experiment in Japanese.
Corpus: about 630 Japanese sentences (with three kinds of letters mixed); dictionary: about 1500 heading words (includes morphemes not in the corpus); input: non-segmented Japanese sentences using Hiragana only, about 100 each; distance limit: 5.
                                                                      best   2nd best   3rd best
α  the very sentences in the corpus                                    99%     100%       100%
β  replaced one morpheme (the buried morpheme is in the corpus)       100%     100%       100%
γ  replaced one morpheme (the buried morpheme is not in the corpus)   100%     100%       100%
δ  sentences not in the corpus (the morphemes are all in the corpus)   95%      98%        98%
ε  sentences not in the corpus (include morphemes not in the corpus)   80%      90%        95%

Table 3: MSS and the least-bunsetsu method.
Input: a non-segmented Japanese Hiragana sentence not in the corpus, "sumomo mo momo mo momo no uchi"; all unknown morphemes in the sentence are registered in the dictionary (some morphemes, e.g. "no", are in the corpus, while others, e.g. "uchi", "sumo" and "sumomo", are not).
the number of the morphemes:         6      7      8       9       10
the scores of the sentences:       -65.0  -79.6  -94.3  -108.9  -123.5
the number of segmented sentences:   5     20     21      8        1
The 'correct' segmentation is marked with ★ in the original table; both MSS and the least-bunsetsu method select the segmentation marked with ○.
[1] Kenneth Church, William Gale, Patrick Hanks, and Donald Hindle. Parsing, Word Associations and Typical Predicate-Argument Relations. International Parsing Workshop, 1989.
[2] Frank Smadja. How to compile a bilingual collocational lexicon automatically. Statistically-based Natural Language Programming Techniques, pages 57-63, 1992.
[3] Junya Tsutsumi, Tomoaki Nitta, Kotaro Ono, and Shiho Nobesawa. A Multi-Lingual Translation System Based on A Statistical Model (written in Japanese). JSAI Technical report, SIG-PPAI-9302-2, pages 7-12, 1993.
[4] David M. Magerman and Mitchell P. Marcus. Parsing a Natural Language Using Mutual Information Statistics. AAAI, 1990.
[5] P. Brown, J. Cocke, S. Della Pietra, V. Della Pietra, F. Jelinek, R. Mercer, and P. Roossin. A Statistical Approach to Language Translation. Proc. of COLING-88, pages 71-76, 1988.
239,890,007
Annotation model and corpus for opinion detection in economic and financial narratives
Specialized press and professional information channels influence beliefs on the economic outlook or prospects for financial markets by drawing attention to particular events and disseminating domain expert opinions. Analyzing this textual data allows for a better understanding of investors' beliefs and the detection of key indicators for market dynamics.
[ 166150816, 13745905, 227231731, 174801222 ]
Annotation model and corpus for opinion detection in economic and financial narratives
Jiahui Hu* Patrick Paroubek† Dirk Schumacher
*jiahui.hu@student-cs.fr †pap@limsi.fr
Natixis CIB, France; §LISN, CNRS, Paris-Saclay University, Bât 507, Rue du Belvedère, 91400 Orsay, France
Annotation model and corpus for opinion detection in economic and financial narratives
Specialized press and professional information channels influence beliefs on the economic outlook or prospects for financial markets by drawing attention to particular events and disseminating domain expert opinions. Analyzing this textual data allows for a better understanding of investors' beliefs and the detection of key indicators for market dynamics. Though considerable efforts have been made to develop data-hungry algorithms for coarse-grained sentiment analysis of finance-related social media messages, performing fine-grained, target-dependent opinion analysis on documents written by domain experts and journalists is still a relatively unexploited field. Since some narratives are essentially made of opinions/emotions expressed about economy and finance concepts, we address fine-grained detection of these linguistic markers at an intra-sentential level. We propose, in this paper, a global model extracting from texts terms that are specific to finance and economy or that express an opinion/emotion, in order to address the challenges of the domain-specific language we face: (1) opinions and facts about a given factor may appear at different locations, (2) the range of domain-specific concepts is large and opinions may be explicit or implicit, (3) syntactic structures and rhetorical relations often carry useful information for detecting market change indicators, and (4) emotions, like panic, also need to be detected since they are part of the economic and financial market cycle. The proposed model consists of the incorporation of fundamental approaches in natural language processing, language evaluation theory (appraisal theory), and machine learning methods for information extraction and data annotation. In this paper, we present our annotation model and report on experiments to evaluate the quality of our dataset.
1 Introduction
The processing of information is crucial in determining financial assets' prices. Thus, market participants' opinions can be an essential driver of price dynamics. Recent progress of NLP technologies and access to digitized texts have facilitated automatic sentiment analysis of financial narratives. Most existing corpora for opinion analysis applied to economy and finance focus on sentence-level polarity (Malo et al., 2013) or text-level polarity (Cortis et al., 2017), leaving aside opinion targets. Barbaglia et al. (2020) created a corpus focusing on the polarity of six macroeconomic aggregates, but it is not publicly available as of the time of writing. The corpus of FiQA task 1 [1] for fine-grained opinion analysis of news headlines and tweets is relatively small (1,313 samples) for training supervised learning models, and it contains relatively short sentences. In the texts we will analyze, the sentences are generally longer. Therefore, we introduce a corpus consisting of labels annotated at the intra-sentential level by humans and algorithms to fill this gap.
The novelty of our corpus is that we also consider specific rhetorical modes the way financial experts do. Each sentence contains the following annotations: (1) terminologies in economics and finance, (2) opinion and emotion expressions (OEE), (3) named entities, (4) negation patterns and (5) the pair (target, polarity).
2 Methodology
2.1 Dataset
Our corpus is collected from five reputable sources from 1986 to 2021 (see Table 4 for the size and sources of the raw dataset, and Figure 4 for the corresponding time range). These texts aim at communicating, discussing, or commenting on business and economic activities. On the one hand, contents issued by the central banks [2] and corporates (MD&A of 10-K filings [3] (Ewens, 2019) and transcripts of earnings calls) are first-hand information that is essential for financial markets. On the other hand, when it comes to news articles [4] and tweets, outsider comments on these official contents reflect how financial participants evaluate these events; the popularity of certain narratives in the media also sheds some light on the main drivers of market dynamics.
2.2 Annotation scheme
Our primary objective is to label all pairs of opinion expressions and their corresponding targets, i.e. (target, opinion), at the intra-sentential level. The opinion is classified into three polarities: positive, negative and non-committal; polarities are attributed based on judgement related to economic norms or the health of business activities. The novelty of our scheme is that we consider how financial experts communicate and analyze the evolution of event trends, namely: a. formulations of argumentative constructions, b. conditional opinions, c. cause-effect relations, and d. explicit speculation about the future. Our raw dataset contains both facts and opinions. In TBOA, we focus on sentences of interest that contain at least one terminology in economy and finance (called TOI, terms of interest) and at least one opinion & emotion expression (called OEE). This criterion helps us to separate opinionated sentences from factual ones, because we can use existing NLP technologies to extract appraisal terms (see 2.3) and TOIs. TOIs are extracted as follows: we first extract candidate noun phrases [5] and keep just those containing elements of a domain-specific thesaurus. To extract appraisal terms, we look for exact matches between the lemmatized words of each sentence and a pre-defined list of appraisal terms. The pre-annotation pipeline also includes the machine-assisted annotation of named entities and negation patterns (a minimal code sketch of this pre-annotation step is given below).
2.3 How do we detect opinionated sentences
The particularity of opinionated sentences in financial narratives is that authors use evaluative language to monitor and judge an event (i.e. happenings or changes of a business or economic activity) or assess its impact. Authors may: (1) monitor changes by using language to describe in which direction an event or a concept evolves (a), (2) express a judgment about these dynamics by clarifying their preference, where their expectations can be diversely grounded in a mix of rationality and/or emotions, and (3) assess the intensity of these dynamics.
(a) In the DOWN & LOW category, plummet and decrease convey notions of rapid and median scaling, respectively.
These elements converge toward the theoretical research on the language of evaluation. We have chosen appraisal theory because it provides meaning-making resources to assess the intensity (called Graduation) or the direction of attitudinal expressions (called Attitude, i.e. affect, judgment and appreciation) and its author's commitment (called Engagement).
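Referring back to the pre-annotation pipeline of section 2.2, here is a minimal sketch of the TOI and appraisal-term extraction, assuming spaCy for noun-phrase chunking; the thesaurus and appraisal lexicon passed in are illustrative placeholders, not the resources used for the actual corpus.

import spacy

nlp = spacy.load("en_core_web_sm")

def preannotate(sentence, thesaurus, appraisal_lemmas):
    # Returns (TOIs, OEE candidates) for one sentence.
    doc = nlp(sentence)
    # TOIs: candidate noun phrases kept only if they contain a thesaurus term.
    tois = [chunk.text for chunk in doc.noun_chunks
            if any(tok.lemma_.lower() in thesaurus for tok in chunk)]
    # Appraisal terms: exact matches between lemmas and a pre-defined list.
    oees = [tok.text for tok in doc if tok.lemma_.lower() in appraisal_lemmas]
    return tois, oees

# A sentence is 'of interest' if it has at least one TOI and one OEE.
tois, oees = preannotate("The sovereign bond market remains dysfunctional.",
                         {"bond", "market"}, {"dysfunctional"})
print(tois, oees)  # ['The sovereign bond market'] ['dysfunctional']

In this sketch the sentence-of-interest filter is simply the conjunction of the two lists being non-empty, which mirrors the criterion stated above.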
As illustrated by Figure 1, we propose three axes to regroup opinion expressions about changes in economic and financial activities.
• Variation axis: gain or loss in quantity or volume, or description of a stable state.
• Attitude axis: recognition of value or loss in value, lack of visibility, anxious awareness of an undesirable outcome; or even emotional assessments such as the intense feeling of excitement and strong desire to put ideas into practice, feelings of helplessness, or the impression of losing control of the situation.
• Graduation axis: complementary to the two previous axes: high or low intensity.
2.4 Annotations
Our corpus is annotated with an open-source annotation platform called INCEpTION (Klie et al., 2018). As exemplified in Figure 2, the annotator identifies all targets towards which opinions are expressed, and their polarities. Following the evaluation campaign DEFT 2018 (Paroubek et al., 2018), the annotator (i) selects the minimal span of the Opinion & Emotion Expression (i.e. "dysfunctional", tagged [OEE]), (ii) selects the most complete span of the target (i.e. "Sovereign bond market" in Figure 2) and attributes a polarity to it (tagged [-] in Figure 2), and (iii) then draws an unlabeled arc from [OEE] toward its corresponding target.
3 Syntactic structure of OEEs
The use of language differs from one speaker to another, depending on culture, profession, personal experience, or target audience. We assume that texts written by journalists and experts tend to use a more diverse vocabulary and syntactic structure to report facts accurately, persuade readers or polish their articles. To verify this assumption, we analyze the syntactic structure of subjective expressions in three types of texts that target different groups of people. We started from this angle because, through the analysis of syntactic structure, we aim to capture common phenomena in our corpus while detecting domain specificities of subjective expressions in financial narratives.
3.1 Corpora for comparison
The SemEval14 (Pontiki et al., 2014) corpus was created for the NLP task called Aspect-Based Sentiment Analysis (Semantic Evaluation 2014, Task 4). This corpus consists of annotations of (target, polarity) pairs in customer reviews on restaurants and laptops, separately. OEEs are extracted using the neural model of (Fan et al., 2019). The MPQA (Wiebe et al., 2005) corpus consists of texts collected from a wide range of news sources. The authors annotate expressions related to opinions, beliefs, thoughts, feelings, emotions, goals, evaluations and judgments, called internal states. They divided these into two frames: expressive subjectivity and direct subjectivity; the latter includes words for subjective speech events (such as say) and explicitly mentioned private states (such as fear). Our annotation scheme does not consider the language used to position a speaker's stance, corresponding to speech event expressions. Thus, we focus on the syntactic structure of expressive subjective elements, which are implicit evaluative expressions.
3.2 Tools
Numerous toolkits have been developed for syntactic analysis. We favoured Stanza (Qi et al., 2020), a state-of-the-art toolkit based on a neural NLP pipeline (a minimal usage sketch follows below).
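As a minimal sketch of the kind of analysis this enables, assuming the standard Stanza English pipeline (the example sentence is illustrative):

import stanza

stanza.download("en")  # first run only
nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,depparse")

doc = nlp("This fragmentation increases costs.")
for sent in doc.sentences:
    for word in sent.words:
        head = sent.words[word.head - 1].text if word.head > 0 else "ROOT"
        # Universal POS tag and incoming dependency relation for each token.
        print(word.text, word.upos, word.deprel, "<-", head)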
For our algorithm, we use the universal part-of-speech (POS) tags and the syntactic dependency trees produced by the Stanza parser, focusing on the syntactic constructions of the OEEs themselves and on the syntactic dependency relations that link them to other components of the sentence in which they are located.
3.3 Result Analysis
We observe different patterns of opinion expressions in these four corpora. In the SemEval 14 corpora, internauts tend to use unigrams (95.76% and 93.26% of all OEEs, respectively) for writing product reviews. Adjectives (adj) and verbs are the most used for commenting on restaurants and laptops, followed by a small portion of adverbs (adv) and nouns. When it comes to the news articles of the MPQA corpus, the variety of OEEs is the most diversified and balanced; we surmise that addressing implicit opinions requires more thoughtful expressions. Consequently, 72% of the OEEs are multi-grams. In our corpus ECOFIN, the top three types of OEEs are unigrams: verbs (24.8%), adj (11.7%) and nouns (7.9%), but the overall portion of unigrams (49.26%) is much smaller than in SemEval 14. Multi-word subjective expressions, such as the combinations adj & noun or adv & adj, are more frequently used in financial narratives than in online comments (see Figure 5). In particular, the combination of verbs with other classes of words (such as adverbs and adpositions, i.e. prepositions and postpositions) represents at least 10% of OEEs. This observation is in line with the fact that financial experts are more likely to express their subjective opinions around changes and events, which require verbal expressions. We further investigate which are the most used verbs and how they relate to words in other classes (see 3.4). The statistics of the five datasets in our experiments confirm our assumption (Table 1 and Figures 5, 6). Opinion expressions are domain-dependent; news articles and financial texts are more likely to employ multi-word opinion expressions composed of a wide range of word classes. Each word inside the OEE can modify the semantic orientation of another word, which complexifies the computation of the overall semantic orientation of the whole OEE.
3.4 Analysis of verbs inside OEEs
The syntactic structure and word classes of OEEs can be valuable clues for determining where their corresponding target can be found. For example, for a unigram OEE whose word class is adjective, its target is likely to be the noun that follows, because adjectives precede the noun they modify in English. Following this idea, we manually examined 30 sentences of our corpus whose OEEs are of the form verb+adp and found that most TOIs precede this type of OEE, but some exceptions can be found in OEEs with the adp "to". Similarly, targets are very likely to be announced before OEEs of the form "remain adj". Recent studies ((Huang et al., 2020), (Zhao et al., 2021)) have proposed integrating syntax-related information in graph neural networks (GNN) or using GNNs for sequence labelling by propagating the labelling information from known to unknown rules (which can be any rules, including syntactic ones). In the future, we want to study how these mechanisms can be exploited to analyze our corpus.
3.5 Analysis of dependency relations
We are also interested in the dependency relations inside each OEE of our corpus and in how each OEE is related to other words in the dependency tree. As exemplified in Figure 3, the sentence is separated into three parts: the OEE, the words above the OEE (called precedent_OEE) and those below it (called posterior_OEE); a small code sketch of this partition is given below.
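The following is a small, self-contained sketch of one way to compute this partition from a parsed sentence. The representation (1-based head indices and relation labels, as produced by Stanza) is standard, but reading "above"/"below" as head-side versus dependent-side relations of the OEE span is our assumption for illustration; the paper's exact mapping to precedent_OEE and posterior_OEE may differ.

from collections import Counter

def partition_relations(heads, deprels, oee_span):
    # heads: 1-based head index per token (0 = root); deprels: relation labels.
    # oee_span: set of 1-based token indices belonging to the OEE.
    inside, above, below = Counter(), Counter(), Counter()
    for idx, (head, rel) in enumerate(zip(heads, deprels), start=1):
        if idx in oee_span and head in oee_span:
            inside[rel] += 1   # relation fully inside the OEE
        elif idx in oee_span:
            above[rel] += 1    # OEE token governed from outside the span
        elif head in oee_span:
            below[rel] += 1    # outside token governed by an OEE token
    return inside, above, below

# "This fragmentation increases costs" with OEE = "increases costs"
heads   = [2, 3, 0, 3]
deprels = ["det", "nsubj", "root", "obj"]
print(partition_relations(heads, deprels, {3, 4}))
# -> (Counter({'obj': 1}), Counter({'root': 1}), Counter({'nsubj': 1}))

Note how, in the toy example, the "obj" relation falls inside the OEE while the "nsubj" relation attaches to it from outside, consistent with the frequency patterns discussed next.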
Inside each OEE, the most frequent dependency relations are adjective and adverb modifiers, and case-marking relations linking an adposition to the noun it attaches to. The object is the fourth most important type of relation; it is connected to a verb and conveys information about the entity that undergoes a state change (see (1) in Figure 1). For example, in Figure 3, the author's evaluative opinion toward "fragmentation" is expressed with two OEEs, highlighted in orange and purple. Inside these two OEEs, the "obj" relation indicates which financial concepts (i.e., "costs" and "(possibility of) economies of scale") are modified. When it comes to dependency relations on top of OEEs (posterior_OEE), we can find noun subjects: they can be the receiver of an action. The other most frequent dependencies of posterior_OEE and precedent_OEE can be found in Table 2.
4 Conclusion
This paper presents our annotation scheme and the technologies used for pre-annotation. The pre-annotation output allows us to identify candidates for our corpus creation and to alleviate the workload of annotators. We also compare the syntactic structure of the OEEs of our financial narrative corpus with three corpora for fine-grained sentiment analysis. This comparison underlines the diversity of subjective expressions used by journalists and financial experts and the complexity of their syntactic structure. This result exemplifies why predicting TBOA from financial narratives is challenging. It also helps us understand how financial experts and journalists express opinions and how these subjective expressions in news articles and financial narratives differ from those in online comments. In the future, we want to develop neural models adapted to our corpus by incorporating domain-specific knowledge and fundamental approaches in NLP into the neural model architecture, to augment the machine's capacity to discover meaningful patterns.
A Appendix
Figure 1: Our focus on specific aspects of texts written by financial experts.
Figure 2: Example of an annotated sentence (explicit opinion). Our corpus is annotated by one of the authors, familiar with the domain terminology. Please refer to Appendix A.1 for more examples of our annotated sentences.
Figure 3: Dependency tree of "This fragmentation increases cost and reduces the possibility of economies of scale."
Figure 5: Top 8 universal POS of the MPQA (blue) and ECOFIN (red) corpora.
Figure 6: Top 8 universal POS of the SemEval 14 corpora, Laptop (blue) and Restaurant (red).

Table 1: Statistics about the number of tokens per OEE.
Corpora     Unigram   2-3      4-5      6-10     >10
MPQA        28.16%    30.63%   16.61%   16.38%   8.17%
ECOFIN      49.26%    28.96%   12.11%   7.56%    1.63%
Laptop      94.26%    5.14%    0.23%    0.00%    0.00%
Restaurant  95.76%    3.24%    0.15%    0.00%    0.00%

Table 2: Most frequent dependency relations linking OEEs to the rest of the sentence.
                nsubj    obl     amod    advmod   obj     case
posterior_OEE   11.21%   8.56%   6.39%   5.52%    5.48%   3.61%
precedent_OEE   8.21%    7.53%   6.78%   3.91%    3.59%   9.0%

Table 3: Percentage of sentences of interest from randomly chosen sentences.
Figure 4: Shaded slashes of the column 'News' indicate that the time range of news sentences from the Financial PhraseBank dataset is incognito.
A.1 Sample sentences
Our ECOFIN corpus
(1) "The fair value of investment properties totalled EUR 2,299.9 mn, compared to EUR 2,229.5 mn in the corresponding period in 2009." [8]
(2) "This fragmentation increases costs and reduces the possibilities of economies of scale." [9]
Sent Num   target          polarity   OEE
(1)        deficit ratio   -          rise
(2)        fragmentation   -          increases costs
(2)        fragmentation   -          reduces the possibilities of economies of scale

Table 4: Our manual annotations: the pair (target, polarity) and the corresponding OEE of each sample sentence.

MPQA corpus [10]
• 'The criteria set by Rice are the following: the three countries in question are repressive and grave human rights violators, and aggressively seeking weapons of mass destruction.'
• 'certain countries resort to military power and embark on trampling upon human rights of civilians.'
• 'He explained that both the US and Jordan have different issues to deal with on a national level, including environmental issues.'

Footnotes:
[1] https://sites.google.com/view/fiqa/home
[2] link of ECB's press conferences and speeches, link of FOMC
[3] link of MD&A data source
[4] We randomly choose sentences from the Financial PhraseBank dataset (Malo et al., 2013) to apply our annotations.
[5] We use SpaCy (link), an open-source NLP toolkit, for its computation efficiency.
[8] from the Financial PhraseBank dataset
[9] source: link
[10] Expressive subjective elements have been underlined.

Acknowledgements
This work is supported by the grant CIFRE, a partnership between Natixis CIB Research and the LISN Laboratory (Interdisciplinary Laboratory of Digital Sciences).

Luca Barbaglia, Sergio Consoli, and Sebastiano Manzan. 2020. Forecasting with economic news. Available at SSRN 3698121.
Keith Cortis, André Freitas, Tobias Daudert, Manuela Huerlimann, Manel Zarrouk, Siegfried Handschuh, and Brian Davis. 2017. SemEval-2017 task 5: Fine-grained sentiment analysis on financial microblogs and news. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 519-535, Vancouver, Canada. Association for Computational Linguistics.
Michael Ewens. 2019. MD&A statements from public firms: 2002-2018.
Zhifang Fan, Zhen Wu, Xin-Yu Dai, Shujian Huang, and Jiajun Chen. 2019. Target-oriented opinion words extraction with target-fused neural sequence labeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2509-2518, Minneapolis, Minnesota. Association for Computational Linguistics.
Lianzhe Huang, Xin Sun, Sujian Li, Linhao Zhang, and Houfeng Wang. 2020. Syntax-aware graph attention network for aspect-level sentiment classification. In Proceedings of the 28th International Conference on Computational Linguistics, pages 799-810, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Jan-Christoph Klie, Michael Bugert, Beto Boullosa, Richard Eckart de Castilho, and Iryna Gurevych. 2018. The INCEpTION platform: Machine-assisted and knowledge-oriented interactive annotation. In Proceedings of the 27th International Conference on Computational Linguistics: System Demonstrations, pages 5-9. Association for Computational Linguistics.
Pekka Malo, Ankur Sinha, Pyry Takala, Pekka Korhonen, and Jyrki Wallenius. 2013. Good Debt or Bad Debt: Detecting Semantic Orientations in Economic Texts. arXiv:1307.5336 [cs, q-fin].
Patrick Paroubek, Cyril Grouin, Patrice Bellot, Vincent Claveau, Iris Eshkol-Taravella, Amel Fraisse, Agata Jackiewicz, Jihen Karoui, Laura Monceaux, and Juan-Manuel Torres-Moreno. 2018. DEFT2018 : Recherche d'information et analyse de sentiments dans des tweets concernant les transports en Île de France. In DEFT 2018 - 14ème atelier Défi Fouille de Texte, volume 2 of Actes de la conférence Traitement Automatique des Langues, TALN 2018, pages 1-11, Rennes, France.
Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. SemEval-2014 Task 4: Aspect Based Sentiment Analysis.
Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages.
Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating Expressions of Opinions and Emotions in Language. Language Resources and Evaluation, 39(2-3):165-210.
Xinyan Zhao, Haibo Ding, and Zhe Feng. 2021. GLaRA: Graph-based labeling rule augmentation for weakly supervised named entity recognition. CoRR, abs/2104.06230.
226,239,200
[]
The role of artificially generated negative data for quality estimation of machine translation
Varvara Logacheva v.logacheva@sheffield.ac.uk University of Sheffield, Sheffield, United Kingdom
Lucia Specia l.specia@sheffield.ac.uk University of Sheffield, Sheffield, United Kingdom
© 2015 The authors. This article is licensed under a Creative Commons 3.0 licence, no derivative works, attribution, CC-BY-ND.
The role of artificially generated negative data for quality estimation of machine translation
The modelling of natural language tasks using data-driven methods is often hindered by the problem of insufficient naturally occurring examples of certain linguistic constructs. The task we address in this paper - quality estimation (QE) of machine translation - suffers from a lack of negative examples at training time, i.e., examples of low quality translations. We propose various ways to artificially generate examples of translations containing errors and evaluate the influence of these examples on the performance of QE models at both the sentence and word levels.
1 Introduction
The task of classifying texts as "correct" or "incorrect" often faces the problem of unbalanced training sets: examples of the "incorrect" class can be very limited or even absent. In many cases, naturally occurring instances of these examples are rare (e.g. incoherent sentences, errors in human texts). In others, the labelling of data is a non-trivial task which requires expert knowledge. Consider the task of quality estimation (QE) of the output of machine translation (MT) systems. When performing binary classification of automatically translated sentences one should provide examples of both bad and good quality sentences. Good quality sentences can be taken from any parallel corpus of human translations, whereas there are very few corpora of sentences annotated as having low quality. These corpora need to be created by human translators, who post-edit automatic translations, mark errors in translations, or rate translations for quality. This process is slow and expensive. It is therefore desirable to devise automatic procedures to generate negative training data for QE model learning. Previous work has followed the hypothesis that machine translations can be assumed to have low quality (Gamon et al., 2005). However, this is not the case nowadays: many translations can be considered flawless. Particularly for word-level QE, it is unrealistic to presume that every single word in the MT output is incorrect. Another possibility is to use automatic quality evaluation metrics based on reference translations to provide a quality score for MT data. Metrics such as BLEU (Papineni et al., 2002), TER (Snover et al., 2006) and METEOR (Banerjee and Lavie, 2005) can be used to compare the automatic and reference translations. However, these scores can be very unreliable, especially for word-level QE, as every word that differs in form or position would be annotated as bad. Previous efforts have been made for negative data generation, including random generation of sentences from word distributions and the use of translations in low-ranked positions in n-best lists produced by statistical MT (SMT) systems. These methods are however unsuitable for QE at the word level, as they provide no information about the quality of individual words in a sentence. In this paper we adopt a different strategy: we insert errors in otherwise correct sentences.
This provides control over the proportion of errors in the negative data, as well as knowledge about the quality of individual words in the generated sentences. The goals of the research presented here are to understand the influence of artificially generated data (by various methods and in various quantities) on the performance of QE models at both the sentence and word levels, and ultimately to improve upon baseline models by extending the training data with suitable artificially created examples. In Section 2 we further review existing strategies for artificial data generation. We explain our generation strategies in Section 3. In Section 4 we describe our experiments and their results.
2 Previous work
2.1 Discriminative language modelling
One example of a task that requires low quality examples is discriminative language modelling (DLM), i.e., the classification of sentences as "good" or "bad". It was first introduced in a monolingual context within automatic speech recognition (Collins et al., 2005), and later applied to MT. While in speech recognition negative examples can be created from system outputs that differ from the reference (Bhanuprasad and Svenson, 2008), in MT there are multiple correct outputs, so negative examples need to be defined more carefully. In Okanohara (2007) the bad sentences used as negative training instances are drawn from the distribution P(w_i | w_{i-N+1}, ..., w_{i-1}): first the start symbol <s> is generated, then the next words are sampled based on the word probability given the already generated words (a minimal code sketch of this sampling is given at the end of this subsection). Other approaches to discriminative LMs use the n-best list of the MT system as training data (Li and Khudanpur, 2008). The translation variant which is closest to the oracle (e.g. has the highest BLEU score) is used as a positive example, while a variant with a high system score and a low BLEU score is used as a negative example. Such a dataset allows the classifier to reduce the differences between the model score and the actual quality score of a sentence. Li et al. (2010) simulate the generation of an n-best list using translation tables from SMT systems. By taking entries from the translation table with the same source side they create a set of alternative translations for a given target phrase. For each sentence, these are combined, generating a confusion set for this sentence.
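As referenced above, here is a minimal Python sketch of this kind of pseudo-negative sampling from an n-gram model; the bigram order, the toy probability table and the length cap are illustrative simplifications of the N-gram model used in the cited work.

import random

def sample_pseudo_negative(bigram_probs, max_len=30):
    # bigram_probs: {previous_word: {next_word: probability}}
    # Start from <s> and sample each next word given the previous one,
    # until </s> or the length cap is reached.
    sent, prev = [], "<s>"
    while len(sent) < max_len:
        nxt = random.choices(list(bigram_probs[prev].keys()),
                             weights=list(bigram_probs[prev].values()))[0]
        if nxt == "</s>":
            break
        sent.append(nxt)
        prev = nxt
    return sent

probs = {"<s>": {"the": 0.6, "a": 0.4},
         "the": {"cat": 0.5, "dog": 0.3, "</s>": 0.2},
         "a":   {"cat": 0.5, "dog": 0.5},
         "cat": {"</s>": 1.0}, "dog": {"</s>": 1.0}}
print(sample_pseudo_negative(probs))

Since such samples ignore the source sentence entirely, they illustrate exactly the target-language-only limitation discussed in Section 3.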
2.2 Quality estimation for MT
QE can be modelled as a classification task where the goal is to distinguish good from bad translations, or to provide a quality score for each translation. Therefore, examples of bad sentences or words produced by the MT system are needed. To the best of our knowledge, the only previous work on adding errors to well-formed sentences is that by Raybaud et al. (2011). In (Raybaud et al., 2011), the training data for the negative data generation process consists of a set of MT hypotheses manually post-edited by a translator. Hypotheses are aligned with the corresponding post-editions using the TERp tool (Snover et al., 2008). The alignment identifies the edit operations performed on the hypothesis in order to convert it to the post-edited version: leave word as is (no error), delete word, insert new word, substitute word with another word. Two models for the generation of error strings from a well-formed sentence are proposed. Both are based on the observed frequency of errors in the post-edited corpus and do not account for any relationships between the errors and the actual words. The bigram error model draws errors from the bigram probabilities P(C_i | C_{i-1}), where C_i is an error class. The cluster error model generates clusters of errors based on the distribution of the lengths of erroneous word sequences in the training data. Substituting words are chosen from a probability distribution defined as the product of these words' probabilities under the IBM-1 model and a 5-gram LM. A model trained only on artificial data performs slightly better than one trained on a small manually annotated corpus.
2.3 Human error correction
Another task that can benefit from artificially generated examples is language learner error correction. The input for this task is text that potentially contains errors. The goal is to find these errors, similarly to QE at the word level, and additionally correct them. While the text is written by humans, it is assumed that these are non-native speakers, who possibly translate the text from their native language. The difference is that in this task the source text is a hidden variable, whereas in MT it is observed. The strategy of adding errors to correct sentences has also been used for this task. Human errors are more intuitive to simulate, as language learners explicitly attempt to use natural language grammars. Therefore, rule-based systems can be used to model some grammar errors, particularly those affecting closed class words, e.g. determiner errors (Izumi et al., 2003) or countability errors (Brockett et al., 2006).
Rule-based systems could be more effective, but the number of rulebased systems freely available would limit the work to a small number of language pairs. A two-stage error generation method As previously discussed, existing methods that artificially generate entire sentences have drawbacks that make them difficult or impossible to use for QE. Therefore, following Raybaud et al. (2011) and previous work on human error correction, our approach is to inject errors into otherwise correct texts. This process consists of two stages: • labelling of a sentence with error tags, • insertion of the errors into that sentence. The first stage assigns an error tag to every word in a sentence. The output of this stage is the initial sentence where every word is assigned a tag denoting a type of error that needs to be incurred on this word. We use five tags corresponding to edit operations in the TERp tool: no error (OK), substitution (S), deletion (D), insertion (I) and shift (H). During the second stage the words in the sentence are changed according to their tag: substituted, deleted, shifted, or left in place if word has the tag OK. Figure 1 gives an example of the complete generation process. Error tagging of sentences We generate errors based on a corpus of postedited machine translations. We align translations and post-editions using the TERp tool (exact matching) and extract counts on the number of shifts, substitutions, insertions and deletions. TERp does not always capture the true errors, in particular, it fails to identify phrase substitutions (e.g. was → has been). However, since editors are usually asked to minimise the number of edits, translations and post-editions are often close enough and the TERp alignment provide a good proxy to the true error distribution. The TERp alignments can be used to collect the statistics on errors alone or to combine the frequency of errors with the words they are incurred on. We suggest three methods of generation of an error string for a sentence: • bigramEG: the bigram error generation that uses a bigram error model regardless of the actual words (Raybaud et al., 2011). • wordprobEG: the conditional probability of an error given a word. • crfEG: the combination of the bigram error model and error probability conditioned on a word. This generation method can be modelled with Hidden Markov Model (HMM) or conditional random fields (CRF). The first model has the advantage of keeping the distribution of errors as in the training data, because the probability distributions used depend Figure 1: Example of the two-stage artificial data generation process only on the frequency of errors themselves. The second model is more informed about which words commonly cause errors. Our implementation of the third method uses CRFs to train an error model. We use all unigrams, bigrams and trigrams that include the target word as features for training. This method is expected to produce more plausible error tags, but it can have the issue that the vocabulary we want to tag is not fully covered by the training data, so some words in the sentences to tag will be unknown to the trained model. If an unknown word needs to be tagged, it will more often be tagged with the most frequent tag, which is "Good" in our case. In order to avoid this problem we replace rare words in training set with a default string or with the word class, e.g. a POS tag. Insertion of errors We consider errors of four types: insertion, deletion, substitution and shift. 
Insertion of errors

We consider errors of four types: insertion, deletion, substitution and shift. Words marked with the 'deletion' error tag are simply removed. Shift errors require the distribution of shift distances, which is computed based on a TERp-aligned corpus. Substitutions and insertions require word insertion (WI), and the new words need to be drawn from some probability distribution. We suggest two methods for the generation of these distributions:
• unigramWI: word frequencies computed based on a large monolingual corpus.
• paraphraseWI: distributions of words that can be used instead of the current word in the translation. This computation is performed as follows: first, all possible sources of a target word are extracted from an SMT system's translation table, then all possible targets for these sources. That gives us a confusion set for each target word (see the sketch below).
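The sketch below illustrates the insertion stage under stated assumptions: build_confusion_set pivots through toy translation tables to obtain paraphraseWI-style candidates, and the shift-distance distribution is a placeholder for one estimated from TERp-aligned data.

import random

def build_confusion_set(tgt_word, t2s, s2t):
    """paraphraseWI: target word -> possible sources -> their possible targets."""
    sources = t2s.get(tgt_word, [])
    return {t for s in sources for t in s2t.get(s, []) if t != tgt_word}

def apply_errors(words, tags, confusion, vocab):
    out = []
    for w, t in zip(words, tags):
        if t == "OK":
            out.append(w)
        elif t == "S":   # substitution: a confusion-set word if any, else a random one
            out.append(random.choice(sorted(confusion.get(w, set())) or vocab))
        elif t == "I":   # insertion: keep the word and add a new one after it
            out.extend([w, random.choice(vocab)])
        elif t == "H":   # shift: mark the word and move it in a second pass
            out.append((w, "H"))
        # "D" (deletion): drop the word
    final = [w for w in out if not isinstance(w, tuple)]
    for i, item in enumerate(out):
        if isinstance(item, tuple):
            pos = min(max(0, i + random.choice([-2, -1, 1, 2])), len(final))
            final.insert(pos, item[0])
    return final

t2s = {"casa": ["house", "home"]}
s2t = {"house": ["casa", "vivienda"], "home": ["casa", "hogar"]}
confusion = {"casa": build_confusion_set("casa", t2s, s2t)}
print(apply_errors("la casa es grande".split(), ["OK", "S", "H", "OK"],
                   confusion, ["de", "la", "que"]))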
Experiments

We conducted a set of experiments to evaluate the performance of artificially generated data on different tasks of QE at the sentence and word levels.

Tools and datasets

The tools and resources required for our experiments are: a QE toolkit to build QE models, the training data for them, and the data to extract statistics for the generation of additional examples. For sentence-level QE we used the QuEst toolkit (Specia et al., 2013). It trains QE models using the sklearn 1 implementations of a Support Vector Machine (SVM) classifier (for the ternary classification task, Section 4.4) and SVM regression (for HTER prediction, Section 4.5). The word-level version of QuEst 2 was used for word-level feature extraction. Word-level classifiers were trained with CRFSuite 3. The CRF error models were trained with CRF++ 4. POS tagging was performed with TreeTagger (Schmid, 1994). Sentence-level QuEst uses 17 baseline features 5 for all tasks. Word-level QuEst reimplements the set of 30 baseline features described in (Luong et al., 2014). The QE models were built and tested based on the data provided for the WMT14 English-Spanish QE shared task (Section 4.3). The statistics on error distributions were computed using the English-Spanish part of the training data for the WMT13 shared task on QE 6. The statistics on the distributions of words, alignments and lexical probabilities were extracted from the Europarl corpus (Koehn, 2005). We trained the alignment model with FastAlign (Dyer et al., 2013) and extracted the lexical probability tables for words using the scripts for phrase table building in Moses (Koehn et al., 2007). For all the methods, errors were injected into the News Commentary corpus 7.

Generated data

Combining three methods of error generation and two methods of error insertion resulted in a total of six artificial datasets. Here we perform some analysis of the generated data. The datasets differ in the percentage of errors injected into the sentences. BigramEG datasets have 23% of edits, which matches the distribution of errors in the real data. WordprobEG datasets contain fewer errors: 17%. The crfEG models contain the lowest number of errors: 5% of the total number of words. As expected, data sparsity makes the CRF model tag the majority of the words with the most frequent tag ("Good"). Replacing rare words with a default word token or with a POS tag did not improve these statistics. We computed the perplexity of all datasets with respect to an LM trained on the Spanish part of the Europarl corpus (see Table 1). The figures match the error percentages in the data: the lower the number of errors, the more is kept from the original sentence, and thus the more natural it looks (lower perplexity). Note that sentences where errors were inserted from a general distribution (unigramWI) have lower perplexity than those generated using paraphrases. This can be because the unigramWI model tends to choose high-frequency words with lower perplexity, while the constructed paraphrases contain more noise and rare words.

Experimental setup

We evaluated the performance of the artificially generated data in three tasks: the ternary classification of sentences as "good", "almost good" or "bad", the prediction of the HTER (Snover et al., 2009) score for a sentence, and the classification of words in a sentence as "good" or "bad" (tasks 1.1, 1.2 and 2 of the WMT14 QE shared task 8, respectively). The goal of the experiments was to check whether it is possible to improve upon the baseline results by adding artificially generated examples to the training sets. The baseline models for all tasks were trained on the data provided for the corresponding shared tasks for the English-Spanish language pair. All models were tested on the official test sets provided for the corresponding shared tasks. Since we know how many errors were injected into the sentences, we know the TER scores for our artificial data. The discrete labels for the ternary classification task are defined as follows: "bad" sentences have four or more non-adjacent errors (two adjacent erroneous words are considered one error), "almost good" sentences contain one erroneous phrase (possibly of several words), and "good" sentences are error-free. The new training examples were added to the baseline datasets. We ran a number of experiments, gradually increasing the number of artificially generated sentences used. At every run, the new data was chosen randomly in order to reduce the influence of outliers. In order to make the results more stable, we ran each experiment 10 times and averaged the evaluation scores.
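Because the injected tags are known, a TER-style score and the ternary label of each artificial sentence can be derived directly from them. The following sketch encodes the label definitions given above (adjacent erroneous words count as one error); sentences with two or three error spans are not covered by those definitions, so the sketch returns None for them.

def error_spans(tags):
    """Count maximal runs of non-OK tags; adjacent errors form one span."""
    spans, in_err = 0, False
    for t in tags:
        if t != "OK":
            if not in_err:
                spans += 1
            in_err = True
        else:
            in_err = False
    return spans

def ter_score(tags):
    """Edits divided by sentence length: a TER-style proxy."""
    return sum(t != "OK" for t in tags) / max(len(tags), 1)

def ternary_label(tags):
    n = error_spans(tags)
    if n == 0:
        return "good"
    if n == 1:
        return "almost good"
    if n >= 4:
        return "bad"
    return None  # 2-3 error spans: not covered by the definitions above

tags = ["OK", "S", "S", "OK", "D", "OK", "I", "OK", "H", "OK"]
print(ter_score(tags), error_spans(tags), ternary_label(tags))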
Sentence-level ternary QE task

The original dataset for this task contains 949 "good", 2010 "almost good", and 857 "bad" sentences, whereas the test set has 600 entries: 131 "good", 333 "almost good", 136 "bad". The results were evaluated using F1-score. The addition of new "bad" sentences leads to an improvement in quality, regardless of the sentence generation method used. Models trained on datasets generated by different strategies display the same trend: adding up to 400 sentences results in a considerable increase in quality, while further addition of data only slightly improves quality. Figure 2 shows the results of the experiments; for clarity we include only the results for datasets generated with unigramWI, although paraphraseWI demonstrates a similar behaviour with slightly lower quality. The best F1-score of 0.49 is achieved by a model trained on the data generated with the crfEG error generator, an absolute improvement of 1.9% over the baseline. However, adding only negative data makes the distribution of classes in the training data less similar to that of the test set, which might affect performance negatively.

Figure 2: Ternary classification: performance of error generators

Therefore, we conducted three other sets of experiments: we added (i) an equal amount of artificial data for the "good" and "bad" classes, (ii) batches of artificial data for all classes that keep the original proportion of classes in the data, and (iii) artificial data for only the "good" class. The latter setting is tested in order to check whether the classifier benefits from negative instances, or just from having new data added to the training sets. The results are shown in Figure 3. We plot only the results for the bigramEG + unigramWI setting as it achieved the best result in absolute values, but the trends are the same for all data generation techniques. The best strategy was to add both "good" and "bad" sentences: it beats the models which use only negative examples, but after 1000 artificial sentences its performance degrades. Keeping the original distribution of classes is not beneficial for this task: it performs worse than any other tested scenario, since it decreases the F1-score for the "good" class dramatically. Overall, the additional negative training data improves ternary sentence classification. The addition of both positive and negative examples can further improve the results, while providing additional instances of the "almost good" class did not seem to be as helpful.

Figure 3: Ternary classification: artificial examples of different classes

Sentence-level HTER QE task

Figure 4 shows that the addition of any type of artificial data leads to substantial improvements in quality for this task. The results were evaluated in terms of Mean Absolute Error (MAE). The initial training dataset was very small: 896 sentences (200 sentences for test), which may explain the substantial improvements in prediction quality as new data is added. We also noticed that the performance of the generated datasets was primarily defined by the method of error generation, whereas different word choice strategies did not impact the results as much. Figure 4 depicts the results for the unigramWI word selection method only, with all error generation methods.

Figure 4: HTER regression results

The addition of data from datasets generated with crfEG gives the largest drop in MAE (from 0.161 to 0.14). This result is achieved by a model that uses 1200 artificial sentences. Further addition of new data harms performance. The data generated by the other error generators does not cause such a large improvement in quality, although it also helps reduce the error rate. As described earlier, the crfEG model generates sentences with a small number of errors. Since the use of this dataset leads to the largest improvements, we can suggest that in the HTER prediction task, using the baseline dataset only, the majority of prediction errors is found in sentences whose HTER score is low. However, the reason might also be that the distributions of scores in the baseline training and test sets are different: the test set has a lower average score (0.26 compared to 0.31 in the training set) and lower variance (0.03 versus 0.05). The use of artificial data with a small number of errors changes this distribution. We also experimented with training a model using only artificial data. The results of models trained on only 100 artificial sentences for each generation method were surprisingly good: their MAE ranged from 0.149 to 0.158 (compared to the baseline result of 0.161 on the original data). However, the further addition of new artificial sentences did not lead to improvements. Thus, despite the positive impact of the artificial data on the results, the models cannot be further improved without real training examples.
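The experimental protocol itself can be summarised in a short sketch: batches of artificial sentences are added to the baseline training set, each setting is repeated 10 times with a fresh random sample, and the scores are averaged. train_and_eval is a hypothetical stand-in for training a QE model and scoring it on the test set.

import random
import statistics

def train_and_eval(train_set, test_set):
    return random.random()  # placeholder for a real F1 / MAE computation

def incremental_experiment(baseline, artificial, test_set, batch_sizes, runs=10):
    results = {}
    for n in batch_sizes:
        scores = []
        for _ in range(runs):
            sample = random.sample(artificial, n)   # fresh random batch per run
            scores.append(train_and_eval(baseline + sample, test_set))
        results[n] = statistics.mean(scores)
    return results

artificial = [f"artificial sentence {i}" for i in range(2000)]
print(incremental_experiment(["base"] * 100, artificial, ["test"], [200, 400, 800]))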
Word-level QE task

Here we tested the impact of the artificial data on the task of classifying individual words as "good" or "bad". The baseline set contains 47,335 words, 35% of which have the tag "bad". The test set has 9,613 words with the same label distribution. All the datasets led to similar results. Overall, the addition of artificial data harms prediction performance: the F1-score goes down until 1500 sentences are added, and then levels off. The performance for all datasets is similar. However, analogously to the previous tasks, there are differences between crfEG and the other two error generation techniques: the former leads to a faster deterioration of the F1-score. No differences were observed among the word insertion techniques tested. Figure 5 shows the average weighted F1-score and the F1-scores for both classes. Since all datasets behave similarly, we show the results for two of them that demonstrate slightly different performance: crfEG+unigramWI is shown with solid blue lines, while bigramEG+unigramWI is shown with dotted red lines.

Figure 5: Word-level QE. Blue solid lines: results for crfEG; red dotted lines: bigramEG.

The use of data generated with CRF-based methods results in a slightly faster decline in performance than the use of data generated with bigramEG or wordprobEG. One possible reason is that the CRF-generated datasets have fewer errors, hence they change the original tag distribution in the training data. Therefore, test instances are tagged as "bad" less often. That explains why the F1-score of the "bad" class decreases, whereas the F1-score of the "good" class stays the same. To summarise our findings for word-level QE, the strategies of data generation proposed and tested thus far do not lead to improvements. The word-level predictions are more sensitive to individual words in training sentences, so the replacement of tokens with random words may confuse the model. Therefore, the word-level task needs more elaborate methods for substituting words.

Conclusions and future work

We presented and experimented with a set of new methods for simulating errors made by MT systems. Sentences with artificially added errors were used as training data in models that predict the quality of sentences or words. The addition of artificial data can help improve the output of sentence-level QE models, with substantial improvements in HTER score prediction and some improvements in the classification of sentences into "good", "almost good" and "bad". However, the largest improvements are related to the fact that the additional data changes the overall distribution of scores in the training set, making it more similar to the test set. On the other hand, the fact that the artificial sentences did not decrease quality in such cases shows that they can be used to counter-balance a large number of positive examples. Unlike sentence-level QE, the task of word-level QE did not benefit from the artificial data. That may relate to our choice of method for replacing words in artificial sentences. While thus far we have analysed the usefulness of artificial data for the QE task only, it would be interesting to check if this data can also improve the performance of discriminative LMs.
1 http://scikit-learn.org/
2 http://github.com/ghpaetzold/quest
3 http://www.chokkan.org/software/crfsuite/
4 https://code.google.com/p/crfpp/
5 http://www.quest.dcs.shef.ac.uk/quest_files/features_blackbox_baseline_17
6 http://www.quest.dcs.shef.ac.uk/wmt13_qe.html
7 http://statmt.org/wmt14/training-parallel-nc-v9.tgz
8 http://statmt.org/wmt14/quality-estimation-task.html

Acknowledgements

This work was supported by the EXPERT (EU Marie Curie ITN No. 317471) project.

References

Banerjee, Satanjeev and Alon Lavie. 2005. METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments. In ACL-2005, MTSumm workshop, pages 65-72.
Bhanuprasad, Kamadev and Mats Svenson. 2008. Errgrams: A Way to Improving ASR for Highly Inflected Dravidian Languages. In IJCNLP-2008, pages 805-810.
Brockett, Chris, William B. Dolan, and Michael Gamon. 2006. Correcting ESL errors using phrasal SMT techniques. In Coling-ACL-2006.
Collins, Michael, Brian Roark, and Murat Saraclar. 2005. Discriminative Syntactic Language Modeling for Speech Recognition. In ACL-2005.
Dyer, Chris, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM Model 2. In NAACL-HLT-2013, pages 644-648.
Felice, Mariano and Zheng Yuan. 2014. Generating artificial errors for grammatical error correction. In EACL-2014, pages 116-126.
Gamon, Michael, Anthony Aue, and Martine Smets. 2005. Sentence-level MT evaluation without reference translations: beyond language modeling. In EAMT-2005.
Izumi, Emi, Kiyotaka Uchimoto, Toyomi Saiga, Thepchai Supnithi, and Hitoshi Isahara. 2003. Automatic error detection in the Japanese learners' English spoken data. In ACL-2003, pages 145-148.
Koehn, Philipp, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In ACL-2007, Demo session, pages 177-180.
Koehn, Philipp. 2005. Europarl: A Parallel Corpus for Statistical Machine Translation. In MT-Summit 2005, pages 79-86.
Li, Zhifei and Sanjeev Khudanpur. 2008. Large-scale Discriminative n-gram Language Models for Statistical Machine Translation. In AMTA-2008, pages 21-25.
Li, Zhifei, Ziyuan Wang, Sanjeev Khudanpur, and Jason Eisner. 2010. Unsupervised Discriminative Language Model Training for Machine Translation using Simulated Confusion Sets. In Coling-2010.
Luong, Ngoc Quang, Laurent Besacier, and Benjamin Lecouteux. 2014. LIG system for word level QE task at WMT14. In WMT-2014, pages 335-341.
Okanohara, Daisuke. 2007. A Discriminative Language Model with Pseudo-Negative Samples. In ACL-2007, pages 73-80.
Papineni, Kishore, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In ACL-2002, pages 311-318.
Raybaud, Sylvain, David Langlois, and Kamel Smaïli. 2011. This sentence is wrong. Detecting errors in machine-translated sentences. Machine Translation, 25(1):1-34.
Rozovskaya, Alla and Dan Roth. 2010. Generating confusion sets for context-sensitive error correction. In EMNLP-2010, pages 961-970.
Schmid, Helmut. 1994. Probabilistic part-of-speech tagging using decision trees. In International Conference on New Methods in Language Processing, pages 44-49.
Snover, Matthew, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A Study of Translation Edit Rate with Targeted Human Annotation. In AMTA-2006, pages 223-231.
Snover, Matthew, Nitin Madnani, Bonnie Dorr, and Richard Schwartz. 2008. TERp System Description. In AMTA-2008, MetricsMATR workshop.
Snover, Matthew, Nitin Madnani, Bonnie J. Dorr, and Richard Schwartz. 2009. Fluency, adequacy, or HTER?: Exploring different human judgments with a tunable MT metric. In WMT-2009, pages 259-268.
Specia, Lucia, Kashif Shah, Jose G. C. de Souza, and Trevor Cohn. 2013. QuEst - A translation quality estimation framework. In ACL-2013, Demo session.
17,754,705
Event and Event Actor Alignment in Phrase Based Statistical Machine Translation
This paper studies the impact of event and event actor alignment on an English-Bengali phrase-based Statistical Machine Translation (PB-SMT) system. Initially, events and event actors are identified from the English-Bengali parallel corpus. For event and event actor identification in English we propose a hybrid technique, carried out within the TimeML framework. Events in Bengali are identified based on the concept of complex predicate structures. There can be one-to-one and one-to-many mappings between English and Bengali events and event actors. We preprocess the parallel corpus by single-tokenizing the multiword events and event-actors, which yields a significant gain for the PB-SMT system. We present a hybrid alignment approach for events and event-actors in the English-Bengali training corpus, defining a rule-based aligner and a statistical hybrid aligner. The rule-based aligner uses the heuristic that the order of events and event actors on the source (English) side is preserved on the target (Bengali) side. The performance of the PB-SMT system can vary depending on the number of events and event-actors identified in the parallel training data. The proposed system achieves significant improvements (5.79 BLEU points absolute, 53.02% relative improvement) over the baseline system on an English-Bengali translation task.
[ 9736002, 541539, 15369413, 62182406, 14946981, 3838667, 8884845, 7862962, 2000790, 2777782 ]
Event and Event Actor Alignment in Phrase Based Statistical Machine Translation
Anup Kumar Kolya, Santanu Pal, Asif Ekbal, Sivaji Bandyopadhyay
Dept. of Computer Science & Engineering, Jadavpur University, Kolkata-700 032, India; Dept. of Computer Science & Engineering, IIT Patna, Patna-800 013, India
International Joint Conference on Natural Language Processing, Nagoya, Japan, October 2013

Introduction

Event and event actor alignment play a crucial role in improving translation quality in a machine translation system. A translated sentence is not a satisfactory translation unless events and their actors are combined properly at the sentence level. Recently, event-related work has become popular in the machine translation field. Sentence-aligned parallel bilingual corpora are very useful for applying machine learning approaches to machine translation, but most of this work has focused on European language pairs and some Asian language pairs such as English-Japanese and English-Chinese. In this work, we have added event and event-actor alignments as additional parallel examples to the English-Bengali parallel corpus. The entire task is divided into three steps: first, we identify events and event actors on both sides of the parallel corpus; second, we align the events and event actors using a rule-based and a statistical alignment method; and finally, the identified multiword events and event actors are single-tokenized on both sides, and the prior alignments of events and event actors are applied to the English-Bengali PB-SMT system for further improvement.
For the identification of events on the English side, we follow the guidelines of TimeML (Pustejovsky et al., 2003a). TimeML defines events as situations that happen or occur, or elements describing states or circumstances in which something obtains or holds true. These events are generally expressed by tensed or un-tensed verbs, nominalizations, adjectives, predicative clauses or prepositional phrases. Almost all events in a sentence involve an event actor, either active or passive. Event actor identification in English is facilitated by freely available resources and tools such as the Stanford Parser and VerbNet (Kipper-Schuler et al., 2005). Detailed research work related to English event and event actor identification can be found in (Kolya et al., 2010). We define Complex Predicates (CPs) as events (Das et al., 2010) in Bengali. CPs in Bengali consist of both compound verbs and conjunct verbs; they contain [verb] + verb (compound verb) or [noun/adjective/adverb] + verb (conjunct verb) combinations, as in other South Asian languages (Hook, 1974). In the next step, we identify the event actors of these events in Bengali. We consider the same guidelines for event actor identification in Bengali as those proposed for event actor identification in English. For Bengali event actor identification, we use two available lexical engines, namely a Named Entity Recognizer (NER) (Ekbal and Bandyopadhyay, 2009) and a shallow parser 1. The accuracy of the Bengali NE recognizer is poorer compared to English NER because (i) there is no concept of capitalization in Bengali and (ii) some Bengali common nouns are also often used as named entities. The Bengali shallow parser faces similar difficulties. Overall, Bengali is a morphologically rich language with very limited resources of this kind. The major challenge is to develop an event alignment system between a resource-rich language like English and a resource-poor language like Bengali. The proposed system relies on the design of rules and on the availability of large amounts of annotated data; however, building large amounts of annotated data is a time-consuming, labour-intensive and expensive task. The main motivation of this work is the scarcity of prior work related to event alignment. To the best of our knowledge, this is the first time that an event alignment approach has been applied to the English-Bengali language pair. Given a set of parallel sentences, we identify events and event actors on both sides. The events and event actors in both sides of the parallel corpus are assigned appropriate tags (event: e and event actor: ea). Thereafter we align the English events and event actors with the Bengali events and event actors. The alignment is carried out after single-tokenizing the multiword events and event-actors on both sides of the parallel corpus. The alignment of events and event actors in the parallel English-Bengali sentences is then carried out based on two approaches: (i) a rule-based approach and (ii) a hybrid statistical approach. The rule-based approach fails to align causal sentences that include cause-effect constructs, since the positions of the cause and the effect clauses may change in the target sentence. Such parallel sentences are event-aligned using the hybrid statistical approach. We attempt to achieve good accuracies for event identification and event actor identification for both languages, which is reflected in the improvement of the English-Bengali PB-SMT system performance.

1 http://ltrc.iiit.ac.in/showfile.php?filename=downloads/shallow_parser.php
The hybrid approach also validates the correctness of the alignments produced by the rule-based system. The remainder of the paper is organized as follows. The next section briefly reviews related work. The proposed system is described in Section 3. Section 4 states the tools and resources used for the various experiments. Section 5 includes the results obtained, together with some analysis. Section 6 concludes and provides avenues for further work.

Related Works

Work on alignment has mostly been developed for the machine translation task. Some work on sentence alignment can be found in (Brown, 1991) and (Gale and Church, 1993). (Chen, 1993) developed a method which was slower but more accurate than the sentence-length based Brown and Gale algorithm. (Wu, 1994) used an approach adapted from Gale and Church's method for Chinese, with a small corpus-specific bilingual lexicon to improve alignment accuracy in texts containing multiple sentences of similar length. Melamed (1996, 1997) also proposed a method based on word correspondences. (Plamondon, 1998) developed a two-pass approach, in which a method similar to the one proposed by Melamed identifies points of correspondence in the text that constrain a second-pass search based on the statistical translation model. (Moore, 2002) developed a hybrid sentence-alignment method using sentence-length based and word-correspondence based models. This model is fast, very accurate, and requires that the corpus be separated into words and sentences. In the hybrid model, the sentence pairs that are assigned the highest probability of alignment are used to train a modified version of IBM Translation Model 1 (Brown, 1993). (Fung, 1994) presented K-vec, an alternative alignment strategy that starts by estimating the lexicon. Moore (2003) used capitalization cues for identifying NEs on the English side and then applied statistical techniques to decide which portion of the target language corresponds to the specified English NE. A Maximum Entropy model based approach for English-Chinese NE alignment, which significantly outperforms IBM Model 4 and HMM, has been proposed in Feng et al. (2004). A method for automatically extracting NE translingual equivalences between Chinese and English based on multi-feature cost minimization has been proposed in Huang et al. (2003).

System Description

In our system, we first identify events and event actors from the English-Bengali parallel corpus. We then establish a rule-based event and event-actor alignment model and a statistical hybrid alignment model for the experimental setup.

English Event Identification

Our approach to event identification is hybrid: it combines Support Vector Machines (SVM) 3, semantic role labeling (SRL) (Gildea and Jurafsky, 2002), WordNet and several heuristics.

Hybrid event identification system

Some lexical rules have been used to identify the de-verbal event words more accurately, in addition to the SVM-, SRL- and WordNet-based approaches. Rules are extracted on the basis of a detailed analysis of the suffixes and morphological markers of de-verbal derivations like 'expedition' and 'accommodation' in the source side of the corpus. Initially, the Stanford Named Entity (NE) tagger 4 is run on the English side of the training corpus. The output of the system is tagged with Person, Location, Organization and Other classes.
The following cue sets or rules are applied for event extraction:
Cue 1: Morphologically de-verbal nouns are usually identified by suffixes like '-tion', '-ion', '-ing' and '-ed'. Non-NE nouns that end with these suffixes are considered as event words.
Cue 2: After searching for verb-noun combinations in the test set, the non-NE noun words are considered as events.
Cue 3: Non-NE nouns occurring after (i) the complements of aspectual PPs headed by prepositions, (ii) any time-related verbs and (iii) certain expressions are considered as events.
The event extraction system achieves precision, recall and F-measure values of 93.00%, 96.00% and 94.47%, respectively, on the TempEval-2 corpus.

3 http://chasen.org/~taku/software/yamcha
4 http://nlp.stanford.edu/software/CRF-NER.shtml

Event-Actor identification

It has been observed from detailed text analysis that almost all events are associated with some actor ("anything having existence, living or non-living"), either active or passive. More generally, event actions are associated with persons or organizations and sometimes with locations. This section shows how event actors are identified for the events.

Subject Based Baseline Model

The input English sentences with event constructs are passed through the Stanford Parser to extract the dependency relationships from the parsed data. The output is checked to identify the "nsubj" and "xsubj" predicates, and the subject-related information in the "nsubj" and "xsubj" predicates is considered as the probable candidate event actors. Other dependency relations are filtered out.

Syntax Based Model

The syntax of a sentence, in terms of the argument structure or sub-categorization information of the associated verb, plays an important role in identifying the event actors of the events in a sentence.
(a) Syntax Acquisition from VerbNet: Using VerbNet (Kipper-Schuler et al., 2005), a separate rule-based argument structure acquisition system is developed in the present task for identifying the event actors. The acquired argument structures are compared against the extracted VerbNet frame syntaxes. If the acquired argument structure matches any of the extracted frame syntaxes, the event actor corresponding to each event verb is tagged with the actor information in the appropriate slot in the sentence.
(b) Argument Structure Acquisition Framework: To acquire the argument structure, Stanford Parser parsed event sentences are passed through a rule-based phrasal-head extraction system to identify the head part of the phrase-level (well-structured and bracketed) argument structure of the sentences corresponding to the event verbs.

SRL for Event Actor Identification

Semantic Role Labeling (SRL) plays an important role in extracting the target-argument relationship from semantic role labeled sentences. Here, the argument is considered as an event actor and the target is identified as the corresponding event. Let us consider the following example:
[ARG1 A military coup] [TARGET followed], during which [ARG1 Allende] [TARGET committed] suicide rather than surrender to his attackers.
In the first trace, [A military coup] is identified as the event actor <eActor> of the corresponding event word [followed]. In the second trace, [Allende] is the event actor <eActor> of the corresponding event [committed]. So, using the SRL technique, the event and the corresponding event actor are found. The original F-scores of the event actor identification systems for the subject-based and syntax-based models are 65.98% and 70%, respectively. Adding the SRL technique for event actor identification, the F-score of the system further improves to 73%.
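A minimal sketch of Cue 1 follows, assuming the input tokens already carry POS and NE labels (e.g. from the Stanford taggers); the token format is invented for illustration.

DEVERBAL_SUFFIXES = ("tion", "ion", "ing", "ed")

def cue1_events(tokens):
    """tokens: list of (word, pos, ne) triples; returns candidate event words."""
    events = []
    for word, pos, ne in tokens:
        # non-NE nouns ending in a de-verbal suffix are event candidates
        if pos.startswith("NN") and ne == "O" and word.lower().endswith(DEVERBAL_SUFFIXES):
            events.append(word)
    return events

sent = [("The", "DT", "O"), ("expedition", "NN", "O"),
        ("of", "IN", "O"), ("Scott", "NNP", "PERSON"),
        ("began", "VBD", "O")]
print(cue1_events(sent))  # ['expedition']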
Bengali Event Extraction

The sentences are passed through an open source Bengali shallow parser 1. The shallow parser gives different morphological information (root, lexical category of the root, gender, number, person, case, etc.) that helps in identifying the lexical patterns of Complex Predicates (CPs). The Bengali sentences were POS-tagged using the available shallow parser, and we extracted the lexical complex predicate patterns {verb(v)+verb(v), noun(n)+verb(v) and adjective(adj)+verb(v)}. The (v+v) pattern is considered as a compound verb, and the (n+v) and (adj+v) patterns are considered as conjunct verbs (ConjVs). These compound and conjunct verb patterns are used as the possible candidates for event expressions.

Identification of Complex Predicates (CPs)

On the Bengali side, complex predicates generally follow patterns such as conjunct verbs (e.g., [bharsha kara] 'to depend'): noun/adjective/adverb + verb pattern, or compound verbs (e.g., [mere phela] 'to kill'): verb + verb pattern (Paul, 2010). Morphological knowledge is required to identify such CPs in Bengali. Compound verbs consist of two verbs: a full verb followed by a light verb. The full verb is represented either in the conjunctive participial form [-e] or the infinitive form [-te] at the surface level. The light verb bears the inflection based on the tense, aspect and person information of the subject. On the other hand, each Bengali conjunct verb consists of an adjective, adverb or noun followed by a light verb. These light verbs are semantically lightened, polysemous and limited to a definite set of candidate seeds. Other types of predicates in Bengali follow the same lexical pattern as the compound verb, but the full verb and the light verb behave as independent syntactic entities (e.g., niye gelo 'take-go'). Such complex predicates are termed Serial Verbs (SVs). Das et al. (2010) analyzed and identified the categories of compound verbs (Verb + Verb) and conjunct verbs (Noun/Adjective/Adverb + Verb) for Bengali. We adapted their strategy for the identification of compound verbs as well as serial verbs (Verb + Verb + Verb) in Bengali.

Bengali Event Actor Identification

In Bengali, as in English, events are associated with either active or passive event actors, and event actions are associated with persons or organizations and sometimes with locations. Initially, sentences that do not contain any event words are discarded. A Bengali Named Entity Recognizer (NER) and the Bengali shallow parser are employed to detect the event actors in the sentences. The baseline system for identifying event actors is based on the person, organization and location information recognized by the Bengali NER; the Bengali shallow parser is then used to improve the performance of event actor identification. The following section shows in detail how event actors are identified for the events in Bengali by applying these two techniques.

Named Entity based Approach

Here, Bengali named entities are identified from the parallel corpus. After the identification of Bengali NEs and Bengali events in the sentences, the following heuristic rules are applied for event actor identification (see the sketch after this list):
(i) If a sentence has only one NE and one or more events, this single NE is selected as the event actor for all events.
(ii) If a sentence has multiple NEs and only one event, all the NEs are selected as event actors for the single event.
(iii) If there are multiple NEs and multiple events in a sentence, <event, actor> pairs are formed by considering an event and its closest possible NE as the event actor in the sentence.
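A minimal sketch of these three heuristics, assuming events and named entities are given by their token positions; rule (iii) pairs each event with its closest NE by token distance.

def pair_event_actors(event_positions, ne_positions):
    """Pair each event with an actor NE following rules (i)-(iii) above."""
    if not event_positions or not ne_positions:
        return []
    if len(ne_positions) == 1:                      # rule (i): one NE, all events
        return [(e, ne_positions[0]) for e in event_positions]
    if len(event_positions) == 1:                   # rule (ii): one event, all NEs
        return [(event_positions[0], n) for n in ne_positions]
    # rule (iii): each event takes its closest NE (by token distance) as actor
    return [(e, min(ne_positions, key=lambda n: abs(n - e)))
            for e in event_positions]

print(pair_event_actors([5, 11], [2, 9]))  # [(5, 2), (11, 9)]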
Rule based event and event-actor Alignment Model

The rule-based alignment model aligns the identified events and event-actors between the English and Bengali parallel sentences. It is observed that event-actors associated with events appear as contiguous sequences of words in a sentence. For example, "travelers" is an event actor of the event word "discover" on the English side, and it is aligned with [vramonkari], the event actor of the corresponding event word [abiskar korbe] on the Bengali side. "Discover" is an event group whose syntactic structure, together with the event actor "travelers", can be determined deterministically given the phrase (NP, VP) and POS tag information.
Ex-(a) ...adventurous/JJ travelers/NNS will/MD discover/VB an/DT ethereal ……
Ex-(b) … < > …..
During event and event actor alignment, the following issues are observed between English and Bengali:
(i) Both one-to-one and one-to-many alignments between word forms occur.
(ii) On both the English and the Bengali side, event actors are identified by noun (NN), proper noun (NNP) and pronoun (PRN) words from the noun phrase. The alignment is then performed on both sides.
(iii) In event alignment, English-side event words are generally verbs (VB) and nouns (NN), while the internal structure of Bengali event words is a combination of compound verbs (VM-VAUX) and conjunct words (NN-VAUX, ADJ-VAUX).
(iv) In event alignment, English event words are generally aligned to a group of Bengali event words. Light verbs are added to the main verb, which increases the number of words in Bengali with respect to the English event word in the sentence. Similarly, for English event words, the auxiliary verb is considered as a part of the event.
The following alignment from Example (a) above illustrates this. In the above parallel sentence, the event actor "traveler" on the English side is aligned with [vramonkari] on the Bengali side. The corresponding events associated with the event actor are "discover" and "lingers" on the English side, which are aligned with [abiskar korbe] and [mone rakhar], respectively, on the Bengali side. In order to get the correct alignment, the identification of event actors and the order of events should be correct. Thus the following parallel phrase translation entries are generated:
Traveler ↔ [vramonkari]
will discover ↔ [abiskar korbe]
Lingers ↔ [mone rakhar]
(v) It has been observed that the relative order of event actors and events is the same in English and Bengali in most cases. Correct identification of event words on the Bengali side corresponding to the English side plays an important role in event word alignment. In the example above, alignment is easy, but in some cases the word alignment complexity increases when the order of the events and the event actors does not follow the same sequence in the English and the Bengali parallel sentences. The complexity is further increased by the non-availability of a large bilingual corpus and the presence of inflectional variations in Bengali, so it is sometimes difficult to correctly align event words to the target words. Once these alignments are obtained, we validate them with the statistical hybrid alignment model.
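Under the order-preserving heuristic of the rule-based aligner, the alignment step itself reduces to pairing tagged spans in sequence. The following sketch does exactly that, leaving sentence pairs with mismatched counts (e.g. reordered cause-effect constructs) to the hybrid aligner; the span values use the underscore-joined single-tokenized form described in the next section.

def monotonic_align(src_spans, tgt_spans):
    """src_spans/tgt_spans: ordered lists of tagged spans of one kind (all events,
    or all event actors). Returns span pairs, or None when the counts differ and
    the order-preserving heuristic cannot be applied."""
    if len(src_spans) != len(tgt_spans):
        return None  # defer to the hybrid aligner
    return list(zip(src_spans, tgt_spans))

src_events = ["will_discover", "lingers"]
tgt_events = ["abiskar_korbe", "mone_rakhar"]
print(monotonic_align(src_events, tgt_events))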
Hybrid based Alignment Model

Initially, an English-Bengali phrase-based statistical translation model is developed, trained on the same EILMT tourism domain corpus of 22,492 sentences. The rule-based event and event-actor alignments are validated by translating both the event and the event actor. From the rule-based output we have a link between the event and the event actor on both sides, and the alignment details are also available. From this point of view, if we know the translation of either the event or the event actor, we can align it with the target event and event-actor relation. Using this heuristic, we translate the event or the event actor and match it against the target Bengali event or event actor provided by the rule-based system described in Section 4. A string-level edit distance metric is used to validate the bilingual event-actor relations. After the alignment of event and actor words on the English side, we collect the token position numbers of the event words, marked with the event tag, from the sentence. We follow the Timex3 guidelines for event word identification, so English-side event words are mainly single-word tokens; the position of the single token is recorded together with the event tag <e>. For the identification of event actors on the Bengali side, we follow the guidelines of English event actor <ea> identification already defined in rule (ii) in Section 4. On the English side, after identifying an event word in a sentence, we add the dependent auxiliary verb to it, as defined in rule (iv). After identification, we pre-process the single-tokenized corpus by replacing spaces with underscores ('_'). We use an underscore ('_') instead of a hyphen ('-') because some hyphenated words already exist in the corpus. The underscore character also facilitates de-tokenizing the single-tokenized events or event-actors at decoding time. We collect the token position numbers of the event word(s) and actor(s) from both sides of the parallel sentence, and finally obtain a sentence-level source-target event and event-actor alignment:
Amidst[0] such[1] solitude[2], adventurous[3] <ea> travelers[4] </ea> will[5] <e> discover[6] </e> an[7] ethereal[8] landscape[9] that[10] <e> lingers[11] </e> in[12] the[13] memory[14].
After considering the dependent auxiliary verb, this yields, for example, the positional alignments: 4-1 5-2 11-6.
We also generate source-target event and event-actor alignment level parallel examples, which are added as additional parallel data to the training set. We then retrain the PB-SMT system using the Moses toolkit (Koehn et al., 2003). The sentence-level positional alignment information helps us update and correct the alignment table generated during the training phase using the grow-diag-final-and algorithm. The rest of the process follows the state-of-the-art system. This approach also helps us align the event and event-actor relations which cannot be aligned by the rule-based system: we translate the identified source events or event-actors, and the translated events or event-actors are matched with the corresponding target-side events and event-actors using the string-level edit-distance method.

Tools and Resources

A sentence-aligned English-Bengali parallel corpus containing 23,492 parallel sentences from the travel and tourism domain has been used in the present work. The corpus has been collected from the consortium-mode project "Development of English to Indian Languages Machine Translation (EILMT) System" 4. The Stanford Parser 5, Stanford NER, CRF chunker 6 and WordNet 3.0 7 have been used for identifying the events and the event-actors on the source English side of the parallel corpus. The sentences on the target side (Bengali) are POS-tagged using the tools obtained from the consortium-mode project "Development of Indian Language to Indian Language Machine Translation (IL-ILMT) System" 8. The effectiveness of the present work is demonstrated using the standard log-linear PB-SMT model as our baseline system. The GIZA++ implementation of IBM word alignment model 4, the phrase-extraction heuristics described in (Koehn et al., 2003), minimum-error-rate training (Och, 2003) on a held-out development set, a target language model trained using the SRILM toolkit (Stolcke, 2002) with Kneser-Ney smoothing (Kneser and Ney, 1995) and the Moses decoder (Koehn et al., 2007) have been used in the present study.
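A minimal sketch of the edit-distance validation: a candidate target event or event-actor proposed by the rule-based aligner is accepted when the SMT translation of the source span is close enough to it. The threshold and the translate() stand-in are illustrative assumptions, not the authors' exact settings.

def edit_distance(a, b):
    """Standard Levenshtein distance by dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def validate(src_span, tgt_candidate, translate, max_ratio=0.4):
    hyp = translate(src_span)  # placeholder for the PB-SMT translation
    d = edit_distance(hyp, tgt_candidate)
    return d / max(len(hyp), len(tgt_candidate), 1) <= max_ratio

fake_translate = lambda s: {"will discover": "abiskar korbe"}.get(s, s)
print(validate("will discover", "abiskar korbe", fake_translate))  # True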
Experiments and Evaluations

We randomly selected 500 sentences each for the development set and the test set from the initial parallel corpus; the rest is used as the training corpus. The training corpus was filtered with a maximum allowable sentence length of 100 words and a sentence length ratio of 1:2 (either way). The final training corpus contained 22,492 sentences. In addition to the target side of the parallel corpus, a monolingual Bengali corpus containing 488,026 words from the tourism domain was used for the target language model. We experimented with different n-gram settings for the language model and the maximum phrase length, and found that a 4-gram language model and a maximum phrase length of 7 produce the optimum baseline result. The baseline model (Experiment 1) scored 10.92 BLEU points, as reported in Table 3. We carried out the rest of the experiments using these settings. We first identified event-actor relations on both sides of the parallel corpus with an automatic event-actor identifier. The system achieves recall, precision and F-score values of 82.06%, 72.32% and 75.73%, respectively, for Bengali event identification on the training corpus. For the Bengali event actor evaluation, we randomly selected 500 sentences from the Bengali corpus for testing, each containing at most around 100 words, and manually annotated them with event actor tags as the reference data. The evaluation results for Bengali event-actor identification in the training corpus are shown in Table 1. Table 2 shows the statistics of events and event actors in the English and Bengali corpus. In the training corpus, 44.5% and 47.8% of the event actors are single-word event actors in English and Bengali respectively, which suggests that prior alignment of the single-word event actors, in addition to multiword event-actor alignment, should also be beneficial to word and phrase alignment. Our experiments have been carried out in three directions: (i) we single-tokenized the identified events and event-actors on both sides of the parallel corpus; (ii) we added the single-tokenized event and event-actor alignments as additional parallel data to the training corpus; and (iii) we updated the word alignment table using the hybrid word alignment technique. Table 3 shows the successive evaluation of the different experimental settings of the PB-SMT system. Experiment 1 reports the baseline model score of the PB-SMT system. In Experiment 2, we preprocessed the parallel corpus by single-tokenizing the events and event actors, which gives a significant gain over the baseline system. The remaining experiments (3, 4, 5 and 6) were carried out with single tokenization of events and event actors along with their alignments. Experiments 3 and 4 show that adding the alignments of events and event actors to the parallel corpus also improves the MT system performance. In Experiment 5, both event and event actor alignments were combined as additional parallel data with the training corpus, producing a 5.51 BLEU point absolute (50.45% relative) improvement over the baseline system. In Experiment 6, where we updated the alignment table using the event and event-actor alignments, the performance increased further, with a 5.79 BLEU point absolute (53.02% relative) improvement over the baseline system.
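As a consistency check on the reported figures: the baseline of Experiment 1 scores 10.92 BLEU, and 5.79 / 10.92 = 0.5302, so the 5.79-point absolute gain of Experiment 6 indeed corresponds to the stated 53.02% relative improvement, for a final score of 10.92 + 5.79 = 16.71 BLEU.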
Conclusions and Future work

The present work shows how three approaches boost the performance of the overall system: (i) single tokenization of events and event-actors on both sides of the parallel corpus; (ii) adding the event and event-actor alignments as additional training data to the parallel corpus; and (iii) updating the word alignment table directly with the event-actor and event alignments. The method also reduces the data sparsity problem: single tokenization binds multiword events and event-actors into single units. On manual inspection we see that the translation output looks better than the baseline system output in terms of lexical choice and word ordering. In Experiments 5 and 6, our systems achieve improvements of 5.51 BLEU points absolute (50.45% relative) and 5.79 BLEU points absolute (53.02% relative) over the baseline system on an English-Bengali translation task. The event and event actor alignment performance is also reflected indirectly in the increased MT performance. The fact that only 28.5% of the test-set event-actors appear in the training set, yet prior automatic alignment of the events and event actors brings about so much improvement in terms of MT quality, suggests that it not only improves the event and event actor alignment quality in the phrase table, but that word alignment and phrase alignment quality must also have improved significantly. Our future work will focus on post-editing the MT output using the event and event-actor relations. As events and event-actors play an important role in terms of discourse, we can reorder the output target sentences according to the occurrences of events on the source side. We will also focus on upgrading our system for paragraph translation. In the future, we can add the temporal expression and the location of an event, together with the event-actor, as attributes; these attributes can further improve machine translation performance.
Table 2 : 2Event and Event-actor Statistics (T -Total occurrence, U -Unique) Table 3 : 3Evaluation results (The ‗ †' marked systems produce best score) Tools and ResourcesA sentence-aligned English-Bengali parallel corpus containing 23,492 parallel sentences from the travel and tourism domain has been used in the present work. The corpus has been collected from the consortium-mode project -Development of English to Indian Languages Machine Translation (EILMT) System 4 ‖. The Stanford Parser 5 ,4 The EILMT project is funded by the Department of Electronics and Information Technology (DEITY), Ministry of http://nlp.stanford.edu/software/lex-parser.shtml 6 http://crfchunker.sourceforge.net/ 7 http://wordnet.princeton.edu/ 8 The IL-ILMT project is funded by the Department of Electronics and Information Technology (DEITY), Ministry of Communications and Information Technology (MCIT), Government of India. AcknowledgementThe work has been carried out with support from the project -Development of English to Indian Languages Machine Translation (EILMT) System -Phase II‖ funded by Department of Information Technology, Government of India. The Mathematics of Statistical Machine Translation: Parameter Estimation. P F Brown, S A Della Pietra, V J Della Pietra, R L Mercer, Computational Linguistics. 192Brown, P.F., Della Pietra, S. A., Della Pietra, V. J., Mercer, R.L.(1993). The Mathematics of Statistical Machine Translation: Parameter Estimation. Com- putational Linguistics 19(2) 263-311. Aligning Sentences in Parallel Corpora. P F Brown, J C Lai, R L Mercer, Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics. the 29th Annual Meeting of the Association for Computational LinguisticsBerkeley, CaliforniaBrown, P.F., Lai, J.C. and Mercer, R.L. (1991). Aligning Sentences in Parallel Corpora. In Pro- ceedings of the 29th Annual Meeting of the Asso- ciation for Computational Linguistics,Berkeley, California 169-176. Aligning Sentences in Bilingual Corpora Using Lexical Information. S F Chen, Proceedings of the 31st Annual Meeting of the ACL, Columbus. the 31st Annual Meeting of the ACL, ColumbusOhioChen, S.F.: 1993. Aligning Sentences in Bilingual Corpora Using Lexical Information. In Proceed- ings of the 31st Annual Meeting of the ACL, Co- lumbus, Ohio (1993) 9-16. Automatic Extraction of Complex Predicates in Bengali. D Das, S Pal, T Mondal, T Chakroborty, S Bandyopadhyay, MWE 2010 Workshop. Beijing, ChinaDas,D., Pal,S. Mondal,T. Chakroborty,T. and Bandy- opadhyay,S.:Automatic Extraction of Complex Predicates in Bengali . MWE 2010 Workshop, Coling 2010, Beijing, China. Voted NER system using appropriate unlabeled data. A Ekbal, S Bandyopadhyay, proceedings of the ACL-IJCNLP-2009 Named Entities Workshop. the ACL-IJCNLP-2009 Named Entities WorkshopSuntec, SingaporeEkbal, A. and Bandyopadhyay,S.(2009)."Voted NER system using appropriate unlabeled data". In pro- ceedings of the ACL-IJCNLP-2009 Named Enti- ties Workshop (NEWS 2009), Suntec, Singapore, pp. 202-210. A new approach for English-Chinese named entity alignment. Donghui Feng, Yajuan Lv, Ming Zhou, Proc. of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP-2004). of the 2004 Conference on Empirical Methods in Natural Language essing (EMNLP-2004)Barcelona, SpainFeng, Donghui, Yajuan Lv, and Ming Zhou. 2004. A new approach for English-Chinese named entity alignment. In Proc. of the 2004 Conference on Empirical Methods in Natural Language Pro- cessing (EMNLP-2004), Barcelona, Spain, pp. 
5,193,589
Pseudo-Passives as Adjectival Passives
The pseudo-passive is peculiar in that (i) the DP that appears to be the complement of a preposition undergoes passivization, and (ii) it is semantically characterized by the fact that it describes a resultant state or a characteristic of the Theme. The first peculiarity can be explained if the DP is not the complement of P but the complement of the V-P complex. However, the problem with this approach is that V and P cannot form a constituent in the corresponding active. In this paper, however, I propose that we can maintain the V-P complex approach if it is an adjectival passive. The adjectival passive describes a characteristic of the Theme, and it does not necessarily correspond to its active counterpart with regard to the internal argument structure. This suggests that the peculiarities of the pseudo-passive follow if it is an adjectival passive. This paper claims that it is indeed the case. In short, I claim that the passive morpheme in the pseudo-passive is the adjectival passive -en, which is empirically supported by the fact that they display the properties of adjectival passives.
[]
Pseudo-Passives as Adjectival Passives

Kwang-Sup Kim, kwangsup@hufs.ac.kr, Hankuk University of Foreign Studies, English Department, 81 Oedae-lo, Cheoin-Gu, Yongin-City 449-791, Republic of Korea

The pseudo-passive is peculiar in that (i) the DP that appears to be the complement of a preposition undergoes passivization, and (ii) it is semantically characterized by the fact that it describes a resultant state or a characteristic of the Theme. The first peculiarity can be explained if the DP is not the complement of P but the complement of the V-P complex. However, the problem with this approach is that V and P cannot form a constituent in the corresponding active. In this paper, however, I propose that we can maintain the V-P complex approach if it is an adjectival passive. The adjectival passive describes a characteristic of the Theme, and it does not necessarily correspond to its active counterpart with regard to the internal argument structure. This suggests that the peculiarities of the pseudo-passive follow if it is an adjectival passive. This paper claims that it is indeed the case. In short, I claim that the passive morpheme in the pseudo-passive is the adjectival passive -en, which is empirically supported by the fact that they display the properties of adjectival passives.

1 Introduction

It is well-known that once an argument is assigned Case, it cannot undergo further A-movement. However, pseudo-passives are quite peculiar in that the DP that appears to be the complement of a preposition moves to a Case position.

(1) a. The hat was sat upon. b. These carpets have never been walked on.

A plausible approach to this peculiarity is to argue that in (1a) sit upon is a constituent, and the hat is the complement of sit upon, not of upon (Radford 1988, Drummond & Kush 2011).

(2) the hat was [[sat upon] the hat]
                |___________________|

If this approach is correct, it is predicted that sit upon must be a constituent in the active as well as in the passive. However, there is compelling evidence that it cannot be a constituent in the active (Postal 1986, Koster 1987, and Baltin and Postal 1996). For instance, objects can normally be conjoined, as illustrated in (3a-b), but in the active counterpart of (1a) the hat cannot be conjoined, as shown in (4a-b).

(3) a. John bought a chair. b. John bought not a chair but a hat.
(4) a. John sat upon the chair. b. *John sat upon not the chair but the hat. 1

This suggests that the hat is not the complement of sat upon in (4a). If we assume that (1a) is analyzed as (2), we can explain why the hat can undergo passivization, but sat upon cannot be a constituent in (4a). This puts us in a dilemma, since it is usually known that there is parallelism between the verbal passive and its active counterpart. This paper explores the possibility of resolving this dilemma by proposing that the pseudo-passive is an adjectival passive.

2 Problems with the Reanalysis Approach

There are many idiomatic expressions that contain a preposition and permit passivization. The idiom take advantage of is a case in point. If we assume that the idiom is simply a word, we can explain why passivization is permitted although the object appears to be the complement of the preposition of. This section examines whether we can extend this approach to the pseudo-passive, and then points out some potential problems.

2.1 Two Possible Ways of Generating Idioms

Sentence (6) has two corresponding passive constructions, as shown in (7a-b).
(6) John took advantage of Mary's honesty.
(7) a. Mary's honesty was taken advantage of. b. Advantage was taken of Mary's honesty.

This puzzle can be resolved if we assume that there are two ways of deriving the idiom take advantage of. Let us first assume that take advantage of is a word, not a phrase.

(8) [V [V [V take] advantage] of] 2

If so, it is quite straightforward why Mary's honesty can be preposed in (7a). If take advantage of is a constituent, the preposition of cannot assign Case to Mary's honesty, and furthermore, nor can the passive morpheme -en assign Case to it. That is, in (9a) Mary's honesty occurs in a Caseless position, and it needs to move to a position where it can be assigned Case. As shown in (9b-c), the SPEC-T position is available, and so it moves to the SPEC-T.

Let us now assume that take advantage is a constituent, and the preposition of is not part of the idiom. In this case advantage is in a non-Case position when the VP is merged with the passive morpheme -en. On the other hand, Mary's honesty is in a Case position since it is the complement of the preposition of. Hence advantage moves to the SPEC-T position, as shown in (10). We have seen that the idiom take advantage of permits either the direct object or the prepositional object to passivize, depending on whether or not the preposition of is part of the idiom.

There are two other types of idioms. For instance, cast doubt on allows only the object DP to passivize, and lose sight of allows only the prepositional object to passivize.

(11) a. Doubt was cast on his motives. b. *His motives were cast doubt on.
(12) a. *Sight was lost of our goal. b. Our goal was lost sight of.

This suggests that cast doubt on is a phrasal idiom, whereas lose sight of is a lexical idiom. In other words, cast doubt is a constituent, but cast doubt on is not, and lose sight of is a constituent, but lose sight is not. To recapitulate, the prepositional passive is permitted when the preposition is a part of a word-level idiom.

2.2 Extension to the Pseudo-Passive

With the above discussion in mind, let us attempt to account for the passives in (14a-b) while assuming that sleep in and walk on are constituents. 3

(14) a. This bed was slept in by Napoleon. b. These carpets have never been walked on.

The most serious problem with this approach is that sleep in and walk on do not form constituents in actives (Postal 1986, Koster 1987, and Baltin and Postal 1996). For instance, an adverb can intervene between V and P in the active, whereas it cannot in the pseudo-passive.

(15) a. The lawyer will go thoroughly over the contract. b. *The contract will be gone thoroughly over by the lawyer. b'. The contract will be thoroughly gone over by the lawyer.
(16) a. They spoke angrily to John. b. *John was spoken angrily to. c. John was spoken to. (Chomsky 1981: 123)

There are many other data that show the same point. Gapping requires a verb to be elided, as shown in (17a-b).

(17) a. Frank called Sandra and Arthur _______ Louise. b. Sandra was called by Frank and Louise by Arthur.

Interestingly, talk to cannot be a gap in the active, but it must be a gap in the pseudo-passive.

(18) a. Frank talked to Sandra and Arthur _______ *(to) Louise. b. Sandra was talked to by Frank and Louise (*to) by Arthur.
While discussing the passivization of idioms, we have assumed that if an idiom is phrasal in the active, it is also phrasal in the passive, and if it is lexical in the active, it is also lexical in the passive. In the case of pseudo-passives, however, there is no parallelism between the active and the pseudo-passive with regard to constituency. This is quite puzzling under the proposal that V and P form a constituent in the pseudo-passive. The next section is devoted to resolving this puzzle. 4

3 Pseudo-Passive as Adjectival Passive

It is well-known that there are two types of passives: the verbal passive and the adjectival passive. I propose that the peculiarities of the pseudo-passive can be explained if the pseudo-passive is an adjectival passive.

3.1 Contrast in Argument Structure between Verbal Passive and Adjectival Passive

There are two types of passive en: the verbal passive en and the adjectival passive en. 5

(19) a. Mary was given the book. b. The rules are ungiven.

What is peculiar about the adjectival passive ungiven is that the verb give can have two theta-roles, Theme and Goal, but the adjective ungiven can assign just one theta-role.

(20) *Mary was ungiven the rules.

This follows if we assume that the adjectival passive morpheme en assigns a Character role, which means 'has the property x', where x is the property expressed by the adjective. Theta-roles percolate when they cannot be assigned. 6 For instance, the theta-role of happy can percolate when happy is merged with un. However, theta-roles cannot percolate across another theta-role, due to the intervention effect. For instance, in (22c) the Theme role is not allowed to cross the Character role. 7 Instead, it is identified as the Character role: it undergoes theta-identification with Character in the sense of Higginbotham (1985). This is how a new predicate is formed in the syntax. Notice that just one theta-role can be identified as Character. Therefore, the newly-formed adjective given can assign just one theta-role. 8,9 The main point is that the adjectival -en can be involved in forming a new predicate via theta-identification, and in this case only one theta-role can be realized. 10

Before turning to the verbal passive, let us consider the nature of theta-role assignment and theta-role percolation. I propose that theta-role assignment must obey the Earliness Condition in (23).

(23) Earliness Condition: A theta-role must not be assigned late.

Let us assume that the Theme role of X percolates and is assigned to Z in (24). This is a violation of the Earliness Condition. It might appear that, given (23), there is no room for theta-role percolation, but this is not the case. What is wrong with the derivation in (24) is not the theta-percolation in (24a-b) but the late theta-role assignment in (24b-c). If X were merged with Z, the Theme role could be assigned earlier; hence the theta-role assignment in (24c) is in violation of the Earliness Condition. This means that once a theta-role percolates, it must not be assigned: it must be theta-identified with another theta-role. If the percolated theta-role is not assigned to an argument but identified with another theta-role, the Earliness Condition is not violated.

With the Earliness Condition in mind, let us consider the verbal passive. The verbal passive participle given can assign two theta-roles.

(25) Mary was given these books.

The verbal passive morpheme -en assigns a theta-role, but it is a theta-role for an adjunct.
So it cannot be involved in theta-identification. As illustrated in (26a), let us assume that the verb give is merged with the verbal passive morpheme, not with DPs. Then the theta-roles must be percolated. In accordance with the Earliness Condition in (23), the percolated theta-roles in (26b) must undergo theta-identification. However, there is no theta-role that can identify the percolated theta-roles. As a result, there is no way for the theta-roles of give to be discharged: that is, (26b) cannot produce a well-formed sentence. If, on the other hand, the verbal passive morpheme is merged with a VP whose theta-roles have been discharged, a well-formed phrase can be generated.

(27) [en [VP Mary give(Goal, Theme) these books]]

In (27) the two arguments of give can be syntactically realized. Now it is not surprising that the verbal passive is analogous to the active in terms of internal argument structure. The gist of the claim is that there is parallelism in internal argument structure between the active and the verbal passive, while there is no parallelism between the active and the adjectival passive. In what follows I argue that the asymmetry between the pseudo-passive and its active counterpart arises from the fact that the pseudo-passive is an adjectival passive.

3.2 Derivation of the Pseudo-Passive

The pseudo-passive obeys some semantic constraints that the verbal passive does not. It is subject to the affectedness condition: it describes a 'resultant' state of the subject.

(28) a. The hat was sat upon. b. *The tree was sat under. c. John sat upon the hat.
(29) a. This bed has been slept in. b. ??This bed has been slept beside. c. John slept in the bed.
(30) a. The street [covered with snow] has not been walked on. b. *The street has not been walked on. c. We have not walked on the street.

As will be discussed in 3.3, the affected Theme is closely related to characterization. Let us first consider the contrast between (28a) and (28b). If the Theme is affected by an event, it can be characterized by the event. In (28a), for instance, the sitting event can affect the shape of the hat, and consequently it can be a characteristic of the hat. On the other hand, in (28b) the sitting event cannot affect the tree, and so it cannot be a property of the tree. The same point is shown by (29a-b). If someone sleeps in a bed, the event assigns a new property to the bed in the sense that it is now a used one. By contrast, when someone sleeps beside a bed, the bed is not affected and so it is not assigned a new property. This point is corroborated by (30a-b). A walking event usually does not affect a street, and so cannot assign a new property to the street. However, a street covered with snow will be affected if someone walks on it, and hence it is assigned a new property as a result of the walking. On these grounds we can generalize that the pseudo-passive denotes a characteristic of the Theme. These considerations lead us to the conclusion that the morpheme en in the pseudo-passive assigns a Character role: that is, it is an adjectival passive morpheme.

With this in mind, let us attempt to derive (28a). If sit is merged with upon, the Theme role of upon cannot be assigned in situ, and so it undergoes percolation. If sit upon is merged with the Character role-assigning en, theta-identification takes place: the Theme role is identified as the Character role. As a result, the complex predicate [en(char) [V sit upon(theme)]](char = theme) is generated, as shown in the derivation in (31).
In this analysis this hat cannot be assigned Case from upon, since it is an argument of [en(char) [V sit upon(theme)]](char = theme), not an argument of upon. Therefore, it can undergo passivization. The immediate question raised by this analysis is why the verb sit must be merged with a PP, not with P, in the active. Let us suppose that it can be merged with the preposition upon. If so, the Theme role of upon percolates, and it must be identified as Agent when sit upon is merged with the Agent-assigning v, as in (32). However, the Theme and the Agent cannot refer to the same object: one cannot sit upon oneself. Therefore, sit must be merged with a PP like upon the hat. Generally speaking, a non-reflexive light verb does not permit theta-identification, since it requires its own theta-role to be different from the percolated theta-role, and almost every transitive light verb is a non-reflexive light verb. 11 In short, the Character role can be theta-identified with the Theme role, whereas the Agent role cannot, which resolves the long-standing puzzle: why can V-P be a constituent in the pseudo-passive, although it cannot be a constituent in its active counterpart?

Another issue we need to address is what happens when the verbal passive morpheme -en is merged with sit upon. It is quite straightforward why (33c) is ill-formed. Let us recall that the verbal passive -en assigns a defective theta-role, an adjunct theta-role, which does not permit theta-identification. Accordingly, there is no way for the theta-role of upon to be realized. The percolated Theme role in (33c) must not be assigned to an argument, in accordance with the Earliness Condition, yet it cannot be theta-identified with another theta-role. Therefore, (33c) is ill-formed. To conclude, only the adjectival passive morpheme en can be merged with sit upon.

3.3 Affected Theme vs. Non-Affected Theme

According to the Earliness Condition, V can be merged with P, forming a pseudo-passive, only if the percolated Theme can be identified with another theta-role. It can undergo theta-identification when the passive -en is adjectival and assigns a Character role. This implies that the pseudo-passive is permitted even with a verbal passive as long as the percolated thematic role can be theta-identified. This prediction is borne out. Thus far, I have claimed that the subject of the pseudo-passive is assigned a Character role by the adjectival passive morpheme -en. We have seen from (28-30) that the Character role is easily available when the Theme is affected, but not when the Theme is not affected. 12 However, (34b) and (35b) show that the pseudo-passive is permitted if the passive describes a characteristic of the raised Theme even though it is not affected.

(34) a. *Jeju City was walked around by his father. b. Jeju City can be walked around in a day.
(35) a. *The hotel was stayed in by my sister. b. The hotel can be stayed in by foreigners. 13

Generally speaking, it is hard to get the reading that the sentence is about a characteristic of the subject when the Theme is not affected. In (34a) and (35a) Jeju City and the hotel cannot be affected, and hence it is not surprising that they are not grammatical. However, (34b) and (35b) are well-formed although the Theme is not affected. It seems that the Character role can be assigned by a modal such as can.
Sentence (36b) is about a characteristic of the book, although (36a) is not.

(36) a. This book was read by John. b. This book can be read in a day.

This clearly shows that modals such as can can assign a Character role. In fact, Diesing (1992) proposes that even T can assign a property role when it takes an individual-level VP as its complement. The main claim made here is that a percolated theta-role must undergo theta-identification, and if can assigns a Character role, a well-formed sentence can be generated when a theta-role percolates. If so, even the verbal passive can be a source for the pseudo-passive with the help of a modal. I propose that in (34b) and (35b) the passive morpheme is not adjectival but verbal. In fact, walk around is not compatible with the adjectival passive morpheme, since its Theme is not affected. So it is merged with the verbal passive morpheme, and the Theme role percolates until it is theta-identified with the Character role of can. This analysis is based on the Earliness Condition in (23), according to which a theta-role can percolate only if it can be identified by another theta-role. In (28a), (29a), and (30a), the affected Theme undergoes theta-identification since the adjectival passive morpheme -en assigns a Character role, and in (34b) and (35b) the unaffected Theme undergoes theta-identification with the Character role of can. This claim amounts to saying that even the verbal passive can be a source for the pseudo-passive if the Character role can be assigned to the subject.

3.4 Account for the Puzzles

Now we are in a position to account for the two major puzzles revolving around the pseudo-passive: (i) why is it subject to the Characterization Condition, and (ii) why is it possible to move out of a Case position? According to the proposal advocated here, the two issues are related. The Case-related issue can be resolved if the verb sit can be merged with the preposition upon, and merger of sit with upon is permitted only when the resulting structure is merged with the adjectival passive morpheme en or the modal can, which assigns the Character role, thereby giving rise to the Characterization Condition.

Thus far, I have claimed that most pseudo-passives are adjectival passives. This is empirically supported by the fact that they display the properties of adjectival passives: (i) they can be used as prenominal modifiers, (ii) they can function as the complement of raising verbs like look, (iii) they are compatible with the negative affix un-, and (iv) they can be modified by an adverb like very.

(38) a. John is the most talked about player in the game. b. The bed looks slept in. c. Just ten years ago this would have been unheard of. d. Their living room is very lived in. (Wasow 1977, (90))
(39) a. After the tornado, the fields had a marched through look. b. Each unpaid for item will be returned. c. You can ignore any recently gone over accounts. d. His was not a well-looked on profession. e. They shared an unspoken (of) passion for chocolates. f. Filled with candy wrappers and crumpled bills, her bag always had a rummaged around in appearance. (Bresnan 1995, (16))
(40) a. a slept-in bed b. a much relied-upon technique (Bruening 2011: 2)

These all support the claim that most pseudo-passives are adjectival, 14 which is confirmed by the fact that the pseudo-passive does not permit the progressive aspect.

(41) a. *This bed is being slept in. b. *The hat is being sat upon.
Considering that the progressive aspect is compatible only with the verbal passive, we are led to the conclusion that the pseudo-passive is an adjectival passive. However, it is worthwhile to reiterate that even the verbal passive can produce the pseudo-passive with the help of modals such as can, when the Theme is not affected. Precisely speaking, the pseudo-passive is an adjectival passive when its Theme is affected, and it is a verbal passive when its Theme is not affected.

4 Conclusion

Let us summarize this paper. The passive sentences in (42a-b) are peculiar, since their subject appears to originate from the complement position of a preposition.

(42) a. Mary's innocence was taken advantage of. b. Many beds were slept in.

This puzzle can be resolved if the preposition is a part of a bigger predicate. The analysis in (43a) is plausible, since take advantage of can be taken to be a constituent in the corresponding active, but the one in (43b) is not, since sleep in cannot be a constituent in the active sentence.

(44) a. John [took advantage of] Mary's innocence. b. *John [slept in] this bed.

However, I have claimed that the analysis in (43b) is still tenable, because the passive morpheme en in (43b) is an adjectival en. The asymmetry between (43b) and (44b) does not undermine the claim that slept in is a constituent in the pseudo-passive, since there is no parallelism between the adjectival passive and its corresponding active in terms of the internal argument structure.

(10) a. [VP en [VP [VP take advantage] of Mary's honesty]]: Merger with be and T b. [T [be [VP en [VP [VP take advantage] of Mary's honesty]]]]: Raising to the SPEC-T c. [Advantage T [be [VP en [VP [VP take advantage] of Mary's honesty]]]]

(22) a. [V give(Theme)]: Theta-Role Percolation b. [V give(Theme)](Theme): Merger with en(Character) c. [A [V give(Theme)] en(Character)]: Theta-Identification d. [A [V give(Theme)] en(Character)](Character = Theme): Merger with un & Theta-Role Percolation e. [un [A [V give(Theme)] en(Character)](Character = Theme)](Character = Theme)

(24) a. [... X(Theme)]: Theta-Percolation b. [... X(Theme)](Theme): Merger with Z and Theta-Role Assignment c. [[... X(Theme)](Theme) Z(Theme)]

(26) a. [en [V give(Goal, Theme)]]: Theta-Role Percolation b. [en [V give(Goal, Theme)]](Goal, Theme)

(31) a. [V sit upon(theme)]: Theta-Role Percolation b. [V sit upon(theme)](theme): Merger with en(char) c. [en(char) [V sit upon(theme)](theme)]: Theta-Role Identification d. [en(char) [V sit upon(theme)](theme)](char = theme): Merger with this hat and Theta-Role Assignment e. [[en(char) [V sit upon(theme)](theme)](char = theme) this hat(char = theme)]: Merger with be and T f. [T [be [[en(char) [V sit upon(theme)](theme)](char = theme) this hat(char = theme)]]]

(32) a. [V sit upon(theme)]: Theta-Role Percolation b. [V sit upon(theme)](theme): Merger with v c. [v(Agent) [V sit upon(theme)](theme)]

(33) a. [V sit upon(theme)]: Theta-Role Percolation b. [V sit upon(theme)](theme): Merger with the verbal passive en c. [en [V sit upon(theme)](theme)]

(43) a. Mary's innocence was [vP en [VP [V take advantage of] Mary's innocence]] b. Many beds were [vP en [VP [V sleep in] many beds]]

1 The corresponding pseudo-passive sentence is well-formed: (i) Not the chair but the hat was sat upon.
2 Chomsky (1995) proposes that transitive verbs like hit consist of the light verb v and its corresponding intransitive hit. In this analysis the active counterpart of (8) looks like (i): (i) [vP v [VP [V [V [V take] advantage] of] Mary's honesty]]
3 Radford (1988) assumes that V and P undergo reanalysis in the course of the derivation.
In this paper, by contrast, I argue that V is merged with P from the start.
4 Drummond & Kush (2011) try to support the reanalysis approach by making use of raising-to-object.
5 On the other hand, Freidin (1975) and Emonds (2006) claim that all passive participles are adjectives.
6 See Williams (1994) for thematic role percolation.
7 Williams (1994) proposes that theta-percolation is blocked by a predicate that assigns an external theta-role.
8 It is usually known that only Theme percolates (Williams 1980). However, the Goal can percolate as well: (i) Victim remains denied her American nationality.
9 Let us recall that proposition-taking adjectives are usually raising predicates: (ii) a. It is likely that John will come to the party. b. John is likely to come to the party. Verbs of the deny-class take a proposition as their internal argument. What is denied in (iii) is the proposition that the victim bears a relation with her American nationality: (iii) They denied the victim her American nationality. I propose that when the adjectival morpheme en is merged with a proposition-taking verb, it patterns like the proposition-taking adjectives: it is a raising morpheme in that it does not assign the Character role. The raising morpheme can maintain the argument structure of its complement. Therefore, (i) is grammatical.
10 The possibility that the adjectival -en is merged with VP seems to be ruled out in (22). The un- is required to be merged with an X0-level constituent, which means that given must be X0. This claim amounts to saying that the adjectival en can co-occur with VP if there is no negative morpheme un-. To put it differently, it is predicted that both Theme and Goal can be realized if given is not attached by un-. This prediction is borne out: (i) She seemed given too much power. (Bruening 2014: 33) So I propose that when the adjectival -en is merged with VP, both Theme and Goal can be realized.
11 There are a few reflexive light verbs like shave and wash: (i) John {shaved, washed}.
12 This is reminiscent of the Affectedness Condition on preposing in passive nominals (Anderson 2005, 1979, 1977).
13 Notice that a by-phrase can be licensed in the pseudo-passive, as shown in (35b). This seems to support the claim that the pseudo-passive can be verbal. However, see Bruening (2014) for the claim that even the adjectival passive permits a by-phrase.
14 Many linguists, including Bruening (2011), assume that the pseudo-passive is a verbal passive and that sentences (32-34) are adjectival passives derived from verbal passives. However, I argue that they are well-formed, since pseudo-passives are adjectival.

Anderson, Mona. 2005. Affectedness. In Martin Everaert and Henk van Riemsdijk, eds., The Blackwell Companion to Syntax, vol. 1. Malden, MA: Blackwell.
Anderson, Mona. 1979. Noun Phrase Structure. Ph.D. dissertation, University of Connecticut.
Anderson, Mona. 1977. NP Pre-posing in Noun Phrases. Proceedings of the North Eastern Linguistic Society 8: 12-21. Amherst: Graduate Linguistics Student Association.
Baltin, Mark and Paul M. Postal. 1996. More on reanalysis hypotheses. Linguistic Inquiry 27: 127-145.
Bresnan, Joan. 1995. Lexicality and Argument Structure. Paper presented at the Paris Syntax and Semantics Conference.
Bruening, Benjamin. 2014. Word Formation is Syntactic: Adjectival Passives in English. Natural Language and Linguistic Theory 32: 363-422.
Bruening, Benjamin. 2011. Pseudopassives, Expletive Passives, and Locative Inversion. Ms., University of Delaware.
Chomsky, Noam. 1995. The Minimalist Program. Cambridge, Mass.: MIT Press.
Chomsky, Noam. 1981. Lectures on Government and Binding. Dordrecht: Foris.
Diesing, Molly. 1992. Indefinites. Cambridge, Mass.: MIT Press.
Drummond, Alex and Dave Kush. 2011. Reanalysis as Raising to Object. Ms., University of Maryland.
Emonds, Joseph E. 2006. Adjectival Passives. In Martin Everaert and Henk van Riemsdijk, eds., The Blackwell Companion to Syntax, vol. 1: 16-60. Oxford: Blackwell.
Freidin, Robert. 1975. The analysis of passives. Language 51: 384-405.
Higginbotham, James. 1985. On semantics. Linguistic Inquiry 16: 547-593.
Koster, Jan. 1987. Domains and Dynasties: The Radical Autonomy of Syntax. Dordrecht: Foris.
Postal, Paul M. 1986. Studies of Passive Clauses. Albany: State University of New York Press.
Radford, Andrew. 1988. Transformational Grammar: A First Course. Cambridge: Cambridge University Press.
Wasow, Thomas. 1977. Transformations and the Lexicon. In P. Culicover, A. Akmajian, and T. Wasow, eds., Formal Syntax, 327-360. New York: Academic Press.
Williams, Edwin. 1994. Thematic Structure in Syntax. Cambridge, Mass.: MIT Press.
Williams, Edwin. 1980. Argument structure and morphology. The Linguistic Review 1: 81-114.
3,062,643
Two-Phase Biomedical NE Recognition based on SVMs
Using SVMs for named entity recognition, we are often confronted with the multi-class problem. The larger the number of classes is, the more severe the multi-class problem becomes. In particular, the one-vs-rest method is apt to degrade performance by generating a severely unbalanced class distribution. In this study, to tackle the problem, we take a two-phase named entity recognition method based on SVMs and a dictionary: in the first phase, we identify each entity with a SVM classifier and post-process the identified entities with a simple dictionary look-up; in the second phase, we classify the semantic class of the identified entity with SVMs. By dividing the task into the two subtasks of entity identification and semantic classification, the unbalanced class distribution problem can be alleviated. Furthermore, we can select the features relevant to each task and take an alternative classification method according to the task. The experimental results on the GENIA corpus show that the proposed method is effective not only in reducing the training cost but also in improving performance: the identification performance is about 79.9 (F β=1) and the semantic classification accuracy is about 66.5 (F β=1).
[ 10262770, 14533915 ]
Two-Phase Biomedical NE Recognition based on SVMs

Ki-Joong Lee kjlee@nlp.korea.ac.kr, Young-Sook Hwang yshwang@nlp.korea.ac.kr, Hae-Chang Rim rim@nlp.korea.ac.kr, Department of Computer Science & Engineering, Korea University, 1, 5-ka, Anam-dong, Seoul 136-701, Korea

Using SVMs for named entity recognition, we are often confronted with the multi-class problem. The larger the number of classes is, the more severe the multi-class problem becomes. In particular, the one-vs-rest method is apt to degrade performance by generating a severely unbalanced class distribution. In this study, to tackle the problem, we take a two-phase named entity recognition method based on SVMs and a dictionary: in the first phase, we identify each entity with a SVM classifier and post-process the identified entities with a simple dictionary look-up; in the second phase, we classify the semantic class of the identified entity with SVMs. By dividing the task into the two subtasks of entity identification and semantic classification, the unbalanced class distribution problem can be alleviated. Furthermore, we can select the features relevant to each task and take an alternative classification method according to the task. The experimental results on the GENIA corpus show that the proposed method is effective not only in reducing the training cost but also in improving performance: the identification performance is about 79.9 (F β=1) and the semantic classification accuracy is about 66.5 (F β=1).

1 Introduction

Knowledge discovery in the rapidly growing area of biomedicine is very important. While most of this knowledge is provided in a vast amount of text, it is impossible to grasp the huge amount of knowledge provided in the form of natural language. Recently, computational text analysis techniques based on NLP have received attention in bioinformatics. Recognizing named entities such as proteins, DNAs, RNAs, and cells has become one of the most fundamental tasks in biomedical knowledge discovery.

Conceptually, named entity recognition consists of two tasks: identification, which finds the boundaries of a named entity in a text, and classification, which determines the semantic class of that named entity. Many machine learning approaches have been applied to biomedical named entity recognition (Nobata, 1999; Hatzivassiloglou, 2001; Kazama, 2002). However, no work has achieved sufficient recognition accuracy. One reason is the lack of annotated corpora; this has been somewhat alleviated by the announcement of the GENIA corpus v3.0 (GENIA, 2003). Another reason is that it is difficult to recognize biomedical named entities by using the general features that work for named entities in newswire articles. In addition, since non-entity words far outnumber entity words in biomedical documents, the class distribution under the class representation combining a B/I/O tag with a semantic class C is so severely unbalanced that training costs too much time and too many resources, especially with SVMs (Hsu, 2001).

Therefore, Kazama and his colleagues tackled these problems by tuning SVMs (Kazama, 2002). They split the classes with unbalanced class distributions into several subclasses to reduce the training cost. In order to solve the data sparseness problem, they explored various features such as word cache features and HMM state features. According to their report, the word cache and HMM state features had a positive effect on the performance improvement.
However, they did not separate the identification task from the semantic classification; they tried to classify the named entities in one integrated process. Yet the features useful for identifying a biomedical entity differ from those for semantically classifying it. For example, while the orthographical characteristics and the part-of-speech tag sequence of an entity are strongly related to identification, they are only weakly related to semantic classification. On the other hand, context words seem to provide useful clues to the semantic classification of a given entity. Therefore, we separate the identification task from the semantic classification task and select different features according to the task. This approach enables us to address the unbalanced class distribution problem which often occurs in a single complicated approach. Besides, to improve the performance, we post-process the results of the SVM classifiers by utilizing a dictionary: we adopt a simple dictionary look-up method to correct the errors made by the SVMs in the identification phase. Through experiments, we will show how separating the entity recognition task into two subtasks contributes to improving the performance of biomedical named entity recognition, and we will show the effect of the hybrid approach of the SVMs and the dictionary look-up.

2 Definition of the Named Entity Classification Problem

We divide named entity recognition into two subtasks: the identification task, which finds the regions of the named entities in a text, and the semantic classification task, which determines their semantic classes. Figure 1 illustrates the proposed method, which is called the two-phase named entity recognition method.

Figure 1: Examples of Biomedical Named Entity Recognition

The identification task is formulated as the classification of each word into one of two classes, T or O, which represent region information. The region information is encoded by using a simple T/O representation: T means that the current word is a part of a named entity, and O means that the word is not in a named entity. With this representation, we need only one binary SVM classifier for the two classes T and O. The semantic classification task is to assign one of the semantic classes to the identified entity. In the semantic classification phase, we need to classify only the identified entities into one of the N semantic classes, because the entities have already been identified; non-entity words are ignored in this phase. The classes to be distinguished are only the N semantic classes. Note that the total number of classes, N + 1, is remarkably small compared with the number 2N + 1 required in the complicated recognition approaches, in which a class is represented by combining region information (B/I/O) with a semantic class C. This considerably reduces the workload in named entity recognition.

Especially when using SVMs, the number of classes is very critical to the training in terms of training time and required resources. Let L be the number of training samples and let N be the number of classes. Then the one-vs-rest method takes N × O(L) in the training step. The complicated approach with the B/I/O notation requires (2N + 1) × O(L_words), where L_words is the total number of words in the training corpus. In contrast, the proposed approach requires N × O(L_entities) + O(L_words), where L_entities is the number of entities. It is a considerable reduction in the training cost.
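To make the decomposition concrete, here is a minimal sketch (in Python; an illustration under stated assumptions, not the authors' code, with the usual B/I/O tag strings assumed) that derives the phase-1 T/O training examples and the phase-2 entity instances from one annotated sentence, and then compares the number of training examples consumed by the one-phase and the two-phase schemes using the corpus statistics reported in Section 5.1 (128,463 words and 20,203 entities in the 590-abstract training set, with N = 22 classes).

    # Toy illustration of the two-phase decomposition (not the authors' code).
    # Phase 1 trains a single binary T/O classifier over all words; phase 2
    # trains one-vs-rest classifiers over the identified entities only.

    def split_phases(tagged_tokens):
        """tagged_tokens: list of (word, tag) pairs with tags such as
        'B-protein', 'I-protein', 'O'.  Returns the phase-1 T/O examples
        and the phase-2 (entity words, semantic class) instances."""
        to_examples, entities = [], []
        words, cls = [], None
        for word, tag in tagged_tokens + [("", "O")]:  # sentinel flushes the last entity
            to_examples.append((word, "T" if tag != "O" else "O"))
            if tag.startswith("B-"):
                if words:
                    entities.append((words, cls))
                words, cls = [word], tag[2:]
            elif tag.startswith("I-") and words:
                words.append(word)
            else:
                if words:
                    entities.append((words, cls))
                words, cls = [], None
        return to_examples[:-1], entities              # drop the sentinel example

    sent = [("RNA", "B-RNA"), ("polymerase", "I-RNA"), ("II", "I-RNA"),
            ("initiates", "O"), ("transcription", "O")]
    print(split_phases(sent))
    # ([('RNA', 'T'), ('polymerase', 'T'), ('II', 'T'),
    #   ('initiates', 'O'), ('transcription', 'O')],
    #  [(['RNA', 'polymerase', 'II'], 'RNA')])

    # Rough cost comparison with the Section 5.1 statistics and N = 22:
    N, L_words, L_entities = 22, 128463, 20203
    one_phase = (2 * N + 1) * L_words           # 45 B/I/O-class classifiers over all words
    two_phase = 1 * L_words + N * L_entities    # one T/O pass plus 22 entity-level passes
    print(one_phase, two_phase)                 # 5780835 versus 572929 training examples

Under these figures the two-phase scheme touches roughly one tenth as many example-classifier pairs, and since SVM training time grows super-linearly in the number of samples, the actual saving is larger still.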
Ultimately, it affects the performance of the entity recognizer. To achieve high performance on the defined tasks, we use SVMs (Joachims, 2002), a machine learning approach which has shown the best performance in various NLP tasks, and we post-process the classification results of the SVMs by utilizing a dictionary.

Figure 2 outlines the proposed two-phase named entity recognition system. At each phase, each SVM classifier outputs the class with the best score. For multi-class classification based on the binary SVM classifier, we use the one-vs-rest classification method and the linear kernel in both tasks. Furthermore, to correct the errors made by the SVMs, an entity-word dictionary constructed from the training corpus is utilized in the identification phase. The dictionary is searched to check whether the boundary words of an identified entity were excluded, because the boundary words of an entity might be excluded during the entity identification. If a boundary word was excluded, then we concatenate the left or the right word adjacent to the identified entity. This post-processing may enhance the capability of the entity identifier.

3 Biomedical Named Entity Identification

The named entity identification is defined as the classification of each word into one of the classes that represent the region information. The region information is encoded by using the simple T/O representation: T means that the current word is a part of a named entity, and O means that the current word is not in a named entity. The above representation yields two classes for the task, and we build just one binary SVM classifier for them. By accepting the results of the SVM classifier, we determine the boundaries of an entity. To correct boundary errors, we post-process the identified entities with the entity-word dictionary.

3.1 Features for Entity Identification

An input x to a SVM classifier is a feature representation of the target word to be classified and its context. We use a bit-vector representation. The features of the designated word are composed of the orthographical characteristics, the prefix, the suffix, and the lexical form of the word. Table 1 shows all of the 24 orthographical features. Each feature may be a discriminative feature appearing in biomedical named entities such as proteins, DNAs, and RNAs; the name of a protein, DNA, or RNA is typically composed by combining an alphanumeric string with several characters such as Greek letters or special symbols. The suffix/prefix, designated word, and context word features are defined as follows:

w_{k,i} = 1 if the word at position k is the i-th word in the vocabulary V, 0 otherwise
pos_{k,i} = 1 if the word at position k is assigned the i-th POS tag in the POS tag list, 0 otherwise
pre_i = 1 if the designated word contains the i-th prefix in the prefix list, 0 otherwise
suf_i = 1 if the designated word contains the i-th suffix in the suffix list, 0 otherwise

In the definitions, k is the relative word position from the target word: a negative value represents a preceding word, a positive value represents a following word, and k = 0 represents the designated word itself. Among these features, the part-of-speech tag sequence of the word and the context words is a kind of syntactic rule for composing an entity, and the lexical information is a sort of filter for identifying an entity that is as semantically cohesive as possible.

3.2 Post-Processing by Dictionary Look-Up

After classifying the given instances, we post-process the identified entities. During the post-processing, we scan the identified entities and examine the words adjacent to them. If the part-of-speech of an adjacent word belongs to one of the groups adjective, noun, or cardinal, then we look up the dictionary to check whether the word is in it. If it exists in the dictionary, we include the word in the entity region.
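The look-up step just described can be sketched as follows. This is a minimal illustration under stated assumptions: the PTB tag subset, the lowercasing, and the repetition of the check outward are our choices, since the text only states that an adjacent adjective, noun, or cardinal found in the dictionary is absorbed.

    # Sketch of the dictionary look-up post-processing: widen an identified
    # entity over adjacent words whose POS is adjective/noun/cardinal and
    # which occur in the entity-word dictionary built from the training set.

    EXTENDABLE_POS = {"JJ", "NN", "NNS", "NNP", "CD"}   # assumed PTB tag subset

    def extend_entity(tokens, pos_tags, start, end, dictionary):
        """tokens/pos_tags: parallel lists for one sentence; [start, end) is
        an identified entity span; dictionary: set of words observed inside
        training entities (stopwords excluded).  Returns the widened span."""
        while start > 0 and pos_tags[start - 1] in EXTENDABLE_POS \
                and tokens[start - 1].lower() in dictionary:
            start -= 1                                  # absorb the left neighbour
        while end < len(tokens) and pos_tags[end] in EXTENDABLE_POS \
                and tokens[end].lower() in dictionary:
            end += 1                                    # absorb the right neighbour
        return start, end

    # The Figure 3 example: 'cell' and 'factor' flank the identified span
    # 'cycle-dependent transcription' and both occur in the dictionary.
    tokens = ["the", "cell", "cycle-dependent", "transcription", "factor"]
    tags   = ["DT",  "NN",   "JJ",              "NN",            "NN"]
    entity_dict = {"cell", "factor", "transcription"}
    print(extend_entity(tokens, tags, 2, 4, entity_dict))   # (1, 5)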
The dictionary is constructed from the words constituting the named entities in the training corpus; stopwords are ignored. Figure 3 illustrates the post-processing algorithm.

Figure 3: An example of the post-processing of an entity identification

In Figure 3, the word cell, adjacent to the left of the identified entity cycle-dependent transcription, has the part-of-speech NN and exists in the dictionary. The word factor, adjacent to the right of the entity, has the part-of-speech NN; it exists in the dictionary, too. Therefore, we include the words cell and factor in the entity region and change the position tags of the words in the entity. By this post-processing method, we can correct errors made by the SVM classifier. It also has the great effect of overcoming the low-coverage problem of the small-sized entity dictionary.

4 Semantic Classification of Biomedical Named Entities

The objects of the semantic tagging are the entities identified in the identification phase. Each entity is assigned a proper semantic class by voting of the SVM classifiers.

4.1 Features for Semantic Classification

For semantically tagging an entity, an input x to a SVM classifier is represented by a feature vector composed of the following features:

fw_i = 1 if the entity contains the i-th word in the functional word list, 0 otherwise
inw_i = 1 if one of the last three words of the entity is the i-th word in the inside context word list, 0 otherwise
lcw_i = 1 if a noun or verb word in the left context is the i-th word in the left context word list, 0 otherwise
rcw_i = 1 if a noun or verb word in the right context is the i-th word in the right context word list, 0 otherwise

Of the above features, fw_i checks whether the entity contains one of the functional words. The functional words are similar to the feature terms used by (Fukuda, 1998). For example, functional words such as factor, receptor, and protein are very helpful for classifying named entities into protein, and functional words such as gene, promoter, and motif are very useful for classifying DNA.

In the case of the context features of a given entity, we divide them into two kinds: inside context features and outside context features. As inside context features, we take at most three words from the back end of the entity. 1 We make a list of the inside context words by collecting the words in the range of the inside context. If one of the three words is the i-th word in the inside context word list, we set the inw_i bit to 1. The outside context features are grouped into left ones and right ones. For the left and the right context features, we restrict them to the noun or verb words in the sentence, whose position is not specified. This grouping has the effect of alleviating the data sparseness problem that arises when using words as features.

For example, consider the following sentence with the entity RNA polymerase II: "General transcription factors are required for accurate initiation of transcription by RNA polymerase II [PROTEIN]." The nouns transcription, factor, and initiation and the verbs are and required are selected as left context features, and the words RNA, polymerase, and II are selected as inside context features. The bit field corresponding to each of the selected words is set to 1. In this case, there are no right context features. And since the entity contains the functional word RNA, the bit field of RNA is set to 1.

For classifying a given entity, we build as many SVM classifiers as there are semantic classes. We take the linear kernel and the one-vs-rest classification method.
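Before moving on to the experiments, the shape of this feature vector can be illustrated with a toy encoding of the RNA polymerase II example. All four word lists below are illustrative stand-ins for the corpus-derived lists, so only the layout of the vector, not its contents, should be taken literally.

    # Toy bit-vector for semantic classification: functional-word bits (fw),
    # inside-context bits (inw), and left/right outside-context bits (lcw/rcw).

    FUNCTIONAL = ["factor", "receptor", "protein", "gene", "promoter", "motif", "RNA"]
    INSIDE     = ["RNA", "polymerase", "II"]
    LEFT       = ["transcription", "factor", "initiation", "are", "required"]
    RIGHT      = ["activates", "binds"]

    def encode(entity_words, left_words, right_words):
        fw  = [1 if w in entity_words      else 0 for w in FUNCTIONAL]
        inw = [1 if w in entity_words[-3:] else 0 for w in INSIDE]   # last three words
        lcw = [1 if w in left_words        else 0 for w in LEFT]
        rcw = [1 if w in right_words       else 0 for w in RIGHT]
        return fw + inw + lcw + rcw

    x = encode(["RNA", "polymerase", "II"],
               ["transcription", "factor", "initiation", "are", "required"],
               [])       # the example sentence has no right context
    print(x)             # the fw bit for 'RNA' and all inw and lcw bits are set; rcw is all zero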
5 Experiments

5.1 Experimental Environments

Experiments have been conducted on the GENIA corpus (v3.0p) (GENIA, 2003), which consists of 2,000 MEDLINE abstracts annotated with Penn Treebank (PTB) POS tags. There exist 36 distinct semantic classes in the corpus. However, we used 22 semantic classes, namely all classes except the subclasses of protein, DNA, and RNA in the GENIA ontology. 2 The corpus was transformed into a B/I/O-annotated corpus to represent entity boundaries and a semantic class. We divided the 2,000 abstracts into 10 collections for 10-fold cross validation. Each collection contains not only abstracts but also paper titles. The vocabularies for the lexical features and the prefix/suffix lists were constructed by taking the most frequent 10,000 words from the training part only.

Also, we set up another experimental environment to compare with the previous work by (Kazama, 2002). From the GENIA corpus, 590 abstracts (4,808 sentences; 20,203 entities; 128,463 words) were taken as a training part and 80 abstracts (761 sentences; 3,327 entities; 19,622 words) were selected as a test part. Because we could not make the experimental environment exactly the same as Kazama's, we tried to make a comparable environment.

We implemented our method using the SVM-light package (Joachims, 2002). Though various learning parameters can significantly affect the performance of the resulting classifiers, we used the SVM system with the linear kernel and default options. The performance was evaluated by precision, recall, and F β=1. The overall F β=1 scores for the two models and the ten collections were calculated using 10-fold cross validation on the total test collection.

5.2 Effect of Training Data Size

In this experiment, varying the size of the training set, we observed the change of F β=1 in the entity identification and the semantic classification. We fixed the test data at 200 abstracts (1,921 sentences; 50,568 words). Figure 4 shows that the performance improved as the training set size increased. As the performance of the identification increases, the gap between the performance of the identification and that of the semantic classification gradually decreases.

Figure 4: Performance shift according to the increase of training data size (w/o post-processing)

5.3 Computational Efficiency

When using the one-vs-rest method, the number of negative samples is very critical to the training in terms of training time and required resources. The SVM classifier for entity identification determines whether each word is included in an entity or not. Figure 5 shows that there are many more negative samples than positive samples in the identification phase. Once entities are identified, non-entity words are not considered in the next semantic classification phase. Therefore, the proposed method can effectively remove the unnecessary samples, which enables us to save the training costs effectively.

Figure 5: Training size vs. positive and negative sample size in the identification phase and the semantic classification phase

Furthermore, the proposed method can effectively decrease the degree of unbalance among the classes by simplifying the classes. Figure 6 shows how much the proposed method can alleviate the unbalanced class distribution problem compared with the 1-phase complicated classification model. However, even though the unbalanced class distribution problem can be alleviated in the identification phase, we still suffer from the problem in the semantic classification as long as we take the one-vs-rest method. This indicates that we need to take another classification method, such as a pairwise method, in the semantic classification (Krebel, 1999).

Figure 6: 2-phase model vs. 1-phase model: change of the negative and the positive sample size according to the training data size

5.4 Discriminative Feature Selection

We subsequently examined several alternatives for the feature sets described in Section 3.1 and Section 4.1. Column (A) in Table 2 shows the identification cases. The base feature set consisted of only the designated word and the context words in the range from the left 2 to the right 2.
Several alternatives for feature sets were constructed by adding a different combination of features to the base feature set. From Table 2, we can see that part-of-speech information certainly improves the identification accuracy (about +2.8). Prefix and suffix features had a positive effect, but only a modest one (about +1.2 on average).

Column (B) in Table 2 shows the semantic classification cases, with the identification phase of the best performance. We took the feature set composed of the inside words of an entity as the base feature set, and we made several alternatives by adding other features. The experimental results show that functional words and left context features are useful, but right context features are not. Furthermore, part-of-speech information was not effective in the semantic classification while it was useful for the entity identification. That is, when we took the part-of-speech tags of the inside context words instead of the inside context words themselves, the performance of the semantic classification was very low (F β=1 was 25.1).

5.5 Effect of Post-Processing by Dictionary Look-Up

Our two-phase model has the problem that identification errors are propagated to the semantic classification. For this reason, it is necessary to ensure a high accuracy of the boundary identification by adopting a method such as post-processing of the identified entities. Table 3 shows that the post-processing by dictionary look-up is effective in improving not only the boundary identification accuracy (79.2 vs. 79.9) but also the semantic classification accuracy (66.1 vs. 66.5). When comparing with (Kazama, 2002), even though the environments are not the same, the proposed two-phase model showed much better performance in both the entity identification (73.6 vs. 81.4) and the entity classification (54.4 vs. 68.0). One of the reasons for the performance improvement is that we could take discriminative features for each subtask by separating the task into two subtasks.

6 Conclusion

In this paper, we proposed a new method of two-phase biomedical named entity recognition based on SVMs and dictionary look-up. In the first phase, we tried to identify each entity with one SVM classifier and to post-process the identified entities with a simple dictionary look-up that corrects the errors made by the SVM. In the second phase, we tried to classify the identified entity into its semantic class by voting of the SVMs. By dividing the task into the two subtasks of identification and semantic classification, we could select more relevant features for each task and take an alternative classification method according to the task. This resulted not only in the mitigation of the unbalanced class distribution problem but also in the improvement of the performance of the overall tasks.

Table 1: Orthographical characteristic features of the designated word (Orthographic Feature: examples)
DIGITS: 1, 39
SINGLE CAP: A, M
COMMA: ,
PERIOD: .
HYPHEN: -
SLASH: /
QUESTION MARK: ?
OPEN SQUARE: [
CLOSE SQUARE: ]
OPEN PAREN: (
CLOSE PAREN: )
COLON: :
SEMICOLON: ;
PERCENT: %
APOSTROPHE: '
ETC SYMBOL: +, *, etc.
TWO CAPS: alphaCD28
ALL UPPER: AIDS
INCLUDE CAPS: c-Jun
GREEK LETTER: NF-kappa
ALPHA NUMERIC: p65
ALL LOWER: motif
CAPS DIGIT: CD40
INIT CAP: Rel
Table 2: Effect of each feature set (training with 900 abstracts, test with 100 abstracts): (A) identification phase, (B) semantic classification phase. (Only the caption survives extraction.)

Table 3: Performance comparison with and without post-processing (F β=1): (A) 10-fold cross validation (training with 1,800 abstracts, test with 200 abstracts), (B) training with 590 abstracts, test with 80 abstracts, compared against (Kazama, 2002). (Only fragments of the data rows survive extraction; the key figures are quoted in the text above.)

Footnotes: The average length of entities is about 2.2 in the GENIA corpus. All subclasses of protein, such as protein molecule and protein family or group, were regarded as protein.

References

N. Collier, C. Nobata, and J. Tsujii. 2000. Extracting the Names of Genes and Gene Products with a Hidden Markov Model. In Proc. of COLING 2000, pages 201-207.
K. Fukuda, T. Tsunoda, A. Tamura, and T. Takagi. 1998. Information extraction: identifying protein names from biological papers. In Proc. of the Pacific Symposium on Biocomputing '98 (PSB'98).
GENIA Corpus 3.0p. 2003. Available at http://www-tsujii.is.s.u-tokyo.ac.jp/
V. Hatzivassiloglou, P. A. Duboue, and A. Rzhetsky. 2001. Disambiguating proteins, genes, and RNA in text: a machine learning approach. Bioinformatics, 17 Suppl. 1.
C. Hsu and C. Lin. 2001. A comparison of methods for multi-class support vector machines. Technical report, National Taiwan University, Taiwan.
T. Joachims. 1998. Making Large-Scale SVM Learning Practical. LS8-Report 24, Universität Dortmund.
T. Joachims. 2000. Estimating the generalization performance of an SVM efficiently. In Proc. of the Seventeenth International Conference on Machine Learning, pages 431-438. Morgan Kaufmann.
SVM Light. 2002. Available at http://svmlight.joachims.org/
J. Kazama, T. Makino, Y. Ohta, and J. Tsujii. 2002. Tuning support vector machines for biomedical named entity recognition. In Proc. of the ACL-02 Workshop on Natural Language Processing in the Biomedical Domain, pages 1-8.
U. H.-G. Kreßel. 1999. Pairwise Classification and Support Vector Machines. In B. Schölkopf and C. J. C. Burges, editors, Advances in Kernel Methods: Support Vector Learning, pages 255-268. The MIT Press, Cambridge, MA.
C. Nobata, N. Collier, and J. Tsujii. 1999. Automatic term identification and classification in biology texts. In Proc. of the 5th NLPRS, pages 369-374.
B. J. Stapley, L. A. Kelley, and M. J. E. Sternberg. 2002. Predicting the Sub-Cellular Location of Proteins from Text Using Support Vector Machines. In Proc. of the Pacific Symposium on Biocomputing 7, pages 374-385.
V. Vapnik. 1998. Statistical Learning Theory. Wiley, New York.
33,353,746
A Computational Perspective on the Romanian Dialects
In this paper we conduct an initial study on the dialects of Romanian. We analyze the differences between Romanian and its dialects using the Swadesh list. We analyze the predictive power of the orthographic and phonetic features of the words, building a classification problem for dialect identification.
[ 7116388, 4654823, 18987788, 16228781, 18954389, 490045 ]
A Computational Perspective on the Romanian Dialects

Alina Maria Ciobanu (alina.ciobanu@my.fmi.unibuc.ro) and Liviu P. Dinu (ldinu@fmi.unibuc.ro)
Faculty of Mathematics and Computer Science, Center for Computational Linguistics, University of Bucharest

Keywords: Romanian, dialects, language similarity

In this paper we conduct an initial study on the dialects of Romanian. We analyze the differences between Romanian and its dialects using the Swadesh list. We analyze the predictive power of the orthographic and phonetic features of the words, building a classification problem for dialect identification.

Introduction and Related Work

The rapid development of online repositories has led to a significant increase in the number of multilingual documents, allowing users from all over the world to access information that has never been available before. This accelerated growth has created a pressing need to overcome the language barrier by developing methods and tools for processing multilingual information. Nowadays, NLP tools for the official languages spoken in the European Union and for the most popular languages are constantly created and improved. However, there are many other language varieties and dialects that could benefit from such NLP tools. The effort of building NLP tools for resource-poor language varieties and dialects can be reduced by adapting the tools of related languages for which more resources are available. The importance of adapting NLP tools from resource-rich to resource-poor closely related languages has been acknowledged by the research community and has materialized through multiple events, such as the workshop on Language Technology for Closely Related Languages and Language Variants (Nakov et al., 2014) or the workshop on Applying NLP Tools to Similar Languages, Varieties and Dialects (Zampieri et al., 2014).

A related problem occurs when researchers are interested in the cultural heritage of small communities who have developed their own techniques of communication and prefer using dialects instead of the official language of the region they live in. In some situations, these dialects are close enough to the standard language, but in others the difference is substantial, so much so that some dialects have become languages of their own (for example Friulian, spoken in the North-East of Italy). These matters raise interesting research problems, since many such dialects are used only in speaking. Moreover, they often tend to be used only in very specific situations (such as speaking in the family), very rarely being taught in schools. Thus, many dialects are in danger of extinction, according to the UNESCO list of endangered languages (Moseley, 2010).

In this paper, we conduct an initial study on the dialects of Romanian. This investigation has the purpose of providing a deeper understanding of the differences between dialects, which would aid the adaptation of existing NLP tools for related varieties. The aim of our investigation is to assess the orthographic and phonetic differences between the dialects of Romanian. In this paper, we quantify only the orthographic and phonetic differences, but morphology and syntax are other important aspects which contribute to the individualization of each variety, which we leave for further study. Previously, Tonelli et al.
(2010) proposed such an adaptation for a morphological analyzer for Venetan. Similarly, Kanayama et al. (2014) built a dependency parser for Korean, leveraging resources (transfer learning) from Japanese. They showed that Korean sentences could be successfully parsed using features learnt from a Japanese training corpus. In the field of machine translation, Aminian et al. (2014) developed a method for translating from dialectal Arabic into English in which they reduce the OOV ratio by exploiting resources from standard Arabic. Although dialects and varieties have been investigated for other languages, such as Spanish and Portuguese (Zampieri and Gebre, 2012; Zampieri et al., 2013), the Romanian dialects have not received much attention in NLP. To our knowledge, although the syllabic structure of Aromanian has been previously investigated (Nisioi, 2014), this is one of the very first computational comparative studies on the Romanian dialects.

The Romanian Dialects

Romanian is a Romance language, belonging to the Italic branch of the Indo-European language family, and is of particular interest regarding its geographic setting. It is surrounded by Slavic languages, and its relationship with the main body of Romance languages has been difficult. According to Tagliavini (1972), Romanian was isolated for a long period from the Latin culture, in an environment of different languages. Joseph (1999) emphasizes the reasons which make Romanian of special interest to linguists with comparative interests. Besides general typological comparisons that can be made between any two or more languages, Romanian can be studied based on comparisons of a genetic and geographical nature. Joseph further states that, regarding genetic relationships, Romanian can be studied in the context of the languages most closely related to it, and that the well-studied Romance languages enable comparisons that might not be possible otherwise, within less well documented families of languages. Romanian is of particular interest also regarding its geographic setting, participating in numerous areally-based similarities that define the Balkan convergence area. Romanian is spoken by over 24 million people as a native language, out of which 17 million are located in Romania, and most of the others in territories that surround Romania (Lewis et al., 2015). According to most Romanian linguists (Puşcariu, 1976; Petrovici, 1970; Caragiu Marioţeanu, 1975), Romanian has four dialects:

• Daco-Romanian, or Romanian (RO) - spoken primarily in Romania and the Republic of Moldova, where it has an official status.
• Macedo-Romanian, or Aromanian (AR) - spoken in relatively wide areas in Macedonia, Albania, Greece, Bulgaria, Serbia and Romania.
• Megleno-Romanian (ME) - spoken in a narrower area in the Meglen region, in the South of the Balkan Peninsula.
• Istro-Romanian (IS) - spoken in a few villages in the North-East of the Istrian Peninsula in Croatia. It is much closer to Italy than to Romania from a geographical point of view, but shows obvious similarities with Romanian. It seems that the community of Istro-Romanians has existed there since before the 12th century. Istro-Romanian is today on the UNESCO list of endangered languages.1

Romanian was originally a single language, a descendant of Oriental Latin, spoken in the regions around the romanized Danube: Moesia Inferior and Superior, Dacia and Pannonia Inferior (Rosetti, 1966).
The period of common Romanian began in the 7th-8th century and ended in the 10th century, when a part of the population migrated to the South of the Danube, beginning the creation of the dialects. Densuşianu (1901) places the migration to the South even earlier in time, in the 6th and 7th century. Thus, starting with the 10th century, given a series of political, military, economic and social events, the 4 dialects of Romanian were born: Daco-Romanian (to the North of the Danube), and Aromanian, Megleno-Romanian and Istro-Romanian (to the South of the Danube). Among these dialects, only Daco-Romanian could develop into a national standard language, in the context of several political and historical factors, leading to the Romanian language that is spoken today inside the borders of Romania. The other three dialects are spoken in communities spread across different countries. An explanation for this fact is the settlement of the Slavic peoples to the South of the Danube, which led, among other things, to the dispersion of the groups that spoke the three dialects to the South of the Danube. According to the Ethnologue (Lewis et al., 2015), the three dialects to the South of the Danube developed between the 5th and the 10th century, while according to Rosetti (1966), this process took place after the 10th century. Thus, according to Rosetti (1966), Aromanian and Megleno-Romanian developed in the 11th century, while Istro-Romanian developed in the 13th century. Rosetti (1966) states that there are, actually, two main dialects of Romanian: Daco-Romanian and Aromanian, the other two being derived from them (Megleno-Romanian derived from Aromanian and Istro-Romanian derived from Daco-Romanian).

Experiments

In this section we describe our investigations and experiments on the dialects of Romanian. We are mainly interested in assessing the differences between the dialects from the South of the Danube and Daco-Romanian. We henceforth refer to Daco-Romanian, the standard language spoken in Romania, as Romanian.

Data

We use a dataset of 108 words comprising the short Swadesh list for the Romanian dialects.2 The Swadesh list has been widely used in lexicostatistics and comparative linguistics to investigate the classification of languages (Dyen et al., 1992; McMahon and McMahon, 2003). The dataset is provided in two versions: orthographic and phonetic. In Figure 1 we represent the average word length (considering the orthographic form) for the Romanian dialects. Istro-Romanian has the shortest words, followed by Romanian. Megleno-Romanian and Aromanian have slightly longer words, on average, but the differences are not significant. The orthographic or phonetic distance has been widely used for analyzing related words and for reconstructing phylogenies (Kondrak, 2004; Delmestri and Cristianini, 2012). We use the edit distance to observe how close the Romanian dialects are to one another (Table 1). The edit distance (Levenshtein, 1965) counts the minimum number of operations (insertion, deletion and substitution) required to transform one string into another. We use a normalized version of this metric, dividing the edit distance by the length of the longer string. Using the orthographic form of the words (see also Figure 2), Aromanian words are closest to the Romanian words (0.44), followed by Megleno-Romanian (0.47) and Istro-Romanian (0.55).
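A minimal sketch of the normalized edit distance used above (standard dynamic-programming Levenshtein divided by the longer word's length; the word pair is illustrative):

```python
def normalized_edit_distance(a: str, b: str) -> float:
    """Levenshtein distance divided by the length of the longer string."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[m][n] / max(m, n) if max(m, n) else 0.0

# Illustrative pair: Romanian 'roşu' (red) vs. a dialectal form 'aroşu'
print(round(normalized_edit_distance("roşu", "aroşu"), 2))  # 0.2
```

Averaging this value over the 108 Swadesh pairs for each dialect yields the per-dialect distances reported above.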
When using the phonetic form of the words, Megleno-Romanian words are closest to the Romanian words (0.39), followed by Aromanian (0.40) and Istro-Romanian (0.42). At the phonetic level, the distance between Romanian and the other three dialects is much smaller than the same distance measured at the orthographic level. In both situations, Istro-Romanian is farthest from the other dialects. One possible reason could be the geographical regions in which Istro-Romanian is spoken, farther from the regions where the other dialects are spoken. In Figure 3 we represent the dendrogram for the Romanian dialects, based on the distances computed on the orthographic version of the dataset.

Dialect Identification

We are interested to see if the orthographic or phonetic differences between Romanian and the other Romanian dialects (spoken to the South of the Danube) are dialect-specific (i.e., if they have enough discriminative power to identify the dialect to which a word belongs). To this end, we build a classification problem as follows: given the parallel list of 108 words (in all the Romanian dialects), we extract pairs of the form (romanian-word, dialect-word), where dialect ∈ {Istro-Romanian, Megleno-Romanian, Aromanian}. We thus obtain a dataset of 324 such input pairs. The goal is to automatically decide to which dialect the dialect-word belongs. The dialect identification problem is not trivial, and our goal in this paper is not to improve on the state-of-the-art methods in this research area, but to investigate the predictive power of the orthographic and phonetic differences between Romanian and its dialects. We use a methodology that has previously been used for discriminating between related and unrelated words, and for distinguishing the type of relationship between words (Ciobanu and Dinu, 2014b; Ciobanu and Dinu, 2015). We align the words using the Needleman-Wunsch alignment algorithm (Needleman and Wunsch, 1970) and extract n-gram features from the alignment of the words. Additionally, we extract n-grams of characters from the dialect-word. We search for the optimal n-gram size in {1, 2, 3, 4}, both for the n-grams extracted from the alignment and for the n-grams extracted from the dialect-word. We train a Logistic Regression classifier, using the implementation provided by Weka (Hall et al., 2009). Since our dataset is small, we evaluate the performance of the model with 5-fold cross-validation. For both experiments (orthographic and phonetic), n = 2 proves to be the optimal n-gram size. In Table 2 we report the most common 2-grams for each dialect (using the orthographic version of the words), and in Table 3 we show examples of word pairs aligned with the Needleman-Wunsch algorithm.
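The alignment-plus-n-gram feature extraction described above can be sketched as follows; the match/mismatch/gap scores are our assumptions for illustration, since the paper does not specify its alignment parameters:

```python
# Needleman-Wunsch global alignment plus 2-grams over the aligned pair.
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    m, n = len(a), len(b)
    score = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        score[i][0] = i * gap
    for j in range(n + 1):
        score[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
    out_a, out_b, i, j = [], [], m, n          # traceback
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i-1][j-1] + (
                match if a[i-1] == b[j-1] else mismatch):
            out_a.append(a[i-1]); out_b.append(b[j-1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i-1][j] + gap:
            out_a.append(a[i-1]); out_b.append("-"); i -= 1
        else:
            out_a.append("-"); out_b.append(b[j-1]); j -= 1
    return "".join(reversed(out_a)), "".join(reversed(out_b))

def alignment_2grams(a, b):
    """2-grams of aligned symbol pairs, e.g. (('r:r', 'o:o'), ...)."""
    al_a, al_b = needleman_wunsch(a, b)
    pairs = [f"{x}:{y}" for x, y in zip(al_a, al_b)]
    return [tuple(pairs[k:k + 2]) for k in range(len(pairs) - 1)]

print(alignment_2grams("roşu", "aroşu"))
```

The resulting 2-grams of aligned pairs, together with plain character 2-grams of the dialect word, form the binary features given to the logistic regression classifier.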
Results

In Table 4 we report the cross-validation results for dialect identification, for the orthographic version of the dataset (Table 4a) and for the phonetic version of the dataset (Table 4b). For the former, the best results, in terms of F-score, are obtained for Istro-Romanian (0.70), followed by Aromanian (0.60) and Megleno-Romanian (0.56). This shows that the Istro-Romanian dialect can be identified more easily, and that the orthographic features of the Istro-Romanian words have the highest predictive power. For the latter, the ranking is different: Aromanian is identified with the highest F-score (0.71), followed by Istro-Romanian (0.70), with Megleno-Romanian in last position (0.53). At the phonetic level, we notice that the Megleno-Romanian dialect is the most difficult to identify.

In Table 5 we report the confusion matrix for both experiments (orthographic and phonetic), giving the number of instances that are correctly classified and misclassified for each dialect. In both versions of the dataset, the maximum number of correctly classified instances is reported for Istro-Romanian (with a maximum of 88 for the phonetic version of the dataset). While for the phonetic version of the dataset only 3 Istro-Romanian words are classified as Aromanian, for the orthographic version of the dataset we notice an increase, with 10 Istro-Romanian words being classified as Aromanian. For Aromanian, most of the misclassified instances are labeled as Megleno-Romanian, in both versions of the dataset. For Megleno-Romanian, most of the misclassified instances in the orthographic version of the dataset are labeled as Aromanian (25), while for the phonetic version of the dataset most of the misclassified instances are labeled as Istro-Romanian (38).

Table 5: Confusion matrix for dialect identification using the orthographic (a) and the phonetic (b) form of the words (rows: true dialect; columns: predicted dialect).

(a) Orthographic        (b) Phonetic
     ME  AR  IS              ME  AR  IS
ME   65  25  18         ME   54  16  38
AR   34  62  12         AR   23  70  15
IS   23  10  75         IS   17   3  88

Conclusions

In this paper we conducted an initial study on the Romanian dialects. We analyzed the orthographic and phonetic differences between the Romanian dialects, using the Swadesh list and building a classification problem for dialect identification. The results obtained so far show that Istro-Romanian has more dialect-specific differences from Romanian, followed by Aromanian and Megleno-Romanian. The next steps in our investigation will be to conduct a similar study on corpora (Ciobanu and Dinu, 2014a) instead of word lists, as far as resources are available, and to assess the mutual intelligibility of the Romanian dialects. The necessity of such a study is increased by the fact that at least one of the Romanian dialects (namely Istro-Romanian) is today on the list of endangered languages, according to the UNESCO classification (Moseley, 2010).

Figure 1: Average word length, using the orthographic form of the words.
Figure 2: Average edit distance from Romanian, using the orthographic form of the words.
Figure 3: Dendrogram representing the hierarchical clustering using the farthest-neighbor algorithm and the orthographic form of the words as input.
Table 1: The average edit distance between the words.
Table 2: The most common 2-grams for each dialect.
Table 3: Alignment of the Romanian word roşu (meaning red) with its translations in the other Romanian dialects.
Table 4: Cross-validation results for dialect identification using the orthographic (a) and the phonetic (b) form of the words.
(Only the captions of Figures 1-3 and Tables 1-4 survive extraction.)

Footnotes: 1. The UNESCO Interactive Atlas of the World's Languages in Danger (Moseley, 2010) provides the following information for Istro-Romanian: severely endangered, with an estimated 300 first-language and 100 second-language speakers in Istria, plus 1,000 others living outside of Istria. 2. http://starling.rinet.ru/new100/main.htm

Acknowledgements

We thank the anonymous reviewers for their helpful and constructive comments. The research of Liviu P. Dinu was supported by a grant of the Romanian National Authority for Scientific Research, CNCS UEFISCDI, project number PN-II-ID-PCE-2011-3-0959.
References

Maryam Aminian, Mahmoud Ghoneim, and Mona Diab. 2014. Handling OOV Words in Dialectal Arabic to English Machine Translation. In Proceedings of the Workshop on Language Technology for Closely Related Languages and Language Variants, LT4CloseLang 2014, pages 99-108.
Matilda Caragiu Marioţeanu. 1975. Compendiu de Dialectologie Românǎ. Editura Ştiinţificǎ şi Enciclopedicǎ, Bucureşti.
Alina Maria Ciobanu and Liviu P. Dinu. 2014a. An Etymological Approach to Cross-Language Orthographic Similarity. Application on Romanian. In Proceedings of EMNLP 2014, pages 1047-1058.
Alina Maria Ciobanu and Liviu P. Dinu. 2014b. Automatic Detection of Cognates Using Orthographic Alignment. In Proceedings of ACL 2014, volume 2: Short Papers, pages 99-105.
Alina Maria Ciobanu and Liviu P. Dinu. 2015. Automatic Discrimination between Cognates and Borrowings. In Proceedings of ACL-IJCNLP 2015, volume 2: Short Papers, pages 431-437.
Antonella Delmestri and Nello Cristianini. 2012. Linguistic Phylogenetic Inference by PAM-like Matrices. Journal of Quantitative Linguistics, 19(2):95-120.
Ovid Densuşianu. 1901. Histoire de la Langue Roumaine, volume 1. E. Leroux.
Isidore Dyen, Joseph B. Kruskal, and Paul Black. 1992. An Indoeuropean Classification: a Lexicostatistical Experiment. Transactions of the American Philosophical Society, 82(5):1-132.
Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The WEKA Data Mining Software: An Update. SIGKDD Explorations, 11(1):10-18.
Brian D. Joseph. 1999. Romanian and the Balkans: Some Comparative Perspectives. In Sheila Embleton, John E. Joseph, and Hans-Josef Niederehe, editors, The Emergence of the Modern Language Sciences. John Benjamins Publishing Company.
Hiroshi Kanayama, Youngja Park, Yuta Tsuboi, and Dongmook Yi. 2014. Learning from a Neighbor: Adapting a Japanese Parser for Korean Through Feature Transfer Learning. In Proceedings of LT4CloseLang 2014, pages 2-12.
Grzegorz Kondrak. 2004. Combining Evidence in Cognate Identification. In Proceedings of the 17th Conference of the Canadian Society for Computational Studies of Intelligence, Canadian AI 2004, pages 44-59.
Vladimir I. Levenshtein. 1965. Binary Codes Capable of Correcting Deletions, Insertions, and Reversals. Soviet Physics Doklady, 10:707-710.
Paul Lewis, Gary Simons, and Charles Fennig. 2015. Ethnologue: Languages of the World, 18th edition. Summer Institute of Linguistics, Dallas, Texas.
April McMahon and Robert McMahon. 2003. Finding Families: Quantitative Methods in Language Classification. Transactions of the Philological Society, 101(1):7-55.
Christopher Moseley, editor. 2010. Atlas of the World's Languages in Danger, 3rd edition. UNESCO Publishing, Paris.
Preslav Nakov, Petya Osenova, and Cristina Vertan, editors. 2014. Proceedings of the Workshop on Language Technology for Closely Related Languages and Language Variants, LT4CloseLang 2014. Association for Computational Linguistics.
Saul B. Needleman and Christian D. Wunsch. 1970. A General Method Applicable to the Search for Similarities in the Amino Acid Sequence of Two Proteins. Journal of Molecular Biology, 48(3):443-453.
Sergiu Nisioi. 2014. On the Syllabic Structures of Aromanian. In Proceedings of the 8th Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities, LaTeCH 2014, pages 110-118.
Emil Petrovici. 1970. Studii de Dialectologie şi Toponimie. Editura Academiei, Bucureşti.
Sextil Puşcariu. 1976. Limba Românǎ. Editura Minerva.
Alexandru Rosetti. 1966. Istoria Limbii Române. Editura Ştiinţificǎ.
Carlo Tagliavini. 1972. Le Origini delle Lingue Neolatine. Casa editrice Patron.
Sara Tonelli, Emanuele Pianta, Rodolfo Delmonte, and Michele Brunelli. 2010. VenPro: A Morphological Analyzer for Venetan. In Proceedings of LREC 2010, pages 866-870.
Marcos Zampieri and Binyam Gebrekidan Gebre. 2012. Automatic Identification of Language Varieties: The Case of Portuguese. In Proceedings of KONVENS 2012, pages 233-237.
Marcos Zampieri, Binyam Gebrekidan Gebre, and Sascha Diwersy. 2013. N-Gram Language Models and POS Distribution for the Identification of Spanish Varieties. In Proceedings of TALN 2013, pages 580-587.
Marcos Zampieri, Liling Tan, Nikola Ljubešić, and Jörg Tiedemann, editors. 2014. Proceedings of the First Workshop on Applying NLP Tools to Similar Languages, Varieties and Dialects, VarDial 2014. Association for Computational Linguistics and Dublin City University.
235,097,299
[]
Measuring Biases of Word Embeddings: What Similarity Measures and Descriptive Statistics to Use?

Hossein Azarpanah (hossein.azarpanah@concordia.ca) and Mohsen Farhadloo (mohsen.farhadloo@concordia.ca)
John Molson School of Business, Concordia University, Montreal, QC, CA

Proceedings of the First Workshop on Trustworthy Natural Language Processing, June 10, 2021

Word embeddings are widely used in Natural Language Processing (NLP) for a vast range of applications. However, it has been consistently proven that these embeddings reflect the same human biases that exist in the data used to train them. Most of the introduced bias indicators for word embeddings are average-based indicators built on the cosine similarity measure. In this study, we examine the impacts of different similarity measures, as well as descriptive techniques other than averaging, in measuring the biases of contextual and non-contextual word embeddings. We show that the extent of revealed biases in word embeddings depends on the descriptive statistics and similarity measures used to measure the bias. We found that over the ten categories of word embedding association tests, the Mahalanobis distance reveals the smallest bias and the Euclidean distance reveals the largest bias in word embeddings. In addition, the contextual models reveal less severe biases than the non-contextual word embedding models, with GPT showing the fewest WEAT biases.

Introduction

Word embedding models including Word2Vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014), BERT (Devlin et al., 2018), ELMo (Peters et al., 2018), and GPT (Radford et al., 2018) have become popular components of many NLP frameworks and are widely used for many downstream tasks. However, these word representations preserve not only statistical properties of human language but also the human-like biases that exist in the data used to train them (Bolukbasi et al., 2016; Caliskan et al., 2017; Kurita et al., 2019; Basta et al., 2019; Gonen and Goldberg, 2019). It has also been shown that such biases propagate to downstream NLP tasks and have negative impacts on their performance (May et al., 2019; Leino et al., 2018). There are studies investigating how to mitigate the biases of word embeddings (Liang et al., 2020; Ravfogel et al., 2020).

Different approaches have been used to present and quantify corpus-level biases of word embeddings. Bolukbasi et al. (2016) proposed to measure the gender bias of word representations in Word2Vec and GloVe by calculating the projections onto principal components of the differences of embeddings of a list of male and female pairs. Basta et al. (2019) adapted the idea of "gender direction" of (Bolukbasi et al., 2016) to be applicable to contextual word embeddings such as ELMo: first, the gender subspace of the ELMo vector representations is calculated, and then the presence of gender bias in ELMo is identified. Gonen and Goldberg (2019) introduced a new gender bias indicator based on the percentage of socially-biased terms among the k-nearest neighbors of a target term and demonstrated its correlation with the gender-direction indicator. Caliskan et al.
(2017) developed the Word Embedding Association Test (WEAT) to measure bias by comparing two sets of target words with two sets of attribute words, and documented that Word2Vec and GloVe contain human-like biases such as gender and racial biases. May et al. (2019) generalized the WEAT test to phrases and sentences by inserting individual words from the WEAT tests into simple sentence templates and used them for contextual word embeddings. Kurita et al. (2019) proposed a new method to quantify bias in BERT embeddings based on its masked language model objective using simple template sentences. For each attribute word, using a simple template sentence, the normalized probability that BERT assigns to that sentence for each of the target words is calculated, and the difference is taken as the measure of the bias. Kurita et al. (2019) demonstrated that this probability-based method for quantifying bias in BERT was more effective than the cosine-based method.

Motivated by these recent studies, we comprehensively investigate different methods for bias exposure in word embeddings. In particular, we investigate the impacts of different similarity measures and descriptive statistics used to quantify the degree of association between the target sets and attribute sets in the WEAT. First, beyond cosine similarity, we study Euclidean, Manhattan, and Mahalanobis distances to measure the degree of association between a single target word and a single attribute word. Second, beyond averaging, we investigate minimum, maximum, median, and a discrete (grid-based) optimization approach that finds the minimum possible association between a single target word and the two attribute sets in each of the WEAT tests. We consistently compare these bias measures for different types of word embeddings, including non-contextual (Word2Vec, GloVe) and contextual ones (BERT, ELMo, GPT, GPT2).

Method

The Implicit Association Test (IAT) was first introduced by Greenwald et al. (1998a) in psychology to demonstrate the enormous differences in response time when participants are asked to pair two concepts they deem similar, in contrast to two concepts they find less similar. For example, when subjects are encouraged to work as quickly as possible, they are much more likely to label flowers as pleasant and insects as unpleasant. In the IAT, being able to pair a concept with an attribute quickly indicates that the concept and attribute are linked together in the participants' minds. The IAT has widely been used to measure and quantify the strength of a range of implicit biases and other phenomena, including attitudes and stereotype threat (Karpinski and Hilton, 2001; Kiefer and Sekaquaptewa, 2007; Stanley et al., 2011). Inspired by the IAT, Caliskan et al. (2017) introduced WEAT to measure the associations between two sets of target concepts and two sets of attributes in word embeddings learned from large text corpora. A hypothesis test is conducted to demonstrate and quantify the bias. The null hypothesis states that there is no difference between the two sets of target words in terms of their relative distance/similarity to the two sets of attribute words. A permutation test is performed to measure the null hypothesis's likelihood. This test computes the probability that random permutations of the target words would produce a greater difference than the observed difference. Let X and Y be two sets of target word embeddings and A and B be two sets of attribute embeddings.
The test statistic is defined as:

$$s(X, Y, A, B) = \Big| \sum_{x \in X} s(x, A, B) - \sum_{y \in Y} s(y, A, B) \Big|$$

where:

$$s(w, A, B) = f_{a \in A}\big(s(\vec{w}, \vec{a})\big) - f_{b \in B}\big(s(\vec{w}, \vec{b})\big) \quad (1)$$

In other words, s(w, A, B) quantifies the association of a single word w with the two sets of attributes, and s(X, Y, A, B) measures the differential association of the two sets of targets with the two sets of attributes. Denoting all the partitions of X ∪ Y by (X_i, Y_i)_i, the one-sided p-value of the permutation test is:

$$\Pr_i\big[s(X_i, Y_i, A, B) > s(X, Y, A, B)\big]$$

The magnitude of the association of the two target sets with the two attribute sets can be measured with the effect size:

$$d = \frac{\big| \operatorname{mean}_{x \in X} s(x, A, B) - \operatorname{mean}_{y \in Y} s(y, A, B) \big|}{\operatorname{std\text{-}dev}_{w \in X \cup Y} s(w, A, B)}$$

It is worth mentioning that d measures how separated two distributions are: it is the standardized difference of the means of the two distributions (Cohen, 2013). Controlling for significance, a larger effect size reflects a more severe bias. WEAT and almost all the studies inspired by it (Garg et al., 2018; Brunet et al., 2018; Gonen and Goldberg, 2019; May et al., 2019) use the following approach to measure the association of a single target word with the two sets of attributes (equation 1). First, they use cosine similarity to measure the target word's similarity to each word in the attribute sets. Then they calculate the average of the similarities over each attribute set. In this paper we investigate the impact of other functions such as min(·), mean(·), median(·), or max(·) for the function f(·) in equation (1) (originally only mean(·) has been used). In addition to cosine similarity, we consider Euclidean and Manhattan distances, as well as the following measures for s(w, a) in equation (1).

Mahalanobis distance: introduced by P. C. Mahalanobis (Mahalanobis, 1936), this distance measures the distance of a point from a distribution:

$$s(\vec{w}, \vec{a}) = \big((\vec{w} - \vec{a})^T \Sigma_A^{-1} (\vec{w} - \vec{a})\big)^{\frac{1}{2}}$$

It is worth noting that the Mahalanobis distance takes into account the distribution of the set of attributes while measuring the association of the target word w with an attribute vector.

Discrete optimization of the association measure: In equation (1), s(w, A, B) quantifies the association of a single target word w with the two sets of attributes. To quantify the minimum possible association of a target word w with the two sets of attributes, we first calculate the distance of w from all attribute words in A and B, then compute all possible differences and take the minimum:

$$s(w, A, B) = \min_{a \in A,\, b \in B} \big| s(\vec{w}, \vec{a}) - s(\vec{w}, \vec{b}) \big| \quad (2)$$

Biases studied

We studied all ten bias categories introduced in the IAT (Greenwald et al., 1998a) and replicated in WEAT to measure the biases in word embeddings. The ten WEAT categories are briefly introduced in Table 1. For more detail and examples of target and attribute words, please check Appendix A. Although WEAT 3 to 5 have the same names, they have different target and attribute words.

Table 1: The associations studied in the WEAT.

As described in section 2, we need each attribute set's covariance matrix to compute the Mahalanobis distance. To get a stable covariance estimate, given the high dimension of the embeddings, we first created larger attribute sets by adding synonym terms. Next, we estimated sparse covariance matrices, as the number of samples in each attribute set is smaller than the number of features. To enforce sparsity, we estimated the l1 penalty using k-fold cross validation with k = 3.
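A minimal sketch of this covariance estimation and the resulting Mahalanobis association follows; we use scikit-learn's GraphicalLassoCV as one standard l1-penalized sparse estimator (the paper does not name its implementation), and the toy dimensions are illustrative:

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 10))   # embeddings of one (synonym-expanded) attribute set
w = rng.normal(size=10)         # embedding of one target word

# l1-penalized sparse covariance, with the penalty chosen by 3-fold CV
est = GraphicalLassoCV(cv=3).fit(A)
precision = est.precision_      # estimate of Sigma_A^{-1}

def mahalanobis(w, a, precision):
    d = w - a
    return float(np.sqrt(d @ precision @ d))

# Association of w with the attribute set, e.g. the mean over all a in A
print(np.mean([mahalanobis(w, a, precision) for a in A]))
```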
Results of experiments

We examined the 10 different types of biases in WEAT (Table 1) for the word embedding models listed in Table 2. We used publicly available pre-trained models. For the contextual word embeddings, we used single-word sentences as input instead of the simple template sentences used in other studies (May et al., 2019; Kurita et al., 2019). Simple template sentences such as "this is TARGET" or "TARGET is ATTRIBUTE" do not really provide any context that exercises the contextual capability of embeddings such as BERT or ELMo. This way, the comparison between the contextual and non-contextual embeddings is fairer, as both only receive the target or attribute terms as input. For each model, we performed the WEAT tests using the four similarity metrics mentioned in section 2: cosine, Euclidean, Manhattan, and Mahalanobis. For each similarity metric, we also used min(·), mean(·), median(·), or max(·) as f(·) in equation (1). Also, as explained in section 2, we discretely optimized the association measure and found the minimum association in equation (1). In these experiments (Table 3 and Table 4), larger and more significant effect sizes imply more severe biases.

Impacts of different descriptive statistics: Our first goal was to report the changes in the measured biases when the descriptive statistic changes. The range of effect sizes was from 0.00 to 1.89 (µ = 0.65, σ = 0.5). Our findings show that the mean has the best capability to reveal biases, as it produces the most significant effect sizes (µ = 0.8, σ = 0.52) across models and distance measures. The median is close to the mean, with (µ = 0.74, σ = 0.48) over all its effect sizes. The effect sizes for minimum (µ = 0.68, σ = 0.48) and maximum (µ = 0.65, σ = 0.48) are close to each other, but smaller than those for mean and median. The discretely optimized association measure (Eq. 2) produces the smallest effect sizes (µ = 0.39, σ = 0.3) and reveals the fewest implicit biases. These differences resulting from different descriptive statistics in the association measure (Eq. (1)) show that the revealed biases depend on the statistics applied to measure the bias. For example, for the cosine distance with Word2Vec, if we change the descriptive statistic from mean to minimum, the biases for WEAT 3 and WEAT 4 become insignificant (no bias is reported). As another example, for the GPT model, while the result of mean cosine is not significant for WEAT 3 and WEAT 4, both become significant for median cosine. Moreover, for almost all models, the effect size of the discretely optimized minimum distance is not significant. Our intention in considering this statistic was to report the minimum possible association of a target word with the attribute sets. If this measure is used for reporting biases, one could misleadingly claim that there is no bias.
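The core WEAT computation with a pluggable distance and descriptive statistic can be sketched as follows (a simplified version of Eq. (1) and the effect size above; the toy vectors are illustrative):

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def assoc(w, A, B, sim=cosine, stat=np.mean):
    """Eq. (1): s(w, A, B) with a pluggable similarity and statistic f."""
    return stat([sim(w, a) for a in A]) - stat([sim(w, b) for b in B])

def weat_effect_size(X, Y, A, B, sim=cosine, stat=np.mean):
    s = [assoc(w, A, B, sim, stat) for w in np.vstack([X, Y])]
    sx, sy = s[:len(X)], s[len(X):]
    return (np.mean(sx) - np.mean(sy)) / np.std(s, ddof=1)

rng = np.random.default_rng(1)
X, Y, A, B = (rng.normal(size=(8, 50)) for _ in range(4))
print(weat_effect_size(X, Y, A, B))                  # mean + cosine
print(weat_effect_size(X, Y, A, B, stat=np.median))  # median + cosine
```

Replacing `sim` with a Euclidean, Manhattan, or Mahalanobis distance (with a sign flip, since for distances a smaller value means a stronger association) reproduces the other experimental configurations.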
Impacts of different similarity measures: The effect sizes for cosine, Manhattan, and Euclidean are closer to one another and greater than those for the Mahalanobis distance (cosine: (µ = 0.72, σ = 0.49), Euclidean: (µ = 0.67, σ = 0.5), Manhattan: (µ = 0.63, σ = 0.48), Mahalanobis: (µ = 0.58, σ = 0.45)). The Mahalanobis distance also detects the fewest significant bias types across all models. As an example, while the mean and median effect sizes for WEAT 3 and WEAT 5 in GloVe and Word2Vec are mostly significant for cosine, Euclidean, and Manhattan, the same results are not significant for the Mahalanobis distance. That means that with the Mahalanobis distance as the measure of bias, no bias is reported for the WEAT 3 and WEAT 5 tests. This emphasizes the importance of the chosen similarity measure in detecting the biases of word embeddings. More importantly, as the Mahalanobis distance considers the distribution of the attributes in measuring the distance, it may be a better choice than the other similarity measures for measuring and revealing biases.

Biases in different word embedding models: Using any combination of descriptive statistics and similarity measures, all the contextualized models have fewer significant biases than GloVe and Word2Vec, with GPT showing the fewest. In Table 3, the number of tests with significant implicit biases out of the 10 WEAT tests, along with the mean and standard deviation of the effect sizes, is reported for all embedding models. The complete list of effect sizes along with their p-values is provided in Table 4. Following our findings in the previous sections, we choose the mean of Euclidean distances to reveal biases. By this measure, GloVe and Word2Vec show the most significant biases, with 9 and 7 significant biases out of the 10 WEAT categories (Table 3). Using the mean of Euclidean distances, our results confirm all the results of Caliskan et al. (2017), who used the mean of cosine similarities in all WEAT tests. The difference is that with the mean Euclidean measure, the biases are revealed as more severe (smaller p-values). Using the mean of Euclidean distances, GPT and ELMo show the fewest implicit biases: the GPT model shows bias in WEAT 2, 3, and 5, and ELMo's significant biases are in WEAT 1, 3, and 6. Using mean Euclidean, almost all models (except ELMo) confirm the existence of a bias in WEAT 3 to 5. Moreover, all contextualized models found no bias associating female with arts and male with science (WEAT 7), mental diseases with temporary attributes and physical diseases with permanent attributes (WEAT 9), or young people's names with pleasant attributes and old people's names with unpleasant attributes (WEAT 10).

Table 3: Number of revealed biases out of the 10 WEAT bias types for the studied word embeddings, along with the (µ, σ) of their effect sizes. The larger the effect size, the more severe the bias.

Conclusions

We studied the impacts of different descriptive statistics and similarity measures on association tests for measuring biases in contextualized and non-contextualized word embeddings. Our findings demonstrate that the detected biases depend on the choice of association measure. Based on our experiments, the mean reveals more severe biases and the discretely optimized version reveals fewer severe biases. In addition, the cosine distance reveals more severe biases and the Mahalanobis distance reveals less severe ones. Reporting biases with the mean of Euclidean/Mahalanobis distances identifies more/less severe biases in the models. Furthermore, contextual models show fewer biases than non-contextual ones across all 10 WEAT tests, with GPT showing the fewest.

Table 4: WEAT effect sizes. *: significance at 0.01; **: significance at 0.001; ***: significance at 0.0001; ****: significance at 0.00001. (Only the caption survives extraction.)
WEAT Association 1 1Flowers vs insects with pleasant vs unpleasant 2 Instruments vs weapons with pleasant vs unpleasant 3 Eur.-American vs Afr.-American names with Pleasant vs unpleasant (Greenwald et al., 1998b) 4 Eur.-American vs Afr.-American names (Bertrand and Mullainathan, 2004) with Pleasant vs unpleasant (Greenwald et al., 1998b) 5 Eur.-American vs Afr.-American names (Bertrand and Mullainathan, 2004) with Pleasant vs unpleasant (Nosek et al., 2002) 6 Male vs female names with Career vs family 7 Math vs arts with male vs female terms 8 Science vs arts with male vs female terms 9 Mental vs physical disease with temporary vs permanent 10 Young vs old people's name with pleasant vs unpleasant Table 2 : 2Word embedding models, used representations, and their dimensions. A The studied associations: 10 WEAT categoriesWEATAssociation N T N A Evaluating the underlying gender bias in contextualized word embeddings. Christine Basta, Marta R Costa-Jussà, Noe Casas, arXiv:1904.08783arXiv preprintChristine Basta, Marta R Costa-jussà, and Noe Casas. 2019. Evaluating the underlying gender bias in contextualized word embeddings. arXiv preprint arXiv:1904.08783. Are emily and greg more employable than lakisha and jamal? a field experiment on labor market discrimination. Marianne Bertrand, Sendhil Mullainathan, American economic review. 944Marianne Bertrand and Sendhil Mullainathan. 2004. Are emily and greg more employable than lakisha and jamal? a field experiment on labor market dis- crimination. American economic review, 94(4):991- 1013. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. Tolga Bolukbasi, Kai-Wei Chang, Y James, Venkatesh Zou, Adam T Saligrama, Kalai, Advances in neural information processing systems. Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Ad- vances in neural information processing systems, pages 4349-4357. Marc-Etienne Brunet, Colleen Alkalay-Houlihan, Ashton Anderson, Richard Zemel, arXiv:1810.03611Understanding the origins of bias in word embeddings. arXiv preprintMarc-Etienne Brunet, Colleen Alkalay-Houlihan, Ash- ton Anderson, and Richard Zemel. 2018. Under- standing the origins of bias in word embeddings. arXiv preprint arXiv:1810.03611. Semantics derived automatically from language corpora contain human-like biases. Aylin Caliskan, Joanna J Bryson, Arvind Narayanan, Science. 3566334Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186. Statistical power analysis for the behavioral sciences. Jacob Cohen, Academic pressJacob Cohen. 2013. Statistical power analysis for the behavioral sciences. Academic press. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, arXiv:1810.04805Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprintJacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of. Nikhil Garg, Londa Schiebinger, Dan Jurafsky, James Zou, Sciences. 11516Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. 
Pro- ceedings of the National Academy of Sciences, 115(16):E3635-E3644. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. Hila Gonen, Yoav Goldberg, arXiv:1903.03862arXiv preprintHila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. arXiv preprint arXiv:1903.03862. Measuring individual differences in implicit cognition: the implicit association test. Debbie E Anthony G Greenwald, Jordan Lk Mcghee, Schwartz, Journal of personality and social psychology. 7461464Anthony G Greenwald, Debbie E McGhee, and Jor- dan LK Schwartz. 1998a. Measuring individual dif- ferences in implicit cognition: the implicit associa- tion test. Journal of personality and social psychol- ogy, 74(6):1464. Measuring individual differences in implicit cognition: the implicit association test. Debbie E Anthony G Greenwald, Jordan Lk Mcghee, Schwartz, Journal of personality and social psychology. 7461464Anthony G Greenwald, Debbie E McGhee, and Jor- dan LK Schwartz. 1998b. Measuring individual dif- ferences in implicit cognition: the implicit associa- tion test. Journal of personality and social psychol- ogy, 74(6):1464. Journal of personality and social psychology. Andrew Karpinski, James L Hilton, 81774Attitudes and the implicit association testAndrew Karpinski and James L Hilton. 2001. Attitudes and the implicit association test. Journal of person- ality and social psychology, 81(5):774. Implicit stereotypes and women's math performance: How implicit gender-math stereotypes influence women's susceptibility to stereotype threat. K Amy, Denise Kiefer, Sekaquaptewa, Journal of experimental social psychology. 435Amy K Kiefer and Denise Sekaquaptewa. 2007. Im- plicit stereotypes and women's math performance: How implicit gender-math stereotypes influence women's susceptibility to stereotype threat. Journal of experimental social psychology, 43(5):825-832. Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, Yulia Tsvetkov, arXiv:1906.07337Measuring bias in contextualized word representations. arXiv preprintKeita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring bias in con- textualized word representations. arXiv preprint arXiv:1906.07337. Klas Leino, Emily Black, Matt Fredrikson, Shayak Sen, Anupam Datta, arXiv:1812.08999Feature-wise bias amplification. arXiv preprintKlas Leino, Emily Black, Matt Fredrikson, Shayak Sen, and Anupam Datta. 2018. Feature-wise bias ampli- fication. arXiv preprint arXiv:1812.08999. Irene Mengze Paul Pu Liang, Emily Li, Zheng, Chong Yao, Ruslan Lim, Louis-Philippe Salakhutdinov, Morency, arXiv:2007.08100Towards debiasing sentence representations. arXiv preprintPaul Pu Liang, Irene Mengze Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, and Louis-Philippe Morency. 2020. Towards debi- asing sentence representations. arXiv preprint arXiv:2007.08100. On the generalized distance in statitics. Prasanta Chandra Mahalanobis, Proceedings of the National Institute of Sciences of India. the National Institute of Sciences of IndiaPrasanta Chandra Mahalanobis. 1936. On the gener- alized distance in statitics. Proceedings of the Na- tional Institute of Sciences of India, pages 49-55. On measuring social biases in sentence encoders. Chandler May, Alex Wang, Shikha Bordia, Rachel Samuel R Bowman, Rudinger, arXiv:1903.10561arXiv preprintChandler May, Alex Wang, Shikha Bordia, Samuel R Bowman, and Rachel Rudinger. 
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111-3119.
Brian A. Nosek, Mahzarin R. Banaji, and Anthony G. Greenwald. 2002. Harvesting implicit group attitudes and beliefs from a demonstration web site. Group Dynamics: Theory, Research, and Practice, 6(1):101.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null it out: Guarding protected attributes by iterative nullspace projection. arXiv preprint arXiv:2004.07667.
Damian A. Stanley, Peter Sokol-Hessner, Mahzarin R. Banaji, and Elizabeth A. Phelps. 2011. Implicit race attitudes predict trustworthiness judgments and economic trust decisions. Proceedings of the National Academy of Sciences, 108(19):7710-7715.
32,007,803
A Calibration Method for the Evaluation of Sentiment Analysis
Sentiment analysis is the computational task of extracting sentiment from a text document - for example whether it expresses a positive, negative or neutral opinion. Various approaches have been introduced in recent years, using a range of different techniques to extract sentiment information from a document. Measuring these methods against a gold standard dataset is a useful way to evaluate such systems. However, different sentiment analysis techniques represent sentiment values in different ways, such as discrete categorical classes or continuous numerical sentiment scores. This creates a challenge for evaluating and comparing such systems; in particular, assessing numerical scores against datasets that use fixed classes is difficult, because the numerical outputs have to be mapped onto the ordered classes. This paper proposes a novel calibration technique that uses precision vs. recall curves to set class thresholds to optimize a continuous sentiment analyser's performance against a discrete gold standard dataset. In experiments mapping a continuous score onto a three-class classification of movie reviews, we show that calibration results in a substantial increase in f-score when compared to a non-calibrated mapping.
[ 3191704, 7105713, 3181362, 3264224 ]
A Calibration Method for the Evaluation of Sentiment Analysis
F. Sharmila Satthar, Computing, Engineering and Mathematics, University of Brighton, Brighton, UK
Roger Evans (r.p.evans@brighton.ac.uk), Computing, Engineering and Mathematics, University of Brighton, Brighton, UK
Gulden Uchyigit (g.uchyigit@brighton.ac.uk), Computing, Engineering and Mathematics, University of Brighton, Brighton, UK
Proceedings of Recent Advances in Natural Language Processing, Varna, Bulgaria, Sep 4-6 2017. 10.26615/978-954-452-049-6_084

Sentiment analysis is the computational task of extracting sentiment from a text document - for example whether it expresses a positive, negative or neutral opinion. Various approaches have been introduced in recent years, using a range of different techniques to extract sentiment information from a document. Measuring these methods against a gold standard dataset is a useful way to evaluate such systems. However, different sentiment analysis techniques represent sentiment values in different ways, such as discrete categorical classes or continuous numerical sentiment scores. This creates a challenge for evaluating and comparing such systems; in particular, assessing numerical scores against datasets that use fixed classes is difficult, because the numerical outputs have to be mapped onto the ordered classes. This paper proposes a novel calibration technique that uses precision vs. recall curves to set class thresholds to optimize a continuous sentiment analyser's performance against a discrete gold standard dataset. In experiments mapping a continuous score onto a three-class classification of movie reviews, we show that calibration results in a substantial increase in f-score when compared to a non-calibrated mapping.

1 Introduction

Sentiment analysis is the computational study of people's opinions, appraisals and emotional attitudes toward entities, events and their attributes. The sentiment analysis task involves classifying texts according to the sentiment content they contain. Sentiment analysis is a very active research area in natural language processing, with many research projects working on building sentiment classifiers using different techniques and algorithms. Evaluation is an important process when estimating the performance of text/data classification in information retrieval or natural language processing systems. The accuracy of a (binary) classifier is typically measured based on its precision, recall and f-score values when applied to a gold standard dataset. This approach has been adopted for the evaluation of sentiment analysis systems too (Turney, 2002; Pang et al., 2002; Nasukawa and Yi, 2003; Prabowo and Thelwall, 2009), but it is complicated by the fact that sentiment analysis is usually a multi-class classification task. Many sentiment analysis approaches focus on three classes, such as positive, negative and neutral. However, Saif et al. (2016) introduced an extra class, in addition to the neutral class, called mixed-sentiment, which is a mixture of positive and negative opinions, while Pang and Lee (2005) and Nakov et al. (2016) explored 4 or 5 star scales/classifications. To evaluate these types of multi-class classification tasks, precision, recall and f-score values are calculated for each class separately, and the performance measures for the whole system are then calculated by averaging those values using micro or macro-averaging (Prabowo and Thelwall, 2009).
Most supervised machine learning methods for sentiment analysis produce categorical outputs such as positive, negative and neutral, with no assumptions about the relationship between classes; they simply map texts into classes by associating text features with class labels. But other multi-class systems use rated or scaled methods, so that their categorical outputs are implicitly ordered in a natural 'sentiment order' based on sentiment polarity and/or magnitude/intensity, such as the following examples:

Positive > Neutral > Negative
Strong-Positive > Positive > Weak-Positive > Neutral > Weak-Negative > Negative > Strong-Negative
3 stars > 2 stars > 1 star

In addition, some sentiment analysis applications are based more explicitly on sentiment scores, rather than sentiment classes, and produce numerical values with positive and negative signs as the output for a given text, such as +0.987, −0.786 ... or +187, −243 ... etc. Such methods typically use the sign to indicate the polarity of the given text and numerical values to define the sentiment strength (generally over a system-dependent range), with a sentiment value of 0 indicating a neutral text. A simple mapping from such scores to a 3-class sentiment model just uses the sign (+, 0, −) to identify sentiment classes (positive, neutral, negative). However, there is no correspondingly simple way to use the magnitude to extend this to more classes (such as 'strong positive', 'weak positive', 'positive' ... etc.), and no clear justification for the implicit claim that neutral is a single point (0). This paper introduces a method to address these concerns, by calibrating the mapping from a numerical score to a semantic class in a way that optimises the system's performance as a multi-class classifier.

To transform a numeric scale to an ordinal (categorical) scale, boundaries (upper and lower) for each sentiment class need to be identified from the given numeric scale. These boundary values are 'cut-off values' for the sentiment classes, and are the parameters for a multi-class sentiment classification system based on the numerical scores. This paper proposes new techniques to assign cut-off values for each class using a learning-based evaluation technique. This transformation allows us to both optimise and evaluate a system that gives numeric outputs against a gold standard dataset that contains fixed categorical outputs. We use evaluation performance measures (precision and recall) on a training subset of the dataset to adjust the parameters to produce an optimal result, by using Precision vs. Recall (PR) curve visualisation. The parameters are optimised to give the best performance on the training set, and then evaluated using the test set. In addition, we can determine how far misclassified texts deviate from the actual classes in multi-class ordered classification tasks, by computing the macro-averaged mean absolute error, which is a popular approach for ordinal classification (Nakov et al., 2016; Baccianella et al., 2009; Gaudette and Japkowicz, 2009).

We demonstrate our technique for tuning the parameters using the Galadriel sentiment analysis system (Satthar, 2015), which we built for sentiment analysis using an inheritance-based lexicon. Galadriel is an example of a class of systems which calculate sentiment scores by combining raw lexical scores using a range of arithmetic rules (summing, scaling, averaging etc.). The final output of Galadriel for a text is a signed real number which reflects the sentiments expressed by the lexical items in quite a complex way, making the interpretation of scores as classes challenging. The calibration method achieves this mapping in an optimal way.
In this paper, section 2 discusses relevant previous research, in particular pre-evaluation processes and some general methods involved in sentiment classification (section 2.1) and the use of the PR curve for evaluation (section 2.2). In section 3, we present our novel techniques for tuning the parameters. In section 4, we present our experiments with the Galadriel system, and the results of optimising cut-off values for sentiment classes. Section 5 compares the evaluation results using the cut-off values computed in the previous section with evaluation without calibration. Finally, section 6 provides the conclusion.

2 Related Work

2.1 Approaches to sentiment classification

Sentiment classification is most simply expressed as a two-class (positive and negative) or three-class (including neutral) classification problem. In recent work, sentiment analysis researchers have also been interested in classification with more than three classes, such as strong positive to strong negative and scale-1 to scale-5 (Aly, 2005; Lee and Grafe, 2010; Pang and Lee, 2005). For supervised machine learning methods, the classes come directly from the labelled training data, which means that such systems can directly produce positive or negative labelled outputs without any direct interpretation of what the classes 'mean' (Pang et al., 2002; Hsu et al., 2010). Similarly, unsupervised learning methods directly produce positive or negative labelled outputs using different techniques and algorithms such as k-means, TF-IDF and PMI-IR (Turney, 2002; Zagibalov and Carroll, 2008; Unnisa et al., 2016), but again without a clear interpretation of the classes identified.

Lexicon-based approaches (as well as some unsupervised learning methods such as Turney (2002)) have proceeded by calculating the semantic orientation (a numerical score) and deciding the polarity of the document depending on its sign, and the sentiment strength based on its magnitude. Such methods calculate the semantic orientation of a document by aggregating the semantic orientation of words or phrases, using various arithmetic combinations of scores (Taboada et al., 2011; Palanisamy et al., 2013). Sentiment analysis approaches based on semantic orientation use different semantic dictionaries (lists of sentiment words/lexical items with their semantic orientation or sentiment scores) to determine each individual word's semantic orientation. The range of the sentiment scores assigned to words in these dictionaries varies considerably. For instance, Taboada et al. (2011) used a dictionary with a sentiment score range between −5 and +5, whereas Esuli and Sebastiani (2007) assign positive and negative sentiment words scores between 0 and 1. Table 1 shows the different semantic scores for some common sentiment words in a number of recent semantic dictionaries.^1

Word       | Bing Liu | Harvard GI | Vader | SentiWordNet | SenticNet | Taboada
good       | +1       | POS        | +1.9  | 0.75 (POS)   | +0.883    | +3
glad       | +1       | POS        | +2.0  | 0.5 (POS)    | +0.413    | +2
incapable  | −1       | NEG        | −1.6  | 0.625 (NEG)  | −0.736    | −1
sad        | −1       | NEG        | −2.1  | 0.25 (NEG)   | −0.306    | −2
bad        | −1       | NEG        | −2.5  | 0.875 (NEG)  | −0.367    | −3

Table 1: Some lexical entries with their semantic orientation according to different lexicon dictionaries.

In addition, the aggregation operations involved also vary, and do not always have straightforward semantic interpretations (for example, sentiment negation is achieved in some systems by inverting the score polarity, and in others by shifting the value towards zero). Comparing the outputs of such systems, or evaluating them against a gold standard, is therefore very challenging.
2.2 The Precision vs. Recall Curve

The use of graphical representations to visualise classifier performance is well established. The Receiver Operating Characteristic (ROC) curve, originally used in signal detection theory (Egan, 1975), has also been adopted to visualise classifier performance in text classification. The ROC curve is created by plotting true positive rates (TPR) against false positive rates (FPR) at various thresholds, and the area under the curve has been used as a measure of accuracy in evaluation methods. More recently, researchers have used the Precision-Recall (PR) curve, which plots precision against recall (the true positive rate), and taken the area under this curve as a measure of performance (West et al., 2014; Manning and Schütze, 1999; Raghavan et al., 1989). Both curves can be used to visualise classifier performance; however, PR curves produce a more informative visualisation, particularly for highly imbalanced data sets (Davis and Goadrich, 2006). Moreover, a PR curve is more useful for problems where one class is considered to be more important than other classes. On the other hand, there are issues with PR curves too; for example, unlike in ROC space, it is complicated to interpolate between two points in PR space. Furthermore, the area under a PR curve produces the arithmetic mean, whereas the also commonly used f-score is the harmonic mean of precision and recall.^2 However, these issues do not affect this work, as in our calibration method we only use visualisation of the PR curve to set values for the boundaries of sentiment classes.

Figure 1: k-fold class mixtures, to produce PR curves for each cut-off candidate. (a) Folds 1 to k, each with d documents of which m_i belong to the n-th class. (b) Example histograms and cut-off points for different balances between the n-th and (n+1)-th classes in different folds.

3 A Calibration Method for Cut-off Values of Sentiment Classes

In this section, we introduce a calibration method for setting sentiment class cut-off values from numerical sentiment scores using learning-based techniques. We use a training data set to assign boundaries of sentiment classes, where the classes have a natural 'sentiment order'. Our method is inspired by the cross-validation method. We calculate the upper and lower boundary values of one sentiment class at a time, in sentiment order. For instance, in a three-class classification, we first calculate boundary values for negative (the 1st class), then neutral (the 2nd class) and then positive (the 3rd class). We then determine the optimal cut-off value between these two boundaries to delimit the classes. To compute the cut-off value, we first reduce the multi-class problem to a standard binary class problem. That is, we consider the n-th order class and the (n+1)-th order class to compute the cut-off value between those two classes. We select documents belonging to the n-th and (n+1)-th classes from the training dataset and run our semantic classifier over these two sets. As a result, we get a set of numerical scores, one for each document in each class.
We consider the maximum score for the n-th class, Max_n, and the minimum score for the (n+1)-th class, Min_{n+1}. The cut-off value, C_{n/n+1}, for those two classes should lie between these two scores.^3 We plot different PR curves for candidate cut-off values between these scores to determine the cut-off value which gives optimum performance.

3 Note that the classes' score ranges may overlap: Max_n may be greater than Min_{n+1}.

For a given candidate cut-off value, the PR curve plots the classifier system's ability to classify using that cut-off as the class boundary, for different mixtures of the two classes. The data set is divided into k subsets (folds) with an equal number (d) of documents. We assume the data set is normally distributed. Each subset contains n-th class documents and (n+1)-th class documents in different proportions. For example, the 1st subset contains m_1 n-th class documents and (d − m_1) (n+1)-th class documents, the 2nd subset contains m_2 n-th class documents and (d − m_2) (n+1)-th class documents, and the k-th subset contains m_k n-th class documents and (d − m_k) (n+1)-th class documents (see figure 1a). Each fold represents a different distribution of sentiment scores for the two classes (see figure 1b), and hence a different precision and recall score for each class for the given cut-off. We then calculate the macro-average precision and recall across the two classes; the PR curve plots these different precision/recall values for a single cut-off value across all the folds.

The best cut-off value produces high and almost equal values of precision and recall. Therefore, the PR curve of the best cut-off value lies towards the top right-hand corner of the graph as well as close to the diagonal line (p = r). We originally hoped that we could choose the best PR curve by visual inspection, but in practice, while this is sufficient to rule out many candidates, the final choice was also supported by additionally plotting average recall and precision for each PR curve.

Once the best cut-off value, C_{n/n+1}, has been established, we repeat the process for the other class boundaries (C_{n+1/n+2} etc.). These cut-off values can then be used to map the numerical scores to classes in an optimal way. For example, in the three-class negative, neutral, positive case, with classes 1, 2 and 3, we use C_{1/2} as the boundary between negative and neutral, and C_{2/3} as the boundary between neutral and positive, and classify as follows:

$$
S_i =
\begin{cases}
\text{positive} & \text{if } Tot_i > C_{2/3}\\
\text{neutral} & \text{if } C_{1/2} < Tot_i < C_{2/3}\\
\text{negative} & \text{if } Tot_i < C_{1/2}
\end{cases}
\qquad (1)
$$

where S_i is the sentiment class of document i and Tot_i is the total sentiment score of document i.
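To make equation (1) concrete, here is a minimal sketch of the mapping in code. This is an illustration, not code from the paper; the default cut-off values are the ones calibrated in section 4 below (C_{0/1} = −0.65, C_{1/2} = +1.05), and the function name is an invention for this sketch.

```python
# Minimal sketch of the class mapping in equation (1); names are
# illustrative, not from the paper.

def classify(total_score, c_lower=-0.65, c_upper=1.05):
    """Map a continuous sentiment score Tot_i to an ordered class."""
    if total_score > c_upper:
        return "positive"
    if total_score < c_lower:
        return "negative"
    return "neutral"

print([classify(s) for s in (-1.8, 0.3, 2.4)])
# ['negative', 'neutral', 'positive']
```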
4 Experiments and Results

To test the above method, we performed an experiment with the Galadriel sentiment analysis system (Satthar, 2015) on a scaled dataset^4 used by Pang and Lee (2005). The dataset is a collection of movie reviews labelled with values of 0, 1, 2. When analysed by the Galadriel system, the documents in this dataset return scores ranging between −10 and +25. The purpose of this experiment was to show that by assigning optimal cut-off values for Galadriel scores according to this scaled dataset, we can map the system's output into this three-class system in a way which maximises its performance as a sentiment classifier.

We selected 300 documents of approximately equal length from the dataset (100 documents for each scale value, in an approximately normal distribution). First we divided the dataset into two parts, one for training and the other for testing. We used 240 documents (80 documents from each scale) as our training set. First, we computed boundaries for the scale-0 class, then for the scale-1 class and finally for the scale-2 class. Since scale-0 is the lowest class, it is not necessary to compute the lower boundary for scale-0.

To determine the upper boundary of the Galadriel score for scale-0, the cut-off value of the Galadriel score between scale-0 and scale-1 needed to be computed. For this, we used our scale-0 and scale-1 training documents (160 documents). We found that the maximum normalised Galadriel score for scale-0 documents was +0.17 and the minimum Galadriel score for scale-1 documents was −1.41 (rounded to two decimals). Therefore, we set up candidate cut-off values (C_i) between −1.45 and +0.2 at equal intervals of 0.05, i.e. −1.45, −1.40, −1.35, −1.30, −1.25, −1.20, −1.15, −1.10, −1.05, −1.00, ..., −0.05, 0.00, +0.05, +0.10, +0.15, +0.20. Then, for each candidate cut-off value, we calculated precision and recall values for 5 sub-training data sets, each subset containing a mixture of 35 scale-0 and scale-1 documents. For each cut-off value C_i, precision and recall values were calculated for the scale-0 class and the scale-1 class. Then the precision and recall values were summarised by taking the macro average of both classes' values. Finally, we had 5 pairs of precision and recall values for each of our 28 candidate cut-off values.

Figure 2 shows the resulting 28 different PR curves. The ideal cut-off value will have a PR curve as close to the diagonal, and as far towards the top right corner, as possible. As can be seen in figure 2, although the general trend is for all the curves to be in the top right half of the graph, many of them deviate significantly from the diagonal line. We focused on the six curves closest to the diagonal (by visual inspection), shown in figure 3, for further analysis.

Figure 2: PR curves for all candidate cut-off values

The 6 candidate cut-off values remaining after this step are −0.75, −0.70, −0.65, −0.60, −0.55 and −0.50. The PR curves of those values lie closest to the diagonal line, and largely in the upper right corner. Thus we concluded that one of those 6 test values is the optimal cut-off value C_{0/1} for scale-0 and scale-1 classes. Looking more closely, we can see that the PR curves for −0.65, −0.60, −0.55 and −0.50 lie noticeably closer to the top right-hand corner compared to the PR curves for −0.75 and −0.70. We therefore discard these two, but the remaining curves track each other very closely, too closely for visual discrimination. We therefore calculated the (macro-)average precision and recall values of each cut-off value and plotted these in a scatter plot (figure 4). From this plot, we concluded that the best cut-off value for scale-0 and scale-1 classes is −0.65.

Figure 4: Average of Precision and Recall values

To validate this cut-off value, we also compared f-scores for the candidate cut-off values, computed from these macro-averaged recall and precision values. We only considered the candidate values used in figure 3, as the remaining cut-off values had already been rejected. Table 2 also shows these numbers for the different candidate cut-off values. The f-score of the cut-off value −0.65 has the maximum value. Similarly, the cut-off value C_{1/2} for scale-1 and scale-2 classes was computed, with an optimal value of +1.05.

5 Evaluation of the Calibrated System

In order to demonstrate the effect of the calibration process, we evaluated the calibrated Galadriel system against Pang and Lee (2005)'s dataset and compared this with an evaluation of the uncalibrated version. For this evaluation, we selected 50 random unseen test documents from the dataset and analysed them using Galadriel, giving numerical scores for each document as its output. The output scores were classified according to the Galadriel cut-off values −0.65 (C_{0/1}) and +1.05 (C_{1/2}). Table 3 shows the resulting confusion matrix. It is interesting to note that this optimum score range for the neutral class is quite small in comparison to the total score range of the system (1.70 out of 30), and also not balanced around zero. Table 4 shows precision, recall and f-score results for each class and overall macro-average results, for both the calibrated system and the uncalibrated system, which maps sentiment scores simply on the basis of their sign (negative, zero or positive). The effect of calibrating is to increase the macro-averaged f-score from 0.48 to 0.82. Moreover, the calibrated system gives an overall macro-averaged mean absolute error (MAE) of 0.2167, whereas the uncalibrated system shows 0.5166.
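Before concluding, the cut-off selection procedure of sections 3 and 4 can be summarised in a short sketch. This is a paraphrase under simplifying assumptions (scores held in numpy arrays, one per adjacent class; the f-score, as in Table 2, used to pick the winner), not the authors' implementation, and all names are invented here.

```python
import numpy as np

def macro_pr(lower, higher, cutoff):
    """Macro-averaged precision/recall for one fold, where 'cutoff'
    separates the lower-class scores from the higher-class scores."""
    tp_lo = np.sum(lower <= cutoff)
    fp_lo = np.sum(higher <= cutoff)
    tp_hi = np.sum(higher > cutoff)
    fp_hi = np.sum(lower > cutoff)
    p = (tp_lo / max(tp_lo + fp_lo, 1) + tp_hi / max(tp_hi + fp_hi, 1)) / 2
    r = (tp_lo / max(len(lower), 1) + tp_hi / max(len(higher), 1)) / 2
    return p, r

def best_cutoff(folds, candidates):
    """folds: list of (lower_scores, higher_scores) array pairs with
    different class mixtures; returns the candidate with the best f-score."""
    scored = []
    for c in candidates:
        ps, rs = zip(*(macro_pr(lo, hi, c) for lo, hi in folds))
        p, r = float(np.mean(ps)), float(np.mean(rs))
        f = 2 * p * r / (p + r) if p + r else 0.0  # harmonic mean
        scored.append((f, c))
    return max(scored)[1]
```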
6 Conclusion

This paper presented a novel calibration method to transform numerical sentiment scores into fixed ordered classes. This method uses corpus-based evaluation techniques, as widely used in supervised machine learning approaches, calibrating a system using gold standard labelled data. The effect is to optimise a continuous sentiment analysis system for the discrete classification model represented by the gold standard data. The calibrated system can then be evaluated and compared with other systems by using additional unseen gold standard data for the same model, or applied to new data assumed to follow the same model, with the confidence provided by the evaluation results. The availability of a general calibration method also means that the same system can be calibrated independently for different classification tasks as required.

We also presented a comparison between the performance of a calibrated system and the corresponding uncalibrated system, where sentiment scores are mapped into classes based solely on their sign, and showed that calibration can provide a substantial increase in performance. Although the uncalibrated system might be considered a poor baseline for comparison, it is worth bearing in mind that it is a simple model such as this which often guides the assignment of lexical semantic orientation scores such as those given in Table 1. The effectiveness of calibration is a measure of the extent to which the document analysis process as a whole deviates from the simple lexical model, in a way that is difficult to capture by other means, and reveals interesting biases in the way the process maps sentiment onto scores. In future work, we hope to look at automating the process of selecting the best PR curve, so that the entire calibration process is essentially automatic.
1 Bing Liu's opinion lexicon: www.cs.uic.edu/~liub/; Harvard General Inquirer: www.wjh.harvard.edu/~inquirer/; Vader Sentiment: github.com/cjhutto/vaderSentiment/tree/master/vaderSentiment; SentiWordNet: www.sentiwordnet.isti.cnr.it/; SenticNet: www.sentic.net/downloads/; Taboada et al. (2011)'s lexicon kindly made available by the authors for this research.
2 Such issues can be mitigated by plotting a Precision-Recall-Gain curve (Flach and Kull, 2015) and considering its associated area. However, this is beyond the scope of this paper.
4 www.cs.cornell.edu/people/pabo/movie-review-data/

Table 2: Average Precision, Recall and F-score measures for candidate cut-off values (columns: Cut-off values, Recall, Precision, F)

Table 3: Confusion matrix for the classification

                           Scaled documents
Galadriel scores             0    1    2
Gal_i < −0.65               15    1    0
−0.65 < Gal_i < +1.05        3   16    2
Gal_i > +1.05                2    3   18

Table 4: Comparing performance measures calculated by the calibrated and uncalibrated versions of Galadriel.

Mohamed Aly. 2005. Survey on multiclass classification methods. Neural Networks 19:1-9.
Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. 2009. Evaluation measures for ordinal regression. In Intelligent Systems Design and Applications, 2009. ISDA'09. Ninth International Conference on. IEEE, pages 283-287.
Jesse Davis and Mark Goadrich. 2006. The relationship between precision-recall and ROC curves. In Proceedings of the 23rd International Conference on Machine Learning. ACM, New York, NY, USA, ICML '06, pages 233-240. https://doi.org/10.1145/1143844.1143874.
James P. Egan. 1975. Signal Detection Theory and ROC Analysis. Academic Press, New York.
Andrea Esuli and Fabrizio Sebastiani. 2007. SentiWordNet: A high-coverage lexical resource for opinion mining. Evaluation, pages 1-26.
Peter Flach and Meelis Kull. 2015. Precision-recall-gain curves: PR analysis done right. In Advances in Neural Information Processing Systems 28, Curran Associates, Inc., pages 838-846. http://papers.nips.cc/paper/5867-precision-recall-gain-curves-pr-analysis-done-right.pdf.
Lisa Gaudette and Nathalie Japkowicz. 2009. Evaluation methods for ordinal classification. In Canadian Conference on Artificial Intelligence. Springer, pages 207-210.
Raymond Hsu, Bozhi See, and Alan Wu. 2010. Machine learning for sentiment analysis on the Experience project. Accessed on July 31, 2017. http://cs229.stanford.edu/proj2010/HsuSeeWu-MachineLearningForSentimentAnalysis.pdf.
Moontae Lee and Patrick Grafe. 2010. Multiclass sentiment analysis with restaurant reviews. Accessed on July 31, 2017. https://nlp.stanford.edu/courses/cs224n/2010/reports/pgrafe-moontae.pdf.
Christopher D. Manning and Hinrich Schütze. 1999. Foundations of Statistical Natural Language Processing. MIT Press, Cambridge, MA, USA.
Preslav Nakov, Alan Ritter, Sara Rosenthal, Fabrizio Sebastiani, and Veselin Stoyanov. 2016. SemEval-2016 task 4: Sentiment analysis in Twitter. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016). Association for Computational Linguistics, San Diego, California, pages 1-18. http://www.aclweb.org/anthology/S16-1001.
Tetsuya Nasukawa and Jeonghee Yi. 2003. Sentiment analysis: Capturing favorability using natural language processing. In Proceedings of the 2nd International Conference on Knowledge Capture. ACM, New York, NY, USA, K-CAP '03, pages 70-77. https://doi.org/10.1145/945645.945658.
Prabu Palanisamy, Vineet Yadav, and Harsha Elchuri. 2013. Serendio: Simple and practical lexicon based approach to sentiment analysis. In Proceedings of Second Joint Conference on Lexical and Computational Semantics. Citeseer, pages 543-548.
Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics, Stroudsburg, PA, USA, ACL '05, pages 115-124. https://doi.org/10.3115/1219840.1219855.
Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: Sentiment classification using machine learning techniques. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing - Volume 10. Association for Computational Linguistics, pages 79-86.
Rudy Prabowo and Mike Thelwall. 2009. Sentiment analysis: A combined approach. Journal of Informetrics 3(2):143-157.
Vijay Raghavan, Peter Bollmann, and Gwang S. Jung. 1989. A critical investigation of recall and precision as measures of retrieval system performance. ACM Trans. Inf. Syst. 7(3):205-229. https://doi.org/10.1145/65943.65945.
Hassan Saif, Yulan He, Miriam Fernandez, and Harith Alani. 2016. Contextual semantics for sentiment analysis of Twitter. Information Processing & Management 52(1):5-19.
F. Sharmila Satthar. 2015. Modelling SO-CAL in an inheritance-based sentiment analysis framework. In OASIcs-OpenAccess Series in Informatics. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, volume 49.
Maite Taboada, Julian Brooke, Milan Tofiloski, Kimberly Voll, and Manfred Stede. 2011. Lexicon-based methods for sentiment analysis. Computational Linguistics 37(2):267-307.
Peter D. Turney. 2002. Thumbs up or thumbs down?: Semantic orientation applied to unsupervised classification of reviews. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics. ACL, pages 417-424.
Muqtar Unnisa, Ayesha Ameen, and Syed Raziuddin. 2016. Opinion mining on Twitter data using unsupervised learning technique. International Journal of Computer Applications 148(12).
Robert West, Hristo S. Paskov, Jure Leskovec, and Christopher Potts. 2014. Exploiting social network structure for person-to-person sentiment analysis. arXiv preprint arXiv:1409.2450.
Taras Zagibalov and John Carroll. 2008. Automatic seed word selection for unsupervised sentiment classification of Chinese text. In Proceedings of the 22nd International Conference on Computational Linguistics - Volume 1. Association for Computational Linguistics, pages 1073-1080.
8,473,142
Belgisch Staatsblad Corpus: Retrieving French-Dutch Sentences from Official Documents
We describe the compilation of a large corpus of French-Dutch sentence pairs from official Belgian documents which are available in the online version of the publication Belgisch Staatsblad/Moniteur belge, and which have been published between 1997 and 2006. After downloading files in batch, we filtered out documents which have no translation in the other language, documents which contain several languages (by checking on discriminating words), and pairs of documents with a substantial difference in length. We segmented the documents into sentences and aligned the latter, which resulted in 5 million sentence pairs (only one-to-one links were included in the parallel corpus); there are 2.4 million unique pairs. Sample-based evaluation of the sentence alignment results indicates a near 100% accuracy, which can be explained by the text genre, the procedure filtering out weakly parallel articles and the restriction to one-to-one links. The corpus is larger than a number of well-known French-Dutch resources. It is made available to the community. Further investigation is needed in order to determine the original language in which documents were written.
[ 26124282, 38407095 ]
Belgisch Staatsblad Corpus: Retrieving French-Dutch Sentences from Official Documents Tom Vanallemeersch tallem@ccl.kuleuven.be Centre for Computational Linguistics Lessius University College AntwerpenBelgium K.U.Leuven Belgium Belgisch Staatsblad Corpus: Retrieving French-Dutch Sentences from Official Documents We describe the compilation of a large corpus of French-Dutch sentence pairs from official Belgian documents which are available in the online version of the publication Belgisch Staatsblad/Moniteur belge, and which have been published between 1997 and 2006. After downloading files in batch, we filtered out documents which have no translation in the other language, documents which contain several languages (by checking on discriminating words), and pairs of documents with a substantial difference in length. We segmented the documents into sentences and aligned the latter, which resulted in 5 million sentence pairs (only one-to-one links were included in the parallel corpus); there are 2.4 million unique pairs. Sample-based evaluation of the sentence alignment results indicates a near 100% accuracy, which can be explained by the text genre, the procedure filtering out weakly parallel articles and the restriction to one-to-one links. The corpus is larger than a number of well-known French-Dutch resources. It is made available to the community. Further investigation is needed in order to determine the original language in which documents were written. Introduction The Belgian authorities daily disclose a number of articles with official texts, such as laws, decrees etc., through a publication called the Belgisch Staatsblad in Dutch and Moniteur belge in French. It appears on paper and, since a number of years, online also 1 . The official languages of Belgium are Dutch, French and German. As the latter is the native language of less than 1 percent of the population, the publication contains mainly articles in French and Dutch, and relatively few in German 2 . Some articles are a translation of another article. The online version of the Belgisch Staatsblad is targeted towards legal and other specialists looking for specific articles. It provides a search interface, allowing them to enter keywords, a range of dates, the language of the articles, etc. The online version is also interesting for translators, but for their purposes (e.g. finding out the possible Dutch equivalents of a French term), the search interface is inefficient, as articles need to be consulted one by one and no button is provided for switching to the equivalent article in another language. The online data are also potentially interesting for building a statistical machine translation system, creating a bilingual lexicon, performing translation studies etc. Therefore, we have built a French-Dutch parallel corpus from these data. We focused on these two languages because of their strong representation within the whole set of articles. In the following sections, we present the procedure for obtaining and filtering online articles, describe the sentence alignment procedure, compare the corpus with other resources, and discuss the format in which it is made available. Finally, we present conclusions and future research. We have rounded some of the article, sentence and word counts for the sake of readability, using K as an abbreviation for thousands and m for millions. 
Obtaining and Filtering Documents

We downloaded a large number of articles, in a similar fashion to the web crawling procedure which led to the Europarl corpus (Koehn, 2005). As far as the intellectual property of the online version of the Belgisch Staatsblad is concerned, it is legally stated that the electronic files can be used freely, for personal or commercial use.^3 We created a list of URLs to be downloaded in batch by a web crawler.^4 By consulting some websites specialized in legal matters, we found out the form of a URL that leads directly to the summary of all articles which appeared during one day. Such a URL contains keywords whose values indicate language and date (year, month and day). We automatically generated a list of all possible URLs for a period of 10 years (1997 until 2006), for both languages, 1997 being the first year for which a substantial amount of summaries were digitally available. In the summaries which we downloaded in batch using the automatically generated list, each article is tagged with a so-called numac, a unique code starting with a year. Web sites on legal matters provided information on the form of a URL that leads directly to a specific article. Based on the numacs in the daily summaries, we created a list of URLs containing keywords whose values indicate the numac, the date of the summary and the language.^5 By downloading those URLs in batch, we obtained a total of 199K articles. The whole download process took us several days. We converted the articles, downloaded as HTML files, into pure text using a utility,^6 configuring it in such a way that paragraphs were stored as a single line rather than a set of lines.

We filtered out a number of the downloaded articles, applying the following cascade of filters (illustrated by Figure 1):

• We filtered out articles available in only one language (7K in French, 11K in Dutch), based on the fact that corresponding documents in French and Dutch have their numac in common.

• We filtered out pairs of articles with a substantial difference in length. Such a difference is caused, for instance, by the fact that an article focuses on a language-specific political entity such as the Communauté française and provides the other language group with a less detailed translation. These article pairs could present difficulties during sentence alignment. To set a threshold, we randomly selected 50 parallel articles, and verified whether the articles were completely monolingual (see next filter) and completely parallel to each other. On average, French articles appeared to be 5% shorter than their Dutch counterpart (in terms of characters, after removing redundant spaces); the biggest differences involved a French article which was 13% shorter and one which was 5% longer than its Dutch counterpart. We decided to filter out parallel pairs in which the French article is more than 20% shorter or longer than its counterpart. This resulted in a reduction by 599 parallel articles.

• We filtered out parallel pairs in which less than 90% of the French article consists of French text (e.g. a mix of French and German in texts concerning the German-speaking part of Belgium), or less than 90% of the Dutch article consists of Dutch text. For each of three languages (French, Dutch and German), we created a list of discriminating words, i.e. words that are unique to a language compared to the other two (such as certain function words). For each article pair, we estimated the portion written in French by comparing the number of occurrences of French discriminating words with the number of occurrences of any discriminating word, be it a French, Dutch or German one (sketched below). Similarly, we estimated the Dutch and the German portions. We preferred this approach over standard language identification techniques (Padró and Padró, 2004), as the latter primarily deal with fully monolingual files. By setting a threshold of 90%, we didn't filter out articles with a sporadic text fragment in another language (e.g. a reference to a book). This resulted in a reduction by 6K parallel pairs, leaving us with 85K pairs.

Figure 1: Reduction of number of articles through filtering
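As an illustration of the length and language filters, here is a minimal sketch. The discriminating-word sets below are tiny invented stand-ins (the paper does not publish its lists), and the function names and the exact tokenisation are assumptions made for this sketch.

```python
# Illustrative sketch of the length and language filters; the
# discriminating-word lists are invented stand-ins, not the real lists.
DISCRIMINATING = {
    "fr": {"être", "avec", "cette", "dont", "pour"},
    "nl": {"het", "een", "niet", "wordt", "zijn"},
    "de": {"und", "nicht", "wird", "für", "durch"},
}

def language_portion(text, lang):
    """Share of discriminating-word occurrences that belong to 'lang'."""
    tokens = text.lower().split()
    counts = {l: sum(t in words for t in tokens)
              for l, words in DISCRIMINATING.items()}
    total = sum(counts.values())
    return counts[lang] / total if total else 0.0

def keep_pair(fr_text, nl_text):
    fr_len, nl_len = len(fr_text), len(nl_text)      # chars, spaces normalised
    if not 0.8 * nl_len <= fr_len <= 1.2 * nl_len:   # >20% length difference
        return False
    return (language_portion(fr_text, "fr") >= 0.9 and
            language_portion(nl_text, "nl") >= 0.9)
```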
Sentence Alignment

We wrote a script that converts the running text in the articles into a list of sentences (although, more accurately, we should talk about segments, as not all independent text units are sentences). The script disambiguates periods, for instance by recognizing abbreviations. As the average article size in terms of words is rather low for both languages (939 for French, 919 for Dutch), we didn't undertake paragraph alignment as a preparatory step. We performed sentence alignment using the GMA (Geometric Mapping and Alignment) system of Melamed (2000). This system applies two steps, SIMR and GSA. The SIMR (Smooth Injective Map Recognizer) algorithm creates a list of anchors. These are potential points of correspondence, which link identical words, cognates (similar orthography) or words that are equivalent according to a bilingual lexicon. The algorithm also uses stop word lists (e.g. function words) in order to avoid linking such words and causing a proliferation of anchors. The GSA (Geometric Segment Alignment) postprocessor links one or more source sentences to one or more target sentences by grouping anchors. We restricted language-specific knowledge to stop word lists, as an extensive bilingual lexicon was not at hand.

The alignment resulted in a parallel corpus with a total of 5m one-to-one links. We ignored links that involve more than one sentence in at least one language (one-to-many or many-to-many links) and null links (sentences without an equivalent), assuming that they may be the product of a lack of alignment evidence. We estimated the quality of the sentence alignment results by evaluating a small sample of aligned articles of different sizes, a sample of aligned portions of the two largest article pairs in the corpus, and a random sample of 500 sentence pairs taken from the whole corpus. It turned out that the alignment quality was almost perfect. Apart from a serious alignment problem caused by a glossary at the end of the largest article pair (the alphabetic order of the items in each language disturbs the positional correspondence of translation equivalents), we found only one completely incorrect link between two sentences, as well as a sporadic link that was partially incorrect due to segmentation errors caused by a colon inside brackets or an unrecognized abbreviation. The high alignment quality can be explained by the following factors: we had previously filtered out article pairs that are potentially hard to align, we restricted ourselves to one-to-one links, legally oriented translations are accurate rather than creative, and corresponding articles contain many identical words (proper names, dates, section numbers etc.).
Comparison with Other Resources

As a basis for comparing the size of the parallel corpus with that of other resources, we looked at the degree of repetition among sentences. Both the French and Dutch sentences contain on average 14 words. Among the 5m sentence pairs, there are 2.4m unique pairs, which contain a total of 52.3m French words and 52.6m Dutch words, and 22 words per sentence on average. This difference in average number of words indicates that especially shorter sentences are often repeated in the corpus. We also simplified the list of unique sentence pairs by removing the pairs in which one or both of the sentences consist of non-letters only (e.g. a date), by replacing non-letter sequences in the other sentences with a space, and by lowercasing the letters in those sentences (a sketch of this normalisation is given at the end of this section). This led to a total of 2.0m unique simplified sentence pairs, indicating a substantial amount of repetition caused by differences in punctuation and case (e.g. repetition among "Vroedvrouw.", "vroedvrouw", "Vroedvrouw :" etc.). Figure 2 shows the relation between number of words and number of sentences (averaged over both languages) before alignment, after alignment, after removing non-unique sentence pairs and after simplifying the list of unique sentence pairs.

Even when we merely count unique sentence pairs in the corpus, its size is larger than that of a number of existing resources for the French-Dutch language pair. For instance, both the French-English subset and the Dutch-English subset of the Europarl corpus contain 1.3m sentence pairs and around 40m words per language. The JRC-Acquis corpus (Steinberger et al., 2006) is based on the Acquis Communautaire (the body of common rights and obligations binding all the Member States together within the EU) and was produced using two alignment tools which linked paragraphs that "can contain a small number of sentences, but they sometimes contain sentence parts (ending with a semicolon or a comma)" (p. 2144). The French-Dutch subset of the JRC-Acquis corpus contains 1.3m paragraph links, 35m French words and 33m Dutch words. The recently released Dutch Parallel Corpus (Rura et al., 2008) contains a total of 10m words. Its purpose is different from the above corpora, as it is a balanced corpus for two language pairs (Dutch-English and Dutch-French) and different text types, and contains linguistic annotations. The sentence alignment was performed by three tools, among which GMA; the results of the tools were merged.

As for the determination of the original language of a sentence pair, which is important for instance when we want to study translation effects (Johansson, 2007), the relevant information is coded in the Dutch Parallel Corpus but not or insufficiently in the other corpora mentioned. In Europarl, the tag indicating the language used by the speaker is not consistently coded on all speeches (van Halteren, 2008). In JRC-Acquis, the original language is not indicated at all. In the case of our corpus, three alternatives apply: the original text was written in French, the original text was written in Dutch, or some parts of the original text were written in French and some in Dutch (source: personal communication with a translator working for the Belgian authorities). However, the Belgisch Staatsblad doesn't indicate which of the three alternatives applied for a specific article pair. It may be worth investigating the approach by van Halteren (2008), who trained a classifier on Europarl speeches known to contain original sentences, in order to predict the source language of other speeches.
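The simplification step mentioned above can be restated in a few lines. The regular expression is one reading of "replace non-letter sequences with a space and lowercase"; the paper does not give its exact implementation, so treat this as an approximation.

```python
import re

NON_LETTER = re.compile(r"[\W\d_]+")  # any run of non-letter characters

def simplify(sentence):
    """Lowercase and collapse non-letter runs to single spaces."""
    return NON_LETTER.sub(" ", sentence).strip().lower()

def unique_simplified(pairs):
    """pairs: iterable of (french, dutch) sentence strings."""
    seen = set()
    for fr, nl in pairs:
        fr_s, nl_s = simplify(fr), simplify(nl)
        if fr_s and nl_s:           # drop pairs where a side has no letters
            seen.add((fr_s, nl_s))
    return seen

print(simplify("Vroedvrouw :") == simplify("Vroedvrouw."))  # True
```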
Availability

The corpus is made available to the community in the following formats:^7

• Downloaded articles in HTML; their file names contain the date of publication, numac and language.

• One-to-one links in TMX format, an open standard for exchanging translation memories.^8 Metadata: each sentence pair is associated with a date of publication and a numac, and each sentence is associated with a language.

• Pairs of files containing the French and Dutch sentences that were aligned; the file names contain the date of publication, the numac and the language. A sentence in a French file has the same line number as its translation equivalent in the Dutch file.

Conclusions and Future Research

We have created a French-Dutch bilingual corpus containing legislative information, whose size (5m one-to-one sentence links, 2.4m unique sentence pairs) is larger than that of well-known existing resources for the language pair in question. Articles containing text in multiple languages were excluded from alignment by checking on words that are unique to one language compared to the two other languages. Sample-based evaluation of the sentence alignment results indicates a near 100% accuracy, which can be explained by the text genre, the procedure for filtering out weakly parallel article pairs, and the restriction to one-to-one links. The corpus is made available to the community. The fact that the original language of the articles is currently not known requires further investigation of the data in order to make the corpus apt for studying translation effects. Other future research on our Belgisch Staatsblad corpus will involve the construction of a statistical machine translation system, the extraction of a bilingual lexicon and term candidates, and word alignment based on a bilingual lexicon and word fragments (Vanallemeersch and Wermuth, 2008).

Figure 2: Relation words/sentences according to degree of corpus reduction

1 http://www.ejustice.just.fgov.be/cgi/welcome.pl (last consultation: 16 March 2010)
2 Figures of November 2009: 38% of the available articles written in French, 60% in Dutch and 2% in German.
3 Article 477 of the law (programmawet) of 24 December 2002.
4 http://www.gnu.org/software/wget
5 Example of keywords and values: numac = 2006011348, article lang = N, pub date = 2006-09-04.
6 http://www.nirsoft.net
7 http://www.ccl.kuleuven.be/~tallem
8 http://www.lisa.org/tmx

S. Johansson. 2007. Seeing through Multilingual Corpora. Studies in Corpus Linguistics 26. John Benjamins, Philadelphia.
P. Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proc. of MT Summit, pages 79-86.
D. Melamed. 2000. Pattern recognition for mapping bitext correspondence. In J. Véronis, editor, Parallel Text Processing: Alignment and Use of Translation Corpora, pages 25-47.
2004. Comparing methods for language identification. Procesamiento del Lenguaje Natural, (33):155-162.
L. Rura, W. Vandeweghe, and M. Montero Perez. 2008. Designing a parallel corpus as a multifunctional translator's aid. In Proceedings of XVIII FIT World Congress. FIT, Translators Association of China.
R. Steinberger, B. Pouliquen, A. Widiger, C. Ignat, T. Erjavec, D. Tufiş, and D. Varga. 2006. The JRC-Acquis: A multilingual aligned parallel corpus with 20+ languages. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC), pages 2142-2147.
H. van Halteren. 2008. Source language markers in EUROPARL translations. In COLING '08: Proceedings of the 22nd International Conference on Computational Linguistics, pages 937-944, Morristown, NJ, USA. Association for Computational Linguistics.
T. Vanallemeersch and C. Wermuth. 2008. Linguistics-based word alignment for medical translators. Journal of Specialized Translation (Jostrans), (9).
19,558,838
Modeling Context Words as Regions: An Ordinal Regression Approach to Word Embedding
Vector representations of word meaning have found many applications in the field of natural language processing. Word vectors intuitively represent the average context in which a given word tends to occur, but they cannot explicitly model the diversity of these contexts. Although region representations of word meaning offer a natural alternative to word vectors, only a few methods have been proposed that can effectively learn word regions. In this paper, we propose a new word embedding model which is based on SVM regression. We show that the underlying ranking interpretation of word contexts is sufficient to match, and sometimes outperform, the performance of popular methods such as Skip-gram. Furthermore, we show that by using a quadratic kernel, we can effectively learn word regions, which outperform existing unsupervised models for the task of hypernym detection.
[ 1526915, 6585702, 13468104, 15214701, 931054, 9674799, 11440692, 14305557, 1957433, 5959482, 12730203, 7634844 ]
Modeling Context Words as Regions: An Ordinal Regression Approach to Word Embedding. Shoaib Jameel (jameels1@cardiff.ac.uk) and Steven Schockaert (schockaerts1@cardiff.ac.uk), School of Computer Science & Informatics, Cardiff University. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), Vancouver, Canada, August 3 - August 4, 2017. Association for Computational Linguistics.

Vector representations of word meaning have found many applications in the field of natural language processing. Word vectors intuitively represent the average context in which a given word tends to occur, but they cannot explicitly model the diversity of these contexts. Although region representations of word meaning offer a natural alternative to word vectors, only a few methods have been proposed that can effectively learn word regions. In this paper, we propose a new word embedding model which is based on SVM regression. We show that the underlying ranking interpretation of word contexts is sufficient to match, and sometimes outperform, the performance of popular methods such as Skip-gram. Furthermore, we show that by using a quadratic kernel, we can effectively learn word regions, which outperform existing unsupervised models for the task of hypernym detection.

Introduction

Word embedding models such as Skip-gram (Mikolov et al., 2013b) and GloVe (Pennington et al., 2014) represent words as vectors of typically around 300 dimensions. The relatively low-dimensional nature of these word vectors makes them ideally suited for representing textual input to neural network models (Goldberg, 2016; Nayak, 2015). Moreover, word embeddings have been found to capture many interesting regularities (Mikolov et al., 2013b; Kim and de Marneffe, 2013; Gupta et al., 2015; Rothe and Schütze, 2016), which makes it possible to use them as a source of semantic and linguistic knowledge, and to align word embeddings with visual features (Frome et al., 2013) or across different languages (Zou et al., 2013; Faruqui and Dyer, 2014). Notwithstanding the practical advantages of representing words as vectors, a few authors have advocated the idea that words may be better represented as regions (Erk, 2009), possibly with gradual boundaries (Vilnis and McCallum, 2015). One important advantage of region representations is that they can distinguish words with a broad meaning from those with a more narrow meaning, and should thus in principle be better suited for tasks such as hypernym detection and taxonomy learning. However, it is currently not well understood how such region based representations can best be learned. One possible approach, suggested in (Vilnis and McCallum, 2015), is to learn a multivariate Gaussian for each word, essentially by requiring that words which frequently occur together are represented by similar Gaussians. However, for large vocabularies, this is computationally only feasible with diagonal covariance matrices. In this paper, we propose a different approach to learning region representations for words, which is inspired by a geometric view of the Skip-gram model.
Essentially, Skip-gram learns two vectors $p_w$ and $\tilde{p}_w$ for each word $w$, such that the probability that a word $c$ appears in the context of a target word $t$ can be expressed as a function of $p_t \cdot \tilde{p}_c$ (see Section 2). This means that for each threshold $\lambda \in [-1, 1]$ and context word $c$, there is a hyperplane $H^c_\lambda$ which (approximately) separates the words $t$ for which $p_t \cdot \tilde{p}_c \geq \lambda$ from the others. Note that this hyperplane is completely determined by the vector $\tilde{p}_c$ and the choice of $\lambda$. An illustration of this geometric view is shown in Figure 1(a), where e.g. the word $c$ is strongly related to $a$ (i.e. $a$ has a high probability of occurring in the context of $c$) but not closely related to $b$. Note in particular that there is a half-space containing those words which are strongly related to $a$ (w.r.t. a given threshold $\lambda$). Our contribution is twofold. First, we empirically show that effective word embeddings can be learned from purely ordinal information, which stands in contrast to the probabilistic view taken by e.g. Skip-gram and GloVe. Specifically, we propose a new word embedding model which uses (a ranking equivalent of) max-margin constraints to impose the requirement that $p_t \cdot \tilde{p}_c$ should be a monotonic function of the probability $P(c|t)$ of seeing $c$ in the context of $t$. Geometrically, this means that, like Skip-gram, our model associates with each context word a number of parallel hyperplanes. However, unlike in the Skip-gram model, only the relative position of these hyperplanes is imposed (i.e. if $\lambda_1 < \lambda_2 < \lambda_3$ then $H^c_{\lambda_2}$ should occur between $H^c_{\lambda_1}$ and $H^c_{\lambda_3}$). Second, by using a quadratic kernel for the max-margin constraints, we obtain a model that can represent context words as a set of nested ellipsoids, as illustrated in Figure 1(b). From these nested ellipsoids we can then estimate a Gaussian which acts as a convenient region based word representation. Note that our model thus jointly learns a vector representation for each word (i.e. the target word representations) as well as a region based representation (i.e. the nested ellipsoids representing the context words). We present experimental results which show that the region based representations are effective for measuring synonymy and hypernymy. Moreover, perhaps surprisingly, the region based modeling of context words also benefits the target word vectors, which match, and in some cases outperform, the vectors obtained by standard word embedding models on various benchmark evaluation tasks.

Background and Related Work

Word Embedding

Various methods have already been proposed for learning vector space representations of words, e.g. based on matrix factorization (Turney and Pantel, 2010) or neural networks. Here we briefly review Skip-gram and GloVe, two popular models which share some similarities with our model. The basic assumption of Skip-gram (Mikolov et al., 2013b) is that the probability $P(c|t)$ of seeing word $c$ in the context of word $t$ is given as:

$$P(c|t) = \frac{\exp(p_t \cdot \tilde{p}_c)}{\sum_{c'} \exp(p_t \cdot \tilde{p}_{c'})}$$

In principle, based on this view, the target vectors $p_w$ and context vectors $\tilde{p}_w$ could be learned by maximizing the likelihood of a given corpus.
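As a concrete illustration, the following NumPy sketch (ours, not the authors' code) computes this Skip-gram probability from matrices of target and context vectors; note that the normalisation runs over the full vocabulary:

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 10000, 300                 # vocabulary size and embedding dimension
P = rng.normal(size=(V, d))       # target vectors p_w (one row per word)
P_ctx = rng.normal(size=(V, d))   # context vectors (tilde p_w)

def context_prob(t, c):
    """P(c | t) under the Skip-gram softmax."""
    scores = P_ctx @ P[t]          # dot product with every context vector
    scores -= scores.max()         # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return probs[c]

# The denominator sums over all V context vectors, which is what makes
# exact likelihood maximisation expensive for large vocabularies.
print(context_prob(3, 7))
```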
Since this is computationally not feasible, however, it was proposed in (Mikolov et al., 2013b) to instead optimize the following objective:

$$\sum_{i=1}^{N} \left( \sum_{c \in C_i} \log(\sigma(p_{w_i} \cdot \tilde{p}_c)) + \sum_{c' \in C'_i} \log(\sigma(-p_{w_i} \cdot \tilde{p}_{c'})) \right)$$

where the left-most summation is over all $N$ word occurrences in the corpus, $w_i$ is the $i$-th word in the corpus, $C_i$ are the words appearing in the context of $w_i$ and $C'_i$ consists of $k \cdot |C_i|$ randomly chosen words, called the negative samples for $w_i$. The context $C_i$ contains the $t_i$ words immediately preceding and succeeding $w_i$, where $t_i$ is randomly sampled from $\{1, ..., t_{max}\}$ for each $i$. The probability of choosing word $w$ as a negative sample is proportional to $\left(\frac{occ(w)}{N}\right)^{0.75}$, with $occ(w)$ the number of occurrences of word $w$ in the corpus. Finally, to reduce the impact of frequent words, some word occurrences are removed from the corpus before applying the model, with the probability of removing an occurrence of word $w$ being $1 - \sqrt{\frac{\theta \cdot N}{occ(w)}}$. Default parameter values are $t_{max} = 5$ and $\theta = 10^{-5}$.

GloVe is another popular model for word embedding (Pennington et al., 2014). Rather than explicitly considering all word occurrences, it directly uses a global co-occurrence matrix $X = (x_{ij})$ where $x_{ij}$ is the number of times the word $w_j$ appears in the context of $w_i$. Like Skip-gram, it learns both a target vector $p_w$ and context vector $\tilde{p}_w$ for each word $w$, but instead learns these vectors by optimizing the following objective:

$$\sum_i \sum_j f(x_{ij}) \left( p_{w_i} \cdot \tilde{p}_{w_j} + b_{w_i} + \tilde{b}_{w_j} - \log x_{ij} \right)^2$$

where $b_{w_i}$ and $\tilde{b}_{w_j}$ are bias terms, and $f$ is a weighting function to reduce the impact of very rare terms, defined as:

$$f(x_{ij}) = \begin{cases} \left(\frac{x_{ij}}{x_{max}}\right)^{\alpha} & \text{if } x_{ij} < x_{max} \\ 1 & \text{otherwise} \end{cases}$$

The default values are $x_{max} = 100$ and $\alpha = 0.75$.

Region Representations

The idea of representing words as regions was advocated in (Erk, 2009), as a way of modeling the diversity of the contexts in which a word appears. It was argued that such regions could be used to more accurately model the meaning of polysemous words and to model lexical entailment. Rather than learning region representations directly, it was proposed to use a vector space representation of word occurrences. Two alternatives were investigated for estimating a region from these occurrence vectors, respectively inspired by prototype and exemplar based models of categorization. The first approach defines the region as the set of points whose weighted distance to a prototype vector for the word is within a given radius, while the second approach relies on the k-nearest neighbor principle. In contrast, (Vilnis and McCallum, 2015) proposed a method that directly learns a representation in which each word corresponds to a Gaussian. The model uses an objective function which requires the Gaussians of words that co-occur to be more similar than the Gaussians of words of negative samples (which are obtained as in the Skip-gram model). Two similarity measures are considered: the inner product of the Gaussians and the KL-divergence. It is furthermore argued that the asymmetric nature of KL-divergence makes it a natural choice for modeling hypernymy. In particular, it is proposed that the word embeddings could be improved by imposing that words that are in a hypernym relation have a low KL-divergence, allowing for a natural way to combine corpus statistics with available taxonomies. Finally, another model that represents words using probability distributions was proposed in (Jameel and Schockaert, 2016).
However, their model is aimed at capturing the uncertainty about vector representations, rather than at modeling the diversity of words. They show that capturing this uncertainty leads to vectors that outperform those of the GloVe model, on which their model is based. However, the resulting distributions are not suitable for modeling hypernymy. For example, since more information is available for general terms than for narrow terms, the distributions associated with general terms have a smaller variance, whereas approaches that are aimed at modeling the diversity of words have the opposite behavior.

Ranking Embedding

The model we propose only relies on the rankings induced by each context word, and tries to embed these rankings in a vector space. This problem of "ranking embedding" has already been studied by a few authors. An elegant approach for embedding a given set of rankings, based on the product order, is proposed in (Vendrov et al., 2016). However, this method is specifically aimed at completing partially ordered relations (such as taxonomies), based on observed statistical correlations, and would not be directly suitable as a basis for a word embedding method. The computational complexity of the ranking embedding problem was characterized in (Schockaert and Lee, 2015), where the associated decision problem was shown to be complete for the class $\exists\mathbb{R}$ (which sits between NP and PSPACE). Note that the problem of ranking embedding is different from the learning-to-rank task (Liu, 2009). In the former case we are interested in learning a vector space representation that is somehow in accordance with a given completely specified set of rankings, whereas in the latter case the focus is on representing incompletely specified rankings in a given vector space representation.

Ordinal Regression Word Embedding

Learning the Embedding

In this section we explain how a form of ordinal regression can be used to learn both word vectors and word regions at the same time. First we introduce some notation. Recall that the Positive Pointwise Mutual Information (PPMI) between two words $w_i$ and $w_j$ is defined as $PPMI(w_i, w_j) = \max(0, PMI(w_i, w_j))$, with $PMI(w_i, w_j)$ given by:

$$\log \frac{n(w_i, w_j) \cdot \left(\sum_{w \in W} \sum_{w' \in W} n(w, w')\right)}{\left(\sum_{w \in W} n(w_i, w)\right) \cdot \left(\sum_{w \in W} n(w, w_j)\right)}$$

where we write $n(w_i, w_j)$ for the number of times word $w_j$ occurs in the context of $w_i$, and $W$ represents the vocabulary. For each word $w_j$, we write $W^j_0, ..., W^j_{n_j}$ for the stratification of the words in the vocabulary according to their PPMI value with $w_j$, i.e. we have that:
1. $PPMI(w, w_j) = 0$ for $w \in W^j_0$;
2. $PPMI(w, w_j) < PPMI(w', w_j)$ for $w \in W^j_i$ and $w' \in W^j_k$ with $i < k$; and
3. $PPMI(w, w_j) = PPMI(w', w_j)$ for $w, w' \in W^j_i$.
As a toy example, suppose $W = \{w_1, w_2, w_3, w_4, w_5\}$ and: $PPMI(w_2, w_1) = 3.4$, $PPMI(w_3, w_1) = 4.1$, $PPMI(w_4, w_1) = 0$, $PPMI(w_5, w_1) = 0$, $PPMI(w_1, w_1) = 0$. Then we would have $W^1_0 = \{w_1, w_4, w_5\}$, $W^1_1 = \{w_2\}$ and $W^1_2 = \{w_3\}$ (this stratification is sketched in code below).
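A minimal Python sketch of the PPMI stratification just described is given below. It is for exposition only: function names are ours, and a real implementation would precompute the marginal counts once rather than per word.

```python
import math
from collections import defaultdict

def strata(n, wj, vocab):
    """Group the vocabulary into strata W^j_0 (zero PPMI) < W^j_1 < ...
    according to PPMI(w, wj); n[(a, b)] = times b occurs in the context of a."""
    total = sum(n.values())
    row = {w: sum(n.get((w, v), 0) for v in vocab) for w in vocab}
    col_wj = sum(n.get((w, wj), 0) for w in vocab)
    by_value = defaultdict(set)
    for w in vocab:
        joint = n.get((w, wj), 0)
        val = 0.0
        if joint and row[w] and col_wj:
            # PMI, clipped at zero to give PPMI
            val = max(0.0, math.log(joint * total / (row[w] * col_wj)))
        by_value[val].add(w)
    zeros = by_value.pop(0.0, set())
    # W_0 collects the zero-PPMI words; the remaining strata are ordered
    # by increasing PPMI value.
    return [zeros] + [by_value[v] for v in sorted(by_value)]
```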
To learn the word embedding, we use the following objective function, which requires that for each context word $w_j$ there is a sequence of parallel hyperplanes that separate the representations of the words in $W^j_{i-1}$ from the representations of the words in $W^j_i$ ($i \in \{1, ..., n_j\}$):

$$\sum_j \sum_{i=1}^{n_j} \frac{pos(j, i-1) + neg(j, i)}{|W^j_{i-1} \cup W^j_i|} + \lambda \|\tilde{p}_{w_j}\|^2$$

where

$$pos(j, i-1) = \sum_{w \in W^j_{i-1}} \left[1 - (\phi(p_w) \cdot \tilde{p}_{w_j} + b^i_j)\right]^2_+$$

$$neg(j, i) = \sum_{w \in W^j_i} \left[1 + (\phi(p_w) \cdot \tilde{p}_{w_j} + b^i_j)\right]^2_+$$

subject to $b^1_j < ... < b^{n_j}_j$ for each $j$.¹ Note that we write $[x]_+$ for $\max(0, x)$ and $\phi$ denotes the feature map of the considered kernel function. In this paper, we will in particular consider linear and quadratic kernels. If a linear kernel is used, then $\phi$ is simply the identity function. Using a quadratic kernel leads to a quadratic increase in the dimensionality of $\phi(p_w)$ and $\tilde{p}_{w_j}$. In practice, we found our model to be about 3 times slower when a quadratic kernel is used, when the word vectors $p_w$ are chosen to be 300-dimensional. Note that $\tilde{p}_{w_j}$ and $b^i_j$ define a hyperplane, separating the kernel space into a positive and a negative half-space. The constraints of the form $pos(j, i-1)$ essentially encode that the elements from $W^j_{i-1}$ should be represented in the positive half-space, whereas the constraints of the form $neg(j, i)$ encode that the elements from $W^j_i$ should be represented in the negative half-space. When using a linear kernel, the model is similar in spirit to Skip-gram, in the sense that it associates with each context word a sequence of parallel hyperplanes. In our case, however, only the ordering of these hyperplanes is specified, i.e. the specific offsets $b^i_j$ are learned. In other words, we make the assumption that the higher $PPMI(w, w_j)$, the stronger $w$ is related to $w_j$, but we do not otherwise assume that the numerical value of $PPMI(w, w_j)$ is relevant. When using a quadratic kernel, each context word is essentially modeled as a sequence of nested ellipsoids. This gives the model a lot more freedom to satisfy the constraints, which may potentially lead to more informative vectors. The model is similar in spirit to the fixed margin variant for ranking with large-margin constraints proposed in (Shashua and Levin, 2002), but with the crucial difference that we are learning word vectors and hyperplanes at the same time, rather than finding hyperplanes for a given vector space representation. We use stochastic gradient descent to optimize the proposed objective. Note that we use a squared hinge loss, which makes optimizing the objective more straightforward. As usual, the parameter $\lambda$ controls the trade-off between maintaining a wide margin and minimizing classification errors. Throughout the experiments we have kept $\lambda$ at a default value of 0.5. We have also added L2 regularization for the word vectors $p_w$ with a weight of 0.01, which was found to increase the stability of the model. In practice, $W^j_0$ is typically very large (containing most of the vocabulary), which would make the model too inefficient. To address this issue, we replace it by a small subsample, which is similar in spirit to the idea of negative sampling in the Skip-gram model. In our experiments we use $2k$ randomly sampled words from $W$, where $k = \sum_{i=1}^{n_j} |W^j_i|$ is the total number of positive samples. We simply use a uniform distribution to obtain the negative samples, as initial experiments showed that using other sampling strategies had almost no effect on the result.
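The following NumPy sketch shows the contribution of a single context word to this objective, for fixed vectors. It is illustrative (names and data layout are assumptions of ours, and the strata representation matches the sketch above); a real training loop would differentiate this loss with respect to both the word vectors and the offsets.

```python
import numpy as np

def hinge_sq(x):
    # squared hinge: [x]_+^2
    return np.maximum(0.0, x) ** 2

def context_word_loss(phi_p, strata_j, p_ctx, b, lam=0.5):
    """Ordinal regression loss for one context word w_j.

    phi_p: dict mapping word -> phi(p_w) as a NumPy array
    strata_j: list of word sets [W_0, ..., W_n], ordered by PPMI
    p_ctx: tilde p_{w_j}; b: offsets stored so that b[0] < ... < b[n-1]
    """
    assert all(b[i] < b[i + 1] for i in range(len(b) - 1)), "offsets must be ordered"
    loss = 0.0
    for i in range(1, len(strata_j)):
        bi = b[i - 1]  # offset b^i_j in zero-indexed storage
        # W_{i-1} should land in the positive half-space, W_i in the negative one
        pos = sum(hinge_sq(1 - (phi_p[w] @ p_ctx + bi)) for w in strata_j[i - 1])
        neg = sum(hinge_sq(1 + (phi_p[w] @ p_ctx + bi)) for w in strata_j[i])
        loss += (pos + neg) / len(strata_j[i - 1] | strata_j[i])
    return loss + lam * float(p_ctx @ p_ctx)
```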
Using Region Representations

When using a quadratic kernel, the hyperplanes defined by the vector $\tilde{p}_{w_j}$ and offsets $b^i_j$ define a sequence of nested ellipsoids. To represent the word $w_j$, we estimate a Gaussian from these nested ellipsoids. The use of Gaussian representations is computationally convenient and intuitively acts as a form of smoothing. In Section 3.2.1 we first explain how these Gaussians are estimated, after which we explain how they are used for measuring word similarity in Section 3.2.2.

Estimating Gaussians

Rather than estimating the Gaussian representation of a given word $w_j$ from the vector $\tilde{p}_{w_j}$ and offsets $b^i_j$ directly, we will estimate it from the locations of the words that are inside the corresponding ellipsoids. In this way, we can also take into account the distribution of words within each ellipsoid. In particular, for each word $w_j$, we first determine a set of words $w$ whose vector $p_w$ is inside these ellipsoids. Specifically, for each word $w$ that occurs at least once in the context of $w_j$, or is among the 10 closest neighbors in the vector space of such a word, we test whether $\phi(p_w) \cdot \tilde{p}_{w_j} < -b^1_j$, i.e. whether $w$ is in the outer ellipsoid for $w_j$. Let $M_{w_j}$ be the set of all words $w$ for which this is the case. We then represent $w_j$ as the Gaussian $G(.; \mu_{w_j}, C_{w_j})$, where $\mu_{w_j}$ and $C_{w_j}$ are estimated as the sample mean and covariance of the set $\{p_w \mid w \in M_{w_j}\}$. We also consider a variant in which each word $w$ from $M_{w_j}$ is weighted as follows. First, we determine the largest $k$ in $\{1, ..., n_j\}$ for which $\phi(p_w) \cdot \tilde{p}_{w_j} < -b^k_j$; note that since $w \in M_{w_j}$ such a $k$ exists. The weight $\lambda_w$ of $w$ is defined as the PPMI value that is associated with the set $W^j_k$. When using this weighted setting, the mean $\mu_{w_j}$ and covariance matrix $C_{w_j}$ are estimated as:

$$\mu_{w_j} = \frac{\sum_{w \in M_{w_j}} \lambda_w p_w}{\sum_{w \in M_{w_j}} \lambda_w} \qquad C_{w_j} = \frac{\sum_{w \in M_{w_j}} \lambda_w (p_w - \mu)(p_w - \mu)^T}{\sum_{w \in M_{w_j}} \lambda_w}$$

Note that the two proposed methods to estimate the Gaussian $G(.; \mu_{w_j}, C_{w_j})$ do not depend on the choice of kernel, hence they could also be applied in combination with a linear kernel. However, given the close relationship between Gaussians and ellipsoids, we can expect quadratic kernels to lead to higher-quality representations. This will be confirmed experimentally in Section 4.

Measuring similarity

To compute the similarity between $w$ and $w'$, based on the associated Gaussians, we consider two alternatives. First, following (Vilnis and McCallum, 2015), we consider the inner product, defined as follows:

$$E(w, w') = \int G(x; \mu_w, C_w) \, G(x; \mu_{w'}, C_{w'}) \, dx = G(0; \mu_w - \mu_{w'}, C_w + C_{w'})$$

The second alternative is the Jensen-Shannon divergence, given by:

$$JS(w, w') = KL(f_w \| f_{w'}) + KL(f_{w'} \| f_w)$$

with $f_w = G(.; \mu_w, C_w)$, $f_{w'} = G(.; \mu_{w'}, C_{w'})$, and $KL$ the Kullback-Leibler divergence. When computing the KL-divergence we add a small value $\delta$ to the diagonal elements of the covariance matrices, following (Vilnis and McCallum, 2015); we used 0.01. This is needed, as for rare words, the covariance matrix may otherwise be singular. Finally, to measure the degree to which $w$ entails $w'$, we use KL-divergence, again in accordance with (Vilnis and McCallum, 2015).
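Before turning to the experiments, here is a compact sketch of these similarity measures between Gaussian regions. It is an illustration under stated assumptions (names are ours; the closed-form Gaussian KL is standard, and the delta smoothing follows the value quoted above):

```python
import numpy as np
from scipy.stats import multivariate_normal

def inner_product(mu1, C1, mu2, C2):
    """E(w, w') = value of N(x; mu_w - mu_w', C_w + C_w') at x = 0."""
    return multivariate_normal.pdf(np.zeros_like(mu1), mean=mu1 - mu2, cov=C1 + C2)

def kl_gauss(mu1, C1, mu2, C2, delta=0.01):
    """Closed-form KL(N1 || N2), with the diagonal smoothing used in the text."""
    d = len(mu1)
    C1 = C1 + delta * np.eye(d)
    C2 = C2 + delta * np.eye(d)
    C2_inv = np.linalg.inv(C2)
    diff = mu2 - mu1
    _, logdet1 = np.linalg.slogdet(C1)  # log-determinants avoid overflow
    _, logdet2 = np.linalg.slogdet(C2)
    return 0.5 * (np.trace(C2_inv @ C1) + diff @ C2_inv @ diff - d
                  + logdet2 - logdet1)

def js(mu1, C1, mu2, C2):
    # symmetrised KL, matching the JS definition above (no halving)
    return kl_gauss(mu1, C1, mu2, C2) + kl_gauss(mu2, C2, mu1, C1)
```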
Experiments

In this section we evaluate both the vector and region representations produced by our model. In our experiments, we have used the Wikipedia dump from November 2nd, 2015, consisting of 1,335,766,618 tokens. We used a basic text preprocessing strategy, which involved removing punctuation, removing HTML/XML tags and lowercasing all tokens. We have removed words with fewer than 10 occurrences in the entire corpus. We used the Apache sentence segmentation tool 2 to detect sentence boundaries. In all our experiments, we have set the number of dimensions to 300, which was found to be a good choice in previous work, e.g. (Pennington et al., 2014). We use a context window of 10 words before and after the target word, but without crossing sentence boundaries. The number of iterations for SGD was set to 20. The results of all baseline models have been obtained using their publicly available implementations. We have used 10 negative samples in the word2vec code, which gave better results than the default value of 5. For the baseline models, we have used the default settings, apart from the D-GloVe model, for which no default values were provided by the authors. For D-GloVe, we have therefore tuned the parameters using the ranges discussed in (Jameel and Schockaert, 2016). Specifically, we have used the parameters that gave the best results on the Google Analogy Test Set (see below). As baselines we have used the following standard word embedding models: the Skip-gram (SG) and Continuous Bag-of-Words (CBOW) models 3, proposed in (Mikolov et al., 2013a), the GloVe model 4, proposed in (Pennington et al., 2014), and the D-GloVe model 5, proposed in (Jameel and Schockaert, 2016). We have also compared against the Gaussian word embedding model 6 from (Vilnis and McCallum, 2015), using the means of the Gaussians as vector representations, and the Gaussians themselves as region representations. As in (Vilnis and McCallum, 2015), we consider two variants: one with diagonal covariance matrices (Gauss-D) and one with spherical covariance matrices (Gauss-S). For our model, we will consider the following configurations:
Reg-li-cos: word vectors, obtained using a linear kernel, compared using cosine similarity;
Reg-li-eucl: word vectors, obtained using a linear kernel, compared using Euclidean distance;
Reg-qu-cos: word vectors, obtained using a quadratic kernel, compared using cosine similarity;
Reg-qu-eucl: word vectors, obtained using a quadratic kernel, compared using Euclidean distance;
Reg-li-prod: Gaussian word regions, obtained using a linear kernel, compared using the inner product E;
Reg-li-wprod: Gaussian word regions estimated using the weighted variant, obtained using a linear kernel, compared using the inner product E;
Reg-li-JS: Gaussian word regions, obtained using a linear kernel, compared using the Jensen-Shannon divergence;
Reg-li-wJS: Gaussian word regions estimated using the weighted variant, obtained using a linear kernel, compared using the Jensen-Shannon divergence.

Analogy Completion

Analogy completion is a standard evaluation task for word embeddings. Given a pair $(w_1, w_2)$ and a word $w_3$, the goal is to find the word $w_4$ such that $w_3$ and $w_4$ are related in the same way as $w_1$ and $w_2$. To solve this task, we predict the word $w_4$ which is most similar to $w_2 - w_1 + w_3$, either in terms of cosine similarity or Euclidean distance (a minimal sketch of this prediction step is given below). The evaluation metric is accuracy.
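The following sketch makes the prediction step concrete. One detail is an assumption on our part: excluding the three query words from the candidate set is common practice for this task, but the paper does not spell it out.

```python
import numpy as np

def solve_analogy(emb, w1, w2, w3, metric="cosine"):
    """Predict w4 as the word whose vector is most similar to w2 - w1 + w3.

    emb: dict mapping word -> NumPy vector."""
    target = emb[w2] - emb[w1] + emb[w3]
    best, best_score = None, -np.inf
    for w, v in emb.items():
        if w in (w1, w2, w3):   # assumed convention: skip the query words
            continue
        if metric == "cosine":
            score = v @ target / (np.linalg.norm(v) * np.linalg.norm(target))
        else:  # Euclidean: smaller distance is better, so negate it
            score = -np.linalg.norm(v - target)
        if score > best_score:
            best, best_score = w, score
    return best
```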
We use two popular benchmark data sets: the Google Analogy Test Set 7 and the Microsoft Research Syntactic Analogies Dataset 8. The former contains both semantic and syntactic relations, for which we show the results separately, respectively referred to as Gsem and Gsyn; the latter only contains syntactic relations and will be referred to as MSR. The results are shown in Table 1. Recall that the parameters of D-GloVe were tuned on the Google Analogy Test Set, hence the results reported for this model for Gsem and Gsyn might be slightly higher than what would normally be obtained. Note that for our model, we can only use word vectors for this task. We outperform SG and CBOW for Gsem and Gsyn but not for MSR, and we outperform GloVe and D-GloVe for Gsyn and MSR but not for Gsem. The vectors from the Gaussian embedding model are not competitive for this task. For our model, using Euclidean distance slightly outperforms using cosine. For GloVe, SG and CBOW, we only show results for cosine, as this led to the best results. For D-GloVe, we used the likelihood-based similarity measure proposed in the original paper, which was found to outperform both cosine and Euclidean distance for that model. For our model, the quadratic kernel leads to better results than the linear kernel, which is somewhat surprising since this task evaluates a kind of linear regularity. This suggests that the additional flexibility that results from the quadratic kernel leads to more faithful context word representations, which in turn improves the quality of the target word vectors.

Similarity Estimation

To evaluate our model's ability to measure similarity we use 12 standard evaluation sets 9, for which we will use the following abbreviations: S1: MTurk-287, S2: RG-65, S3: MC-30, S4: WS-353-REL, S5: WS-353-ALL, S6: RW-STANFORD, S7: YP-130, S8: SIMLEX-999, S9: VERB-143, S10: WS-353-SIM, S11: MTurk-771, S12: MEN-TR-3K. Each of these datasets contains similarity judgements for a number of word pairs. The task evaluates to what extent the similarity scores produced by a given word embedding model lead to the same ordering of the word pairs as the provided ground truth judgments. The evaluation metric is the Spearman ρ rank correlation coefficient. For this task, we can either use word vectors or word regions. The results are shown in Table 2. For our model, the best results are obtained when using word vectors and the Euclidean distance (Reg-qu-eucl), although the differences with the word regions (Reg-qu-wprod) are small. We use prod to refer to the configuration where similarity is estimated using the inner product, whereas we write JS for the configurations that use Jensen-Shannon divergence. Moreover, we use wprod and wJS to refer to the weighted variant for estimating the Gaussians. We can again observe that using a quadratic kernel leads to better results than using a linear kernel. As the weighted versions for estimating the Gaussians do not lead to a clear improvement, for the remainder of this paper we will only consider the unweighted variant. With the exception of S9, our model substantially outperforms the Gaussian word embedding model. Of the standard models, SG and D-GloVe obtain the strongest performance. Compared to our model, these baseline models achieve similar results for S2, S10, S11 and S12, worse results for S1, S3, S4, S5, S6 and better results for S7, S8 and S9. Two general trends can be observed. First, the data sets where our model performs better tend to be datasets which describe semantic relatedness rather than pure synonymy. Second, the standard models appear to perform better on data sets that contain verbs and adjectives, as opposed to nouns.

Modeling properties

In (Rubinstein et al., 2015), it was analysed to what extent word embeddings can be used to identify concepts that satisfy a given attribute. While good results were obtained for taxonomic properties, attributive properties such as 'dangerous', 'round', or 'blue' proved to be considerably more problematic. We may expect region-based models to perform well on this task, since each of these attributes then explicitly corresponds to a region in space.
To test this hypothesis, Table 3 shows the results for the same 7 taxonomic properties and 13 attributive properties as in (Rubinstein et al., 2015), where the positive and negative examples for all 20 properties are obtained from the McRae feature norms data (McRae et al., 2005). Following (Rubinstein et al., 2015), we use 5-fold cross-validation to train a binary SVM for each property, and compute the average F-score due to the unbalanced class label distribution. We separately present results for SVMs with a linear and a quadratic kernel. The results indeed support the hypothesis that region-based models are well-suited for this task, as both the Gaussian embedding model and our model outperform the standard word embedding models.

Table 2: Results for similarity estimation (Spearman ρ) on the evaluation sets S1-S12. Reg-li-* and Reg-qu-* are our models with a linear and quadratic kernel.

Hypernym Detection

For hypernym detection, we have used the following 5 benchmark data sets 10: H1 (Baroni et al., 2012), H2 (Baroni and Lenci, 2011), H3 (Kotlerman et al., 2010), H4 and H5 (Turney and Mohammad, 2015). Each of the data sets contains positive and negative examples, i.e. word pairs that are in a hypernym relation and word pairs that are not. Rather than treating this problem as a classification task, which would require selecting a threshold in addition to producing a score, we treat it as a ranking problem. In other words, we evaluate to what extent the word pairs that are in a valid hypernym relation are the ones that receive the highest scores. We use average precision as our evaluation metric. Apart from our model, the Gaussian embedding model is the only word embedding model that can by design support unsupervised hypernym detection. As an additional baseline, however, we also show how Skip-gram performs when using cosine similarity. While such a symmetric measure cannot faithfully model hypernymy, it was nonetheless found to be a strong baseline for hypernymy models (Vulić et al., 2016), due to the inherent difficulty of the task. We also compare with a number of standard bag-of-words based models for detecting hypernyms: WeedsPrec (Kotlerman et al., 2010), ClarkeDE (Clarke, 2009) and invCL (Lenci and Benotto, 2012). These latter models take as input the PPMI-weighted co-occurrence counts.

Table 4: Results for hypernym detection (AP). Reg-li-* and Reg-qu-* are our models with a linear and quadratic kernel.

The results are shown in Table 4, where Reg-li-KL and Reg-qu-KL refer to variants of our model in which Kullback-Leibler divergence is used to compare word regions. Surprisingly, both for our model and for the Gaussian embedding model, we find that using cosine similarity between the word vectors outperforms using the word regions with KL-divergence. In general, our model outperforms the Gaussian embedding model and the other baselines. Given the effectiveness of the cosine similarity, we have also experimented with the following metric:

$$hyp(w_1, w_2) = (1 - \cos(w_1, w_2)) \cdot KL(f_{w_1} \| f_{w_2})$$

The results are referred to as Reg-li-KLC and Reg-qu-KLC in Table 4. These results suggest that the word regions can indeed be useful for detecting hypernymy, when used in combination with cosine similarity.
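A minimal sketch of this combined score, reusing the closed-form Gaussian KL from the earlier sketch (kl_gauss), is given below; the function and parameter names are ours:

```python
import numpy as np

def hyp_score(v1, v2, region1, region2):
    """hyp(w1, w2) = (1 - cos(w1, w2)) * KL(f_w1 || f_w2).

    v1, v2: target word vectors; region1, region2: (mean, covariance)
    pairs describing the word regions."""
    cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return (1.0 - cos) * kl_gauss(region1[0], region1[1], region2[0], region2[1])
```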
Intuitively, for $w_2$ to be a hypernym of $w_1$, both words need to be similar and $w_2$ needs to be more general than $w_1$. While word regions are not needed for measuring similarity, they seem essential for modeling generality (in an unsupervised setting). The datasets considered so far all treat hypernymy as a binary notion. In (Vulić et al., 2016) an evaluation set was introduced which contains graded hypernym pairs. The underlying intuition is that e.g. cat and dog are more typical/natural hyponyms of animal than dinosaur or amoeba. The results for this data set are shown in Table 5. In this case, we use Spearman ρ as the evaluation metric, measuring how well the rankings induced by different models correlate with the ground truth. Following (Vulić et al., 2016), we separately mention results for nouns and verbs. In the case of nouns, our findings here are broadly in agreement with those from Table 4. Interestingly, for verbs we find that Skip-gram substantially outperforms the region based models, which is in accordance with our findings in the word similarity experiments.

Conclusions

We have proposed a new word embedding model, which is based on ordinal regression. The input to our model consists of a number of rankings, capturing how strongly each word is related to each context word in a purely ordinal way. Word vectors are then obtained by embedding these rankings in a low-dimensional vector space. Despite the fact that all quantitative information is disregarded by our model (except for constructing the rankings), it is competitive with standard methods such as Skip-gram, and in fact outperforms them in several tasks. An important advantage of our model is that it can be used to learn region representations for words, by using a quadratic kernel. Our experimental results suggest that these regions can be useful for modeling hypernymy.

Figure 1: The (dark) green region covers words that are (strongly) related to a. Similarly, the (dark) blue region expresses relatedness to b.

Table 1: Results for the analogy completion task (accuracy). Reg-li-* and Reg-qu-* are our models with a linear and quadratic kernel.

              Gsem  Gsyn  MSR
SG            71.5  64.2  68.6
CBOW          74.2  62.3  66.2
GloVe         80.2  58.0  50.3
D-GloVe       81.4  59.1  59.6
Gauss-D-cos   61.5  53.6  50.7
Gauss-D-eucl  61.5  53.6  50.7
Gauss-S-cos   61.2  53.2  49.8
Gauss-S-eucl  61.4  53.3  49.8
Reg-li-cos    77.8  62.4  62.6
Reg-li-eucl   77.9  62.6  62.6
Reg-qu-cos    78.6  65.7  63.5
Reg-qu-eucl   78.7  65.7  63.6

Table 3: Results for McRae feature norms (F1). Reg-li and Reg-qu are our models with a linear and quadratic kernel.

          Taxonomic      Attributive
          lin    quad    lin    quad
SG        0.781  0.784   0.365  0.378
CBOW      0.775  0.781   0.361  0.371
GloVe     0.785  0.786   0.364  0.377
D-GloVe   0.743  0.749   0.342  0.364
Gauss-D   0.787  0.789   0.406  0.414
Gauss-S   0.781  0.784   0.401  0.406
Reg-li    0.791  0.796   0.399  0.406
Reg-qu    0.795  0.799   0.411  0.421

Table 5: Results for HyperLex (Spearman ρ). Reg-li-* and Reg-qu-* are our models with a linear and quadratic kernel.

Model          All    Nouns  Verbs
WeedsPrec      0.166  0.153  0.201
ClarkeDE       0.165  0.151  0.189
invCL          0.168  0.154  0.198
SG             0.158  0.164  0.297
Gauss-D-KL     0.185  0.171  0.198
Gauss-S-KL     0.181  0.168  0.184
Gauss-D-Cos    0.179  0.158  0.161
Gauss-S-Cos    0.166  0.151  0.158
Gauss-D-KLC    0.191  0.177  0.199
Gauss-S-KLC    0.189  0.171  0.189
Reg-li-KL      0.181  0.165  0.179
Reg-qu-KL      0.188  0.169  0.191
Reg-li-Cos     0.184  0.168  0.181
Reg-qu-Cos     0.190  0.180  0.196
Reg-li-KLC     0.189  0.171  0.185
Reg-qu-KLC     0.208  0.188  0.201
1 While it may seem at first glance that this constraint is redundant, this is not actually the case; see (Chu and Keerthi, 2005) for a counterexample in a closely related framework.
2 https://opennlp.apache.org/documentation/1.5.3/manual/opennlp.html#tools.sentdetect
3 https://code.google.com/archive/p/word2vec/
4 https://nlp.stanford.edu/projects/glove/
5 https://github.com/bashthebuilder/pGlove
6 https://github.com/seomoz/word2gauss
7 https://nlp.stanford.edu/projects/glove/
8 http://research.microsoft.com/en-us/um/people/gzweig/Pubs/myz_naacl13_test_set.tgz
9 https://github.com/mfaruqui/eval-word-vectors
10 https://github.com/stephenroller/emnlp2016

Acknowledgments
This work was supported by ERC Starting Grant 637277. This work was performed using the computational facilities of the Advanced Research Computing@Cardiff (ARCCA) Division, Cardiff University. The authors would like to thank the anonymous reviewers for their insightful comments.

Marco Baroni, Raffaella Bernardi, Ngoc-Quynh Do, and Chung-chieh Shan. 2012. Entailment above the word level in distributional semantics. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 23-32.
Marco Baroni and Alessandro Lenci. 2011. How we blessed distributional semantic evaluation. In Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics, pages 1-10. Association for Computational Linguistics.
Wei Chu and S. Sathiya Keerthi. 2005. New approaches to support vector ordinal regression. In ICML, pages 145-152.
Daoud Clarke. 2009. Context-theoretic semantics for natural language: an overview. In Proceedings of the Workshop on Geometrical Models of Natural Language Semantics, pages 112-119.
Katrin Erk. 2009. Representing words as regions in vector space. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning, pages 57-65.
Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 462-471.
Andrea Frome, Gregory S. Corrado, Jonathon Shlens, Samy Bengio, Jeffrey Dean, Marc'Aurelio Ranzato, and Tomas Mikolov. 2013. DeViSE: A deep visual-semantic embedding model. In Proc. NIPS, pages 2121-2129.
Yoav Goldberg. 2016. A primer on neural network models for natural language processing. Journal of Artificial Intelligence Research 57:345-420.
Yoav Goldberg and Omer Levy. 2014. word2vec explained: Deriving Mikolov et al.'s negative-sampling word-embedding method. arXiv preprint arXiv:1402.3722.
Abhijeet Gupta, Gemma Boleda, Marco Baroni, and Sebastian Padó. 2015. Distributional vectors encode referential attributes. In Proc. EMNLP, pages 12-21.
Shoaib Jameel and Steven Schockaert. 2016. D-GloVe: A feasible least squares model for estimating word embedding densities. In Proceedings of the 26th International Conference on Computational Linguistics, pages 1849-1860.
Joo-Kyung Kim and Marie-Catherine de Marneffe. 2013. Deriving adjectival scales from continuous space word representations. In Proc. EMNLP, pages 1625-1630.
Lili Kotlerman, Ido Dagan, Idan Szpektor, and Maayan Zhitomirsky-Geffet. 2010. Directional distributional similarity for lexical inference. Natural Language Engineering 16:359-389.
Alessandro Lenci and Giulia Benotto. 2012. Identifying hypernyms in distributional semantic spaces. In Proceedings of *SEM, pages 75-79.
Omer Levy, Yoav Goldberg, and Israel Ramat-Gan. 2014. Linguistic regularities in sparse and explicit word representations. In Proc. CoNLL, pages 171-180.
Tie-Yan Liu. 2009. Learning to rank for information retrieval. Foundations and Trends in Information Retrieval 3:225-331.
Ken McRae, George S. Cree, Mark S. Seidenberg, and Chris McNorgan. 2005. Semantic feature production norms for a large set of living and nonliving things. Behavior Research Methods 37:547-559.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In International Conference on Learning Representations.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proceedings of the 27th Annual Conference on Neural Information Processing Systems, pages 3111-3119.
Neha Nayak. 2015. In learning hyperonyms over word embeddings. Student technical report.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proc. EMNLP, pages 1532-1543.
Sascha Rothe and Hinrich Schütze. 2016. Word embedding calculus in meaningful ultradense subspaces. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 512-517.
Dana Rubinstein, Effi Levi, Roy Schwartz, and Ari Rappoport. 2015. How well do distributional models capture different types of semantic knowledge? In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pages 726-730.
Steven Schockaert and Jae Hee Lee. 2015. Qualitative reasoning about directions in semantic spaces. In Proceedings of the International Joint Conference on Artificial Intelligence, pages 3207-3213.
Amnon Shashua and Anat Levin. 2002. Ranking with large margin principle: Two approaches. In NIPS, pages 937-944.
P. D. Turney and P. Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research 37:141-188.
Peter D. Turney and Saif M. Mohammad. 2015. Experiments with three approaches to recognizing lexical entailment.
Natural Language Engineering 21(03):437-476.
Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. 2016. Order-embeddings of images and language. In International Conference on Learning Representations.
Luke Vilnis and Andrew McCallum. 2015. Word representations via Gaussian embedding. In Proceedings of the International Conference on Learning Representations.
Ivan Vulić, Daniela Gerz, Douwe Kiela, Felix Hill, and Anna Korhonen. 2016. HyperLex: A large-scale evaluation of graded lexical entailment. arXiv.
Will Y. Zou, Richard Socher, Daniel M. Cer, and Christopher D. Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Proc. EMNLP, pages 1393-1398.
6,260,053
DKIE: Open Source Information Extraction for Danish
Danish is a major Scandinavian language spoken daily by around six million people. However, it lacks a unified, open set of NLP tools. This demonstration will introduce DKIE, an extensible open-source toolkit for processing Danish text. We implement an information extraction architecture for Danish within GATE, including integrated third-party tools. This implementation includes the creation of a substantial set of corpus annotations for data-intensive named entity recognition. The final application and dataset are made openly available, and the part-of-speech tagger and NER model also operate independently or with the Stanford NLP toolkit.
[ 7314668, 2571277, 17367372, 1452591, 10977241, 485850 ]
DKIE: Open Source Information Extraction for Danish. Leon Derczynski (University of Sheffield), Camilla Vilhelmsen Field (University of Southern Denmark), Kenneth S. Bøgh (Aarhus University). In Proceedings of the Demonstrations at the 14th Conference of the European Chapter of the Association for Computational Linguistics, Gothenburg, Sweden, April 26-30.

Danish is a major Scandinavian language spoken daily by around six million people. However, it lacks a unified, open set of NLP tools. This demonstration will introduce DKIE, an extensible open-source toolkit for processing Danish text. We implement an information extraction architecture for Danish within GATE, including integrated third-party tools. This implementation includes the creation of a substantial set of corpus annotations for data-intensive named entity recognition. The final application and dataset are made openly available, and the part-of-speech tagger and NER model also operate independently or with the Stanford NLP toolkit.

Introduction

Danish is primarily spoken in the northern hemisphere: in Denmark, on the Faroe Islands, and on Greenland. Having roots in Old Norse, Danish bears similarities to other Scandinavian languages, and shares features with English and German. Previous tools and language resources for Danish have suffered from license restrictions, or from using small or non-reusable datasets. As a result, it is often difficult to use Danish language technologies, if anything is available at all. In cases where quality tools are available, they often have disparate APIs and input/output formats, making integration time-consuming and prone to error. To remedy this, this paper presents an open-source information extraction toolkit for Danish, using the established and flexible GATE text processing platform (Cunningham et al., 2013). To this end, there are four main goals:
Adaptation: The application adapts to colloquial and formal Danish.
Interoperability: DKIE is internally consistent and adopts unified, well-grounded solutions to the problems of processing Danish. Where possible, DKIE re-uses existing components, and strives for compatibility with major text processing architectures.
Portability: It is preferable for developed components to be readily movable within the chosen architecture, GATE, and without, usable independently.
Openness: The resultant application, and the corpora and annotations developed in its creation, are as freely available as possible.
The remainder of this paper first discusses considerations specific to the language and prior work, then introduces the information extraction pipeline, followed by an evaluation of the tools provided.

Processing Danish

There are a few representational issues for Danish that are not solved in a unified fashion across existing technologies. DKIE builds upon major standards in general linguistic annotation and in Danish to unify these solutions. Danish is written using the Latin alphabet, with the addition of three vowels: æ, ø and å, which may be transliterated as ae, oe and aa respectively. It is similar to English in terms of capitalisation rules and character set. Over time, the orthography of Danish has shifted. Among other things, a spelling reform in 1948 removed the capitalisation of nouns, and introduced the three vowel characters to represent existing vowel digraphs.
There were also spelling shifts in this reform (e.g. kjærlighed to kærlighed). In addition, some towns and municipalities have changed the spelling of their names. For example, Denmark's second-largest city Aarhus changed its name to Århus with the 1948 reform, although Aalborg and Aabenraa did not. Later, in 2011, the city reverted from Århus to Aarhus. The city's university retained the Aarhus spelling throughout this period. The effect of these relatively recent changes is that there exist digitised texts using a variety of orthographies, not only to represent the same sound, as also in English, but also the same actual word. A language processing toolkit for Danish must exhibit sensitivity to these variations.

Figure 1: The ANNIE-based information extraction pipeline for Danish.

In addition, Danish has some word boundary considerations. Compound nouns are common (e.g. kvindehåndboldlandsholdet for "the women's national handball team"), as are hyphenated constructions (fugle-fotografering for "bird photography"), which are often treated as single tokens. Finally, abbreviations are common in Danish, and its acronyms can be difficult to disambiguate without the right context and language resource (e.g. OB for Odense Boldklub, a football club).

Background

The state of the art in Danish information extraction is not very interoperable or open compared to that for e.g. English. Previous work, while high-performance, is either not freely available (Bick, 2004) or domain-restricted. 1 This makes results difficult to reproduce (Fokkens et al., 2013), and leads to sub-optimal interoperability (Lee et al., 2010). Even recent books focusing on the topic are heavily licensed and difficult for the average academic to access. Further, prior tools are often in the form of discrete components, hard to extend or to integrate with other systems. Some good corpus resources are available, most recently the Copenhagen Dependency Treebank (CDT) (Buch-Kromann and Korzen, 2010), which built on and included previously-released corpora for Danish. This 200K-token corpus is taken from news articles and editorials, and includes document structure, tokenisation, lemma, part-of-speech and dependency relation information. The application demonstrated, DKIE, draws only on open corpus resources for annotation, and the annotations over these corpora are released openly. Further, the application is also made open-source, with each component having similar or better performance when compared with the state of the art.

Information Extraction Pipeline

This section details each step in the DKIE pipeline. A screenshot of the tool is shown in Figure 1.

Tokeniser

We adopt the PAROLE tokenisation scheme (Keson and Norling-Christensen, 1998). This makes different decisions from the Penn Treebank in some cases, concatenating particular expressions as single tokens. For example, the two-word phrase i alt, meaning in total, is converted to the single token i alt. A set list of these group formations is given in the Danish PAROLE guidelines (a minimal sketch of this grouping step is given below). Another key difference is in the treatment of quoted phrases and hyphenation. Phrases connected in this way are often treated as single tokens. For example, the phrase "Se og hør"-læserne (readers of "See and Hear", a magazine) is treated as a single token under this scheme.
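The sketch below illustrates this PAROLE-style group tokenisation. The group inventory here is a stand-in of our own; the real list comes from the PAROLE guidelines.

```python
# Hypothetical, partial entry list; the official inventory is in the
# Danish PAROLE guidelines.
GROUP_FORMATIONS = {("i", "alt")}

def group_tokens(tokens):
    """Merge adjacent tokens into PAROLE group tokens where listed."""
    out, i = [], 0
    while i < len(tokens):
        pair = tuple(t.lower() for t in tokens[i:i + 2])
        if len(pair) == 2 and pair in GROUP_FORMATIONS:
            out.append(tokens[i] + " " + tokens[i + 1])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

print(group_tokens("Der er i alt ti".split()))  # -> ['Der', 'er', 'i alt', 'ti']
```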
Part-of-Speech Tagger
We use a machine-learning based tagger (Toutanova et al., 2003) for Danish part-of-speech labelling. The original PAROLE scheme introduces a set of around 120 tags, many of which are used only rarely. The scheme comprises tags built up of up to nine features. These features are used to describe information such as case, degree, gender, number, possessivity, reflexivity, mood, tense and so on (Keson and Norling-Christensen, 1998). The PAROLE data includes morphological encoding in tags. We separate this data out in our corpus, adding morphological features distinct from part-of-speech data. This data may then be used by later work to train a morphological analyser, or by other tools that rely on morphological information. We combine PAROLE annotations with the reduced tagset employed by the Danish Dependency Treebank (DDT) (Kromann, 2003), which has 25 tags.
We adapted the tagger to Danish by including internal automatic mapping of æ, ø and å to two-letter digraphs when both training and labelling, by adding extra sets of features for handling words and adjusting our unknown word threshold to compensate for the small corpus (as in Derczynski et al. (2013)), and by specifying the closed-class tags for this set and language. We also prefer a CRF-based classifier in order to get better whole-sequence accuracy, providing greater opportunities for later-stage tools such as dependency parsers to accurately process more of the corpus. Results are given in Table 1, comparing token- and sentence-level accuracy to other work using the DDT and the TnT tagger (Brants, 2000). State-of-the-art performance is achieved, with whole-sentence tagging accuracy comparable to that of leading English taggers.
[Table 1: Part-of-speech labelling accuracy in DKIE.]

Gazetteers
High-precision entity recognition can be achieved with gazetteer-based named entity recognition. This is a low-cost way of quickly getting decent performance out of existing toolkits. We include two special kinds of gazetteer for Danish. Firstly, it is important to annotate the names of entities specific to Denmark (e.g. Danish towns). Secondly, entities outside of Denmark sometimes have different names specific to the Danish language (e.g. Lissabon for Lisboa / Lisbon). As well as a standard strict-matching gazetteer, we include a "fuzzy" gazetteer specific to Danish that tolerates vowel orthography variation and the other changes introduced in the 1948 spelling reform. For locations, we extracted the names of Danish towns from DBpedia and a local gazetteer, and from Wikipedia the Danish-language versions of the world's 1,000 most populous cities. For organisations, we used Wikipedia cross-language links to map the international organisations deemed notable in Wikipedia to their Danish translation and acronym (e.g. the United Nations is referred to as FN). The major Danish political parties were also added to this gazetteer. For person names, we built lists of notable people, 2 and also populated GATE's first and last name lists with common choices in Denmark.

Temporal Expression Annotation
We include temporal annotation for Danish in this pipeline, making DKIE the first temporal annotation tool for Danish. We follow the TimeML temporal annotation standard (Pustejovsky et al., 2004), completing just the TIMEX3 part. Danish is interesting in that it permits flexible temporal anchors outside of reference time (Reichenbach, 1947) and the default structure of a calendar. For example, while in English one may use numbers to express a distance in days (two days from now) or into a month (the second of March), Danish permits these offsets from any agreed time. As a result, it is common to see expressions of the form 2. juledag, which in this case is the second Christmas day and refers to 26th December. For this pipeline, we use finite state transducers to define how Danish timexes may be recognised. We then use the general-purpose TIMEN (Llorens et al., 2012) timex normalisation tool to provide calendar or TIMEX3 values for these expressions. Example rules are shown in Figure 2.
[Figure 2: Example normalisation rules in TIMEN. "DCT" refers to the document creation time.]
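As a rough illustration (ours, not DKIE's actual FST rules or TIMEN configuration), recognition and normalisation of the juledag pattern might look like the following; anchoring "N. juledag" to 24 December plus N days is our assumption about the convention:

```python
import re
from datetime import date

# Rough sketch (ours): recognise "N. juledag" timexes and produce
# ISO calendar values, naively anchored to the DCT's year.
JULEDAG = re.compile(r"\b(\d+)\.\s*juledag\b", re.IGNORECASE)

def normalise(text, dct):
    """Return (matched text, ISO value) pairs; dct = document creation time."""
    values = []
    for m in JULEDAG.finditer(text):
        n = int(m.group(1))
        # Assumes 1. juledag = 25 Dec, so N. juledag = 24 Dec + N days.
        values.append((m.group(0), date(dct.year, 12, 24 + n).isoformat()))
    return values

print(normalise("Vi ses 2. juledag", dct=date(2014, 4, 26)))
# [('2. juledag', '2014-12-26')]
```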
Named Entities
In addition to gazetteers, we present a machine learning-based approach to entity recognition and classification in Danish. We annotated the Copenhagen Dependency Treebank for person, location and organisation entities, according to the ACE guidelines (or as close as possible). This led to a total of 100,000 extra tokens annotated for NEs in Danish, doubling the previously-available amount. We used three annotators, achieving inter-annotator agreement of 0.89 on the first 100,000 tokens; annotation is an ongoing effort. The data was used to learn a model tuned to Danish with an existing NER tool (Finkel et al., 2005). We removed word-shape conjunction features from the default configuration in an effort to reduce sensitivities introduced by the group noun tokenisation issue. This model, and the Stanford NER tool, were then wrapped as a GATE processing resource, contributing general-purpose Danish NER to the toolkit.

Conclusion
We will demonstrate a modern, interoperable, open-source NLP toolkit for information extraction in Danish. The released resources are: a GATE pipeline for Danish; tools for temporal expression recognition and normalisation for Danish; part-of-speech and named entity recognition models for Danish, which also work in the Stanford NLP architecture; and named entity corpus annotations over the Copenhagen Dependency Treebank.

1 E.g. CST's non-commercial-only anonymisation tool, at http://cst.dk/online/navnegenkender/
2 See https://en.wikipedia.org/wiki/List_of_Danes, minus musicians due to stage names.

Acknowledgments
This work was supported by EU funding under grant FP7-ICT-2013-10-611233, Pheme, and grant agreement No. 296322, AnnoMarket. We are grateful to Anders Søgaard of Copenhagen University for comments on an earlier draft and kind help with gazetteers. The first author would also like to thank Aarhus University for their kind provision of research facilities.

References
E. Bick. 2004. A named entity recognizer for Danish. In Proceedings of LREC.
T. Brants. 2000. TnT: a statistical part-of-speech tagger. In Proceedings of the Sixth Conference on Applied Natural Language Processing, pages 224-231. ACL.
M. Buch-Kromann and I. Korzen. 2010. The unified annotation of syntax and discourse in the Copenhagen Dependency Treebanks. In Proceedings of the Fourth Linguistic Annotation Workshop, pages 127-131. ACL.
H. Cunningham, V. Tablan, A. Roberts, and K. Bontcheva. 2013. Getting More Out of Biomedical Documents with GATE's Full Lifecycle Open Source Text Analytics. PLoS Computational Biology, 9(2):e1002854.
L. Derczynski, A. Ritter, S. Clark, and K. Bontcheva. 2013. Twitter Part-of-Speech Tagging for All: Overcoming Sparse and Noisy Data. In Proceedings of Recent Advances in Natural Language Processing. Association for Computational Linguistics.
J. R. Finkel, T. Grenager, and C. Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 363-370. ACL.
A. Fokkens, M. van Erp, M. Postma, T. Pedersen, P. Vossen, and N. Freire. 2013. Offspring from reproduction problems: What replication failure teaches us. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1691-1701. Association for Computational Linguistics.
B. Keson and O. Norling-Christensen. 1998. PAROLE-DK. The Danish Society for Language and Literature.
M. T. Kromann. 2003. The Danish Dependency Treebank and the DTAG treebank tool. In Proceedings of the Second Workshop on Treebanks and Linguistic Theories, page 217.
K. Lee, L. Romary, et al. 2010. Towards interoperability of ISO standards for Language Resource Management. In Proceedings of ICGL 2010.
H. Llorens, L. Derczynski, R. J. Gaizauskas, and E. Saquete. 2012. TIMEN: An Open Temporal Expression Normalisation Resource. In LREC, pages 3044-3051.
J. Pustejovsky, B. Ingria, R. Sauri, J. Castano, J. Littman, and R. Gaizauskas. 2004. The Specification Language TimeML. In The Language of Time: A Reader, pages 545-557. Oxford University Press.
H. Reichenbach. 1947. The tenses of verbs. In Elements of Symbolic Logic. Macmillan.
K. Toutanova, D. Klein, C. D. Manning, and Y. Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics, pages 173-180. ACL.
Character-Based Models for Adversarial Phone Number Extraction: Preventing Human Sex Trafficking
Nathanael Chambers (nchamber@usna.edu), Timothy Forman, Catherine Griswold, Yogaish Khastgir, Kevin Lu, and Stephen Steckler. Department of Computer Science, United States Naval Academy.
Proceedings of the 2019 EMNLP Workshop W-NUT: The 5th Workshop on Noisy User-generated Text, Hong Kong, November 4, 2019

Illicit activity on the Web often uses noisy text to obscure information between client and seller, such as the seller's phone number. This presents an interesting challenge to language understanding systems; how do we model adversarial noise in a text extraction system? This paper addresses the sex trafficking domain, and proposes some of the first neural network architectures to learn and extract phone numbers from noisy text. We create a new adversarial advertisement dataset, propose several RNN-based models to solve the problem, and most notably propose a visual character language model to interpret unseen unicode characters. We train a CRF jointly with a CNN to improve number recognition by 89% over just a CRF. Through data augmentation in this unique model, we present the first results on characters never seen in training.

Introduction
One reason people intentionally obscure textual content is to evade automatic extraction systems. There are good reasons for wanting to do this, privacy being at the forefront. However, illicit activity is another reason, and human sex trafficking is one of the most egregious uses. We draw inspiration from this domain, but extracting information from adversarial noisy text is a more general challenge for the NLP community. It is a language understanding task that humans can easily do, but which presents difficulty for automated methods. This paper presents the first deep learning models for adversarial phone number extraction, and releases new datasets for future experimentation. An obscured example number is shown here:
(9I4) Too.46-callme-ÖÖ1/4
The true phone number is 914-246-0014, but this example breaks even the most comprehensive rule-based extractors. It contains examples of visual substitution (I for 1 and unicode for 0), word substitution ("Too" for 2), and character confounders (separators '.', '-', '/' and other words). Any one challenge might be solvable in isolation, but they often combine together:
n1ne0one 7n1ne3 n1ne351
Rather than swapping letters for digits (I for 1), this example swaps digits for letters (1 for i), which are also part of a word swap ('nine' for 9). There are four '1' characters in the string, but only one of them maps to one of the two 1 digits in the number 901-793-9351. Beyond this, the most challenging noise occurs when unicode is injected, thus rendering finite character models ineffective since they've never seen these characters in training. This paper proposes to model all of this noise with several neural network architectures.
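To make the difficulty concrete, a quick illustration (ours, not the TJBatchExtractor discussed below): even a permissive digit pattern that tolerates punctuation between digits finds nothing once letters, words, and unicode stand in for digits.

```python
import re

# Quick illustration (ours): a permissive 10-digit pattern that allows
# punctuation between digits still fails on the obscured example,
# because letters and unicode stand in for several of the digits.
PHONE = re.compile(r"(?:\d[\W_]*){10}")

ad = "(9I4) Too.46-callme-ÖÖ1/4"   # obscures 914-246-0014
print(PHONE.search(ad))             # None: 'I', 'Too' and 'ÖÖ' break the run
print(re.findall(r"\d", ad))        # ['9','4','4','6','1','4']: 6 of 10 digits
```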
The domain of focus for our study is human sex trafficking, although our proposed models apply to any domain with obscured information (social media, for instance, often mixes unusual characters, confounding normal language models). This topic is important in terms of global need, but it also has attractive language properties for research. Since our datasets come from people who need to post contact information, they can't obscure the text too much, or nobody could call them. This results in an interesting cognitive challenge that humans can solve, but with which state-of-the-art extraction struggles. The main contributions of this paper are (1) the first neural models for noisy phone number extraction, (2) a visual language model over images of characters, (3) a combined CRF with CNN input, (4) a data augmentation technique for training that helps recognize unseen unicode, and (5) state-of-the-art extraction results on new datasets.

Previous Work
A number of papers have looked into the sex trafficking domain. Some focus on classifying entire ads as trafficking or not (Alvari et al., 2016, 2017), while others build knowledge graphs of mentioned entities (Szekely et al., 2015) or focus on normalizing attributes like geolocations (Kapoor et al., 2017). Most of these use phone numbers as features, and several found them to be among the most important input (Dubrawski et al., 2015; Nagpal et al., 2017; Li et al., 2018). In fact, phone numbers are used as gold truth to connect similar ads or link traffickers (Rabbany et al., 2018; Li et al., 2018). Phone numbers have also been shown to be some of the most stable links to entities (Costin et al., 2013), so they are important for entity linking tasks. Almost all of these threads assume correct phone extraction and ignore the difficulty of ads with obscured numbers. Although sometimes unspecified, they all appear to use rule-based extractors. Most relevant to this paper is TJBatchExtractor, a rule-based regular expression system (Dubrawski et al., 2015) which is still state-of-the-art for extraction, and is used by other work on trafficking identification (Nagpal et al., 2017). We employ TJBatchExtractor to identify the ads with obscured text from which it fails to extract a number. Our paper thus focuses only on the difficult ads with noisy phone numbers.
Most language models use words or characters as their base inputs. One of our contributions is a visual model of characters. We use an image database of 65k unicode characters developed by BBVA Next Security Lab 1 for phishing prevention. Most similar is Liu et al. (2017), who use CNNs for Asian-language classification. They aren't addressing noise like our paper, but rather the semantics inherent to their visual characters. Finally, we employ data augmentation (Ding et al., 2016; Xu et al., 2016) during training of our visual character model. This is commonly used in the vision community (Salamon and Bello, 2017; Zhong et al., 2017), and we adopt the overall idea of randomly perturbing our character images to learn a robust character recognizer.
1 https://github.com/next-security-lab

Data and Attributes
Noisy and Obscured Data
We begin by highlighting the main methods people use for adversarial noise in written text. This is not an exhaustive list, but it covers the vast majority of cases observed in this paper's datasets.
1. Digits as Lexemes. The most basic approach to obscuring numbers is to substitute lexemes (words) for digits. These are often easy to identify, and regular expressions with a dictionary are usually sufficient for detection. Words might be capitalized (FOUR) or camel case (foUr), such as in the text "threeoh2FOUR070six22".
2. Homophones. This method replaces digits with homophones or near-rhymes, thereby confusing dictionary approaches, as in "337 9twennyfo 06juan9". Tokens "twenny" and "juan" share phonological similarities with the digit pronunciation. Regular expressions cannot capture these without complex phoneme modeling.
3. Letters as Digits. This method substitutes ASCII letters for their digit lookalikes (e.g., 6I5 093 93B6). The 'I' and 'B' represent 1 and 8 respectively. These substitutions can grow more complicated with things like '()' for 0, and what was popularized as leetspeak in the 1980s with 'E' for '3' and other such inversions.
4. Visual Deception and Unicode. This is a variant of 'Letters as Digits' above, but goes beyond ASCII substitution to use Unicode characters. Unicode presents a huge challenge to extraction as these rely entirely on visual similarities in the character images. Some unicode options that resemble the ASCII character '8' include Ȣ and ȣ. A rule-based approach would have to manually map all possible characters to their digits, an impossible task for the 138k current unicode characters (with future room for 1 million). This would also fail on the larger problem of capturing visually ambiguous close-matches. For instance, an emoticon smiley face can be used for the ASCII letter 'o': (4 2) 456 9412. We are the first to our knowledge to model visual noise with a language model architecture.
5. Confounding Separators. Another common noise tactic is to insert arbitrary characters as separators, for example: -270**1tree&&822==31-. The noise in this obscured text is meant to confuse a pattern matcher as to when a digit's substring begins and ends. Other difficult versions of this method use digit characters themselves as the separators: 111 410 111 897 111 3245 111.
6. Human Reasoning. The most difficult class of obscured text is that which requires reasoning to solve, for instance, including arithmetic (3+1) or instructions to invert digits. This type is a small minority of obscured phone numbers, but it proves most challenging.
Some of these challenges have rule-based solutions in isolation, but combined together, they overlap and build on each other for an exponential number of noisy combinations. This paper addresses all of these challenges except for homophones and human reasoning. We leave phoneme modeling to future work, and reasoning requires a different approach than discriminative classifiers. The most significant challenge this paper addresses is that of the visual deceptions (letters as digits, unicode, and visual similarity). We propose the first neural model for visual similarity detection with a unique visual model based on a CNN. A toy substitution-based normaliser below shows how far simple rules go, and where they stop.
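The following toy sketch (ours, not the TJBatchExtractor) is a dictionary-based normaliser covering noise types 1 and 3; the substitution table is illustrative, and every unicode lookalike it does not list silently defeats it:

```python
# Toy sketch (ours): greedy longest-match substitution handles
# 'Digits as Lexemes' and 'Letters as Digits', but any unlisted
# unicode lookalike falls through as separator noise.
SUBS = {"three": "3", "tree": "3", "four": "4", "fore": "4", "five": "5",
        "seven": "7", "eight": "8", "nine": "9", "zero": "0", "one": "1",
        "two": "2", "too": "2", "six": "6", "svn": "7", "ate": "8",
        "oh": "0", "i": "1", "l": "1", "o": "0", "b": "8"}
BY_LENGTH = sorted(SUBS, key=len, reverse=True)

def normalise_digits(text: str) -> str:
    t, out, i = text.lower(), [], 0
    while i < len(t):
        for word in BY_LENGTH:
            if t.startswith(word, i):
                out.append(SUBS[word])
                i += len(word)
                break
        else:
            if t[i].isdigit():
                out.append(t[i])
            i += 1  # anything unrecognised is treated as a separator
    return "".join(out)

print(normalise_digits("6I5 093 93B6"))          # '6150939386'
print(normalise_digits("threeoh2FOUR070six22"))  # '3024070622'
```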
Corpora
Real-World Noisy Advertisements
Our initial corpus started from a 250k advertisement crawl of Backpage and Craigslist escort sections, shared with us by the Global Emancipation Network. The majority of these ads (180k) are one line with a standard phone number and no actual text. We filtered these out to focus on ads with written descriptions. After removing one-liners, we ran the state-of-the-art extractor (Dubrawski et al., 2015) to identify all ads where the extractor failed to extract anything. This remaining subset contains ads that either don't have a phone number, or contain an obscured number that fooled the rule-based extractor. Figure 1 shows one such explicit ad. Undergraduate volunteers manually inspected the remaining ads, removed those without numbers, and identified minimal text spans that encompassed any obscured phone numbers. These annotations resulted in approximately 200 real-world obscured ads with their obscured text spans.
[Figure 1: An example advertisement from the escort section of Backpage; phone and username changed for anonymity. Ad for phone 555-584-4630: "Sexy Slim 555 Ready for fun let me 584 satisfy your 4630 every desire no disappointments..!! **IF YOUR NOT SERIOUS PLEASE DON'T CALL ME..!! Kik Me-censored ****CAR PLAY ONLY****". This ad illustrates an obscured number with normal digits, but text interspersed in between.]
Desiring a larger test set for evaluation, we created an adversarial data collection environment for undergrads to try to "beat" the TJBatchExtractor. This small-scale collection resulted in about 200 more obscured phone examples. Merging the crawl with these adversarial obscured numbers, we had 390 real-world examples. We split these into 250 test numbers and 140 for development (dev). The dev set was used for model improvement and parameter tuning, and the test set only for final results. Two examples from the dev set are given here:
Gold Phone: 3189481720; Ad Text: tree1ate nein 48-one7 twenty
Gold Phone: 4177015067; Ad Text: 4!7 70! fifty6svn
Due to the nature of this domain, training data is difficult to obtain, so neural models are stymied. We instead chose to "fake" the training data, creating our own computer-based adversarial dataset. Though the training data is artificial, all experiments use the above real-world data annotations.

Artificial Noisy Adversarial Data
A core research question is now whether artificial training data can train this real-world task. This section describes our approach. The generation algorithm starts with a 10-digit number string (randomly selected 2), and then transforms the string with a sequence of obfuscation operations. Space prevents a full description of this process and its details, but we will release the code upon publication. Example transformations are as follows:
1. Insert separator ASCII chars between digits.
2. Replace a digit with an ASCII lookalike.
3. Replace a digit with its English, Spanish, or homonym word (2 to 'two').
4. Capitalize letters or replace with an ASCII lookalike (C to '(').
5. Replace two digits with their English word ('18' to 'eighteen').
6. Insert random English words as separators.
These occur in sequence, each with random chance, so the original digit '2' might become 'too' which then becomes 'To0' after character conversion. The output of this process is arguably more difficult than many real-world examples. See Figure 2 for generated examples. We ultimately trained on 100k of these; a condensed sketch of the generator follows.
[Figure 2: Examples from the artificial phone number training set, e.g. 1tree\6-zero0###33\˜15, 778cinco7five688 PaRtyGiRL, 6 *forejuan*for 55!826ate 5, 1290 si&te4˜˜˜˜˜135 ate0 5, ***2 08-88 8nine.]
2 We used a US area code dictionary, and followed the constraint that the 4th digit must be [2-9] whereas the 5th to 10th digits are [0-9]. Numbers were then chosen randomly.
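The condensed sketch below (ours; the authors' released generator is more elaborate) applies a random subset of these transformations to a digit string:

```python
import random

# Condensed sketch (ours) of the obfuscation generator: each digit may
# become a word or an ASCII lookalike, with separator noise injected.
LOOKALIKE = {"0": "O", "1": "I", "3": "E", "5": "S", "8": "B"}
WORDS = {"0": "zero", "1": "one", "2": "too", "3": "tree", "4": "fore",
         "5": "cinco", "6": "six", "7": "svn", "8": "ate", "9": "nine"}
SEPARATORS = "*#-~!."

def obscure(number: str, rng: random.Random) -> str:
    out = []
    for d in number:
        r = rng.random()
        if r < 0.25:
            d = WORDS[d]                       # digit -> word ('2' -> 'too')
        elif r < 0.45 and d in LOOKALIKE:
            d = LOOKALIKE[d]                   # digit -> ASCII lookalike
        out.append(d)
        if rng.random() < 0.3:                 # confounding separators
            out.append(rng.choice(SEPARATORS) * rng.randint(1, 3))
    return "".join(out)

rng = random.Random(0)
print(obscure("9142460014", rng))  # prints one randomly obscured variant
```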
Models for Obscured Extraction
Baseline Models
We use two baselines: one from prior work and another with a basic RNN model.
Rule-Based Baseline
The state-of-the-art for phone number extraction is the TJBatchExtractor from Dubrawski et al. (2015). This is a large set of regular expressions designed to capture phone numbers even with variation and noise, mostly focused on what we've named "Digits as Lexemes" and "Letters as Digits". Their previous results showed 99% extraction accuracy; however, we found that 72% of ads are one line with just unobscured digits, so their result masks a more challenging subset.
RNN Baseline
Our baseline neural architecture is a character-based bi-directional LSTM. Input is a 70-character span of obscured text, and each character is mapped to its embedding vector. The embeddings are randomly initialized and learned during training. Each embedding is fed into the biLSTM, and the final hidden state of the biLSTM is treated as the representation of the obscured text. The hidden state is then passed to 10 independent dense layers, one for each of the 10 digits in the phone number. A softmax is then used on the output of each dense layer to predict the digit in that position of the 10-digit phone number. We also tested GRUs instead of LSTMs, but performance did not significantly change.

Obscured Models
RNN with Positional Attention
The RNN baseline transforms the input text into a single vector from the biLSTM, and then predicts the digits of the phone number from this vector. We found that the model quickly learns to predict the first and last digits, but learning for the middle digits is hindered. This intuitively makes sense because the vector represents the entire text without directed guidance on identifying where in the text the digits exist. How does the final dense layer know where the 4th and 5th digits begin? The initial digit, in contrast, is easier to identify because it leads the string. Our solution to this deficiency was to add positional attention to the LSTM. Instead of using its final LSTM state, the vector is a weighted sum of all hidden states. The weight vector $\alpha$ is the learned positional attention. Formally, the $i$th digit in the 10-digit phone number is predicted by a dense layer over context vector input $W_i$:

$$W_i = \sum_{j=0}^{N} \alpha_{ij} V_j \quad (1)$$

where $N$ is the length of the LSTM, $V_j$ is the $j$th LSTM hidden state, $i$ is the $i$th digit in the phone number, and $\alpha_i$ is the $i$th digit's positional attention vector. This allows the network to learn which part of the text is relevant for each digit. The first digit in the number should learn a weight vector $\alpha_0$ that weights the front of the LSTM more than the end, and vice versa for $\alpha_9$. Figure 3 shows this model.
[Figure 3: LSTM with position attention. Dotted lines included with conditioned prediction (Sec 5.2.2).]
We experimented with individual attention (each digit $i$ has its own learned $\alpha_i$) and a single shared attention (all digits use the same learned $\alpha$). We report only on individual attention since it outperformed shared attention. We also tested multiple stacked LSTM layers; stacking showed no further improvement.
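A minimal sketch (ours) of the attention computation of Equation 1, with random stand-ins for the learned quantities:

```python
import numpy as np

# Minimal sketch (ours) of Equation 1: each digit position i keeps its
# own weight vector alpha_i over the N LSTM states, and its context
# vector W_i is the weighted sum of those states.
N, hidden, digits = 70, 200, 10
V = np.random.randn(N, hidden)             # stand-in for biLSTM states
alpha = np.random.rand(digits, N)          # in the model, alpha is learned
alpha /= alpha.sum(axis=1, keepdims=True)  # normalised attention weights

W = alpha @ V                       # W[i] = sum_j alpha[i, j] * V[j]
assert W.shape == (digits, hidden)  # one context vector per digit;
# a dense layer + softmax over W[i] then predicts digit i.
```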
RNN with Conditioned Prediction
One characteristic of our task is that each digit prediction is mostly independent of the previous digit. Unlike many domains in NLP, this is not a sequence modeling problem where knowing the previous digit semantically assists in guessing the next. For instance, a 5 is not more likely to be followed by a 4. 3 Despite position attention, the model still had difficulty distinguishing which portion of the context vector was relevant to a middle digit. It sometimes repeated an inner digit because the 4th and 5th positions were too close together in the obscured text. Observe these two examples:
41093four 2830
4109threeefour tooo830
The seventh digit is a 2, but it starts five characters later in the second string. We observed repeated digit predictions like 4109344830: the model would predict the same digit twice, and then skip over the next due to the shifting positions. Our quick solution to avoiding repeats was to pass the predictions forward. We added a simple conditional dependency that feeds the softmax output of the previous digit to the current digit. The dotted lines in Figure 3 illustrate this new link. This removed many of our repeated digits, and also increased accuracy on other examples that weren't even repeats but just mistakes.
3 There are exceptions, and phone numbers do have some constraints, such as having a limited set of 3 leading digits. However, the remaining 7 digits are mostly random in the US.

Conditional Random Field Model
Given that providing the previous digit prediction showed slight improvements on the development set, we wanted to formalize the sequence predictions with proper transition probabilities. If a digit prediction leads to an unlikely next prediction (according to the model), then perhaps the previous digit should switch to its second most likely value in order to maximize the joint prediction.
The other RNN problem is that input varies in length and noise. Some input is only about digits: 4treeTOO564ateSVN33. Others contain varying complex separators: -4**tree**TOO sms 564ate+SVN+33. RNNs must learn to ignore separators in ways that don't confuse the subsequent dense layers. The network is remarkably adept at this, but we hypothesized that a better model should make a prediction on each and every input character rather than merging all of them into the same hidden state.
Conditional Random Fields (Lafferty et al., 2001) are a natural way of modeling the above. A CRF tags each character as it goes, and performs both training and inference, using Viterbi search to find the most likely output prediction sequence. Figure 4 shows this model.
[Figure 4: Neural architecture with a CRF top layer.]
We used the CRF implementation in Keras inspired by Huang et al. (2015) to overlay a CRF on top of the RNN-based models (see also Ma and Hovy (2016)). The output of a CRF is different since it must output a label for every character (rather than just 10 phone digits). We use the standard CRF labels to mark the beginning (B) and included (I) characters. This means that instead of a single label for each possible phone digit (e.g., 8), we now have two labels which represent a character that begins a digit (B8) and a character in the middle or end of a digit (I8). We additionally use an Other label 'O' for the noisy separator characters that aren't part of any digit's substring. The following is an example:
Labels: B2 I2 I2 B4 B7 O  B6 I6 I6 B9 B9
Chars:  T  O  O  4  7  -  s  i  x  9  9
The mapping from CRF labels (B2, I2, I2) to actual digits (2) is deterministic. Evaluation metrics for the previous RNNs also apply to the CRF output after it is mapped. However, training for the CRF is done entirely on the CRF label loss.
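The deterministic label-to-digit decode is simple; a sketch (ours):

```python
# Minimal sketch (ours): decode BIO-style digit labels back into a
# number. Each B-label starts a new digit, I-labels continue it, and
# 'O' marks separator noise.
def decode(labels):
    return "".join(lab[1] for lab in labels if lab.startswith("B"))

labels = ["B2", "I2", "I2", "B4", "B7", "O", "B6", "I6", "I6", "B9", "B9"]
print(decode(labels))  # '247699', the digits behind 'TOO47-six99'
```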
Visual Characters with CNNs
As with most NLP tasks, out-of-vocabulary (OOV) input is an issue. Our adversarial task is even more severe because visual substitutions are intentional, and often OOV, as there are 138k current unicode options. If a character is unseen in training, only context can help guess the digit. For example, the digits 410 might appear with an ASCII lookalike as 41o, or with a unicode lookalike in place of the final character. Why are these easy for humans to decipher? It's purely due to visual similarity. In a "normal" NLP neural model, each character (or token) is mapped to an embedding, so unseen characters have no representation. We might use the default approach of mapping all unknowns to a shared 'UNK' embedding, but this loses the distinct visual characteristics of each character.
All of this motivates our new Visual-Based Character RNN. Our model does not learn a dictionary of character embeddings, but instead uses a series of CNN layers that transform 34x34 images of the characters. The transformations then feed into the models above. This is now a model that can interpret characters unseen in training. Figure 5 shows the CNN combined with our positional attention RNN.
[Figure 5: CNN architecture for visual image input to the LSTM model.]
We use two 3x3 convolution layers with 4 and 8 filters respectively. Each layer is followed by a ReLU layer and a batch normalization layer (not shown in the figure). The convolutions are followed by a max pooling layer and then flattened. A dense layer with softmax then reduces the flattened vector. We experimented with up to 3 convolution layers, up to 32 filters, and varied sizes of the dense layer.
Visual input changes the model significantly. It is no longer learning an NLP-style character embedding, but rather learning CNN parameters to transform an image input into that embedding. Our first models ran into problems because they simply memorized each 34x34 image. Since all ASCII '3' characters map to the same flattened representation, the model memorizes it, and unicode variations fail no matter how similar they are. We thus introduced data augmentation during training. Each 34x34 input is 'jiggled' with random transformations: (1) translation of the image up/down or right/left, (2) darkening/lightening of the image, (3) stretching or widening, and (4) rotation of up to 20 degrees. This provided different inputs for the same ASCII characters, so the CNN was encouraged to learn key visual features across all variants. Data augmentation led to our most significant improvements on unseen unicode character input.
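A minimal sketch of this augmentation (ours; the paper does not give implementation details, and the stretch transform is omitted here):

```python
import numpy as np
from scipy.ndimage import rotate, shift

# Minimal sketch (ours) of the glyph augmentation: randomly translate,
# rotate, and darken/lighten a 34x34 character image so the CNN cannot
# simply memorise one bitmap per character.
def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    out = shift(img, shift=rng.uniform(-2, 2, size=2), order=1)  # translate
    out = rotate(out, angle=rng.uniform(-20, 20),                # rotate up
                 reshape=False, order=1)                         # to 20 deg
    out = np.clip(out * rng.uniform(0.7, 1.3), 0.0, 1.0)         # brightness
    return out

rng = np.random.default_rng(0)
glyph = np.zeros((34, 34)); glyph[5:29, 15:19] = 1.0   # a crude '1' bitmap
batch = np.stack([augment(glyph, rng) for _ in range(8)])  # jiggled variants
```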
Real-world Challenge Test: To further illustrate the challenge of noisy text, we enhanced the realworld test set with unicode injections. Using a hand-created character lookup of visually similar unicode characters, we replaced 10% of the characters with randomly chosen unicode lookalikes not in the training data. This results in a very challenging test set to further benchmark the models. Finally, all results in the next section are the average of 4 train/test runs of the same model. For CNN results, Table 2 shows test set performance. Adding just the CNNs does not improve recognition, but in fact are slightly worse. However, more compelling is the challenge set with injected unicode confounders. Recall the importance of data augmentation during training so the models learn real visual features. These "+aug" results show why it is needed with a 89% relative improvement in perfect phone accuracy (from 17.6% to 33.3%). The non-CNN LSTM and CRF struggle at 17-22%. They simply cannnot represent unseen characters. Results Our new CRF model (no CNN) outperforms the RNNs on the test set by 10% absolute. When comparing challenge test performance, the best CRF-CNN outperforms the best non-CNN LSTM by 11% absolute. To further illustrate the effect of unicode confounders, we varied how much we injected and graphed performance in Figure 3. The CNN models consistently outperform. Full Ad Extraction We wrapup with a pilot for full ad extraction. The models presented so far extract from one span of text (it assumes a phone number exists). This for- mulation is a well-defined task for research, but we also propose how one might apply these extractors to the more difficult task of full document extraction when the location of the phone number is unknown. We briefly describe initial tests. The most straightforward way to extract from a document is to split it into text windows (spans) and try all possible extractions. Since these are probabilistic models, we can compute P (phone|span), and find the window span that maximizes the probability. best = max span P (phone|span) (2) P (phone|span) = 9 i=0 maxjP (di = j|span)(3) The phone number extracted from the best span is the phone number in the text. We collected a set of real advertisements that don't have phone numbers, and artificially inserted an obscured number from our artificial dataset. This allows us to track which span contains the phone number, and then evaluate an extractor. The difficulty with this task is that our models are trained on precise text spans, whereas this full document dataset contains lots of non-phonerelated text. To address this difference, we stopped padding our snippet input with null values (up to the length of the RNN), and instead pad with randomly selected text snippets from real ads. The models are exactly the same, we just change how padding works when the training text is shorter than length 70. We refer to this as the "ad pad". Datum: 6I5 093 93B6 Null-Pad: 6I5 093 93B6 Ad-Pad: 6I5 093 93B6always in town call To be clear, no models were changed, just how training input is padded. Can the models identify the correct text span that contains a phone number? Table 4 shows these results for standard Table 4: Results of choosing text spans with the full phone number, or a partial match. Partial matches contained on average 7-8 of the 10 digits. null-padding versus ad-padding, as well as crossdomain tests. We trained on Craigslist and Backpage separately, then tested on only Backpage ads. 
We collected a set of real advertisements that don't have phone numbers, and artificially inserted an obscured number from our artificial dataset. This allows us to track which span contains the phone number, and then evaluate an extractor. The difficulty with this task is that our models are trained on precise text spans, whereas this full-document dataset contains lots of non-phone-related text. To address this difference, we stopped padding our snippet input with null values (up to the length of the RNN), and instead pad with randomly selected text snippets from real ads. The models are exactly the same; we just change how padding works when the training text is shorter than length 70. We refer to this as the "ad pad".
Datum: 6I5 093 93B6
Null-Pad: 6I5 093 93B6
Ad-Pad: 6I5 093 93B6always in town call
To be clear, no models were changed, just how training input is padded. Can the models identify the correct text span that contains a phone number? Table 4 shows these results for standard null-padding versus ad-padding, as well as cross-domain tests. We trained on Craigslist and Backpage separately, then tested on only Backpage ads.
[Table 4: Results of choosing text spans with the full phone number, or a partial match. Partial matches contained on average 7-8 of the 10 digits.]
Window identification works very well as long as training padded its input with real ad text. This is encouraging in that these models can reliably identify where a phone number is present. Finally, we tested how well the models extract from these spans after identifying them. Extraction showed 80% accuracy on full numbers, compared to 98% when training and testing only on artificial phone snippets. We attribute the drop to the more difficult task: window spans contain more noise than a precise text span. Future work will focus on this full-document task with real-world numbers.

Discussion
This is the first work to model noisy phone number extraction with neural models. Most notably, our CNNs explore how to use the visual characteristics of characters, rather than standard NLP-style models with trained embeddings. To the best of our knowledge, this is the first proposal for a visual language model in an extraction architecture. We showed results on new challenge datasets with injected unicode. These results illustrate the challenge for extractors, but also the usefulness of CNN recognizers. In fact, current rule-based extractors cannot extract any of the numbers in our test sets. Our CRF outperformed an LSTM-only model by 10% absolute, and data augmentation improved on unicode tests by a relative 89% gain. Possible future work could investigate a Generative Adversarial Network (GAN) (Goodfellow et al., 2014). GANs have become popular in vision tasks, but the normal GAN setup requires training data to start from, and this sparse domain prohibits its straightforward use. Data from this work's training and evaluation are available online, 4 and we hope this spurs further work on this important societal challenge.
4 www.usna.edu/Users/cs/nchamber/data/phone/

Acknowledgments
This work would not be possible without the help of the Global Emancipation Network. Many thanks also to Jeff Kosseff for bringing this issue to our attention in the first place. We recognize and appreciate the support of the DoD HPC Modernization Office for enhancing our undergraduate education and research. Finally, thanks to Rebecca Hwa for helpful conversations early on in this work.
References
Hamidreza Alvari, Paulo Shakarian, and J. E. Kelly Snyder. 2017. Semi-supervised learning for detecting human trafficking.
Hamidreza Alvari, Paulo Shakarian, and J. E. Kelly Snyder. 2016. A non-parametric learning approach to identify online human trafficking. In IEEE Conference on Intelligence and Security Informatics (ISI).
Andrei Costin, Jelena Isacenkova, Marco Balduzzi, Aurélien Francillon, and Davide Balzarotti. 2013. The role of phone numbers in understanding cybercrime schemes. In 2013 Eleventh Annual Conference on Privacy, Security and Trust, pages 213-220. IEEE.
Jun Ding, Bo Chen, Hongwei Liu, and Mengyuan Huang. 2016. Convolutional neural network with data augmentation for SAR target recognition. IEEE Geoscience and Remote Sensing Letters, 13(3):364-368.
Artur Dubrawski, Kyle Miller, Matthew Barnes, Benedikt Boecking, and Emily Kennedy. 2015. Leveraging publicly available data to discern patterns of human-trafficking activity. Journal of Human Trafficking, 1.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672-2680.
Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991.
Rahul Kapoor, Mayank Kejriwal, and Pedro Szekely. 2017. Using contexts and constraints for improved geotagging of human trafficking webpages. In Proceedings of the Fourth International ACM Workshop on Managing and Mining Enriched Geo-Spatial Data.
Mayank Kejriwal, Jiayuan Ding, Runqi Shao, Anoop Kumar, and Pedro Szekely. 2017. FlagIt: A system for minimally supervised human trafficking indicator mining.
Mayank Kejriwal and Pedro Szekely. 2017. Information extraction in illicit web domains. In WWW.
John Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data.
Lin Li, Olga Simek, Angela Lai, Matthew P. Daggett, Charlie K. Dagli, and Cara Jones. 2018. Detection and characterization of human trafficking networks using unsupervised scalable text template matching. In IEEE International Conference on Big Data (Big Data).
Frederick Liu, Han Lu, Chieh Lo, and Graham Neubig. 2017. Learning character-level compositionality with visual features. In ACL.
Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. arXiv preprint arXiv:1603.01354.
C. Nagpal, K. Miller, B. Boecking, and A. Dubrawski. 2017. An entity resolution approach to isolate instances of human trafficking online.
Reihaneh Rabbany, David Bayani, and Artur Dubrawski. 2018. Active search of connections for case building and combating human trafficking. In KDD.
Justin Salamon and Juan Pablo Bello. 2017. Deep convolutional neural networks and data augmentation for environmental sound classification. IEEE Signal Processing Letters, 24(3):279-283.
Pedro Szekely, Craig Knoblock, Jason Slepickz, Andrew Philpot, et al. 2015. Building and using a knowledge graph to combat human trafficking. In International Conference on Semantic Web (ICSW).
Yan Xu, Ran Jia, Lili Mou, Ge Li, Yunchuan Chen, Yangyang Lu, and Zhi Jin. 2016. Improved relation classification by deep recurrent neural networks with data augmentation. arXiv preprint arXiv:1601.03651.
Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, and Yi Yang. 2017. Random erasing data augmentation. arXiv preprint arXiv:1708.04896.