sentences

sequence | labels
---|---
[
"Several recent studies have shown that strong natural language understanding (NLU) models are prone to relying on unwanted dataset biases without learning the underlying task, resulting in models that fail to generalize to out-of-domain datasets and are likely to perform poorly in real-world scenarios.",
"We propose two learning strategies to train neural models, which are more robust to such biases and transfer better to out-of-domain datasets.",
"The biases are specified in terms of one or more bias-only models , which learn to leverage the dataset biases.",
"During training, the bias-only models' predictions are used to adjust the loss of the base model to reduce its reliance on biases by down-weighting the biased examples and focusing training on the hard examples.",
"We experiment on large-scale natural language inference and fact verification benchmarks, evaluating on out-of-domain datasets that are specifically designed to assess the robustness of models against known biases in the training data.",
"Results show that our debiasing methods greatly improve robustness in all settings and better transfer to other textual entailment datasets.",
"Our code and data are publicly available in https: //github.com/rabeehk/robust-nli .",
"Recent neural models (Devlin et al., 2019; Radford et al., 2018; Chen et al., 2017) have achieved high and even near human-performance on several large-scale natural language understanding benchmarks.",
"However, it has been demonstrated that neural models tend to rely on existing idiosyncratic biases in the datasets, and leverage superficial correlations between the label and existing shortcuts in the training dataset to perform surprisingly well, 1 without learning the underlying task (Kaushik and Lipton, 2018; Gururangan et al., 2018; Poliak et al., 2018; Schuster et al., 2019; 1 We use biases, heuristics or shortcuts interchangeably. McCoy et al., 2019b).",
"For instance, natural language inference (NLI) is supposed to test the ability of a model to determine whether a hypothesis sentence ( There is no teacher in the room ) can be inferred from a premise sentence ( Kids work at computers with a teacher's help ) (Dagan et al., 2006).",
"2 However, recent work has demonstrated that large-scale NLI benchmarks contain annotation artifacts; certain words in the hypothesis that are highly indicative of inference class and allow models that do not consider the premise to perform unexpectedly well (Poliak et al., 2018; Gururangan et al., 2018).",
"As an example, in some NLI benchmarks, negation words such as nobody, no, and not in the hypothesis are often highly correlated with the contradiction label.",
"As a result of the existence of such biases, models exploiting statistical shortcuts during training often perform poorly on out-of-domain datasets, especially if the datasets are carefully designed to limit the spurious cues.",
"To allow proper evaluation, recent studies have tried to create new evaluation datasets that do not contain such biases (Gururangan et al., 2018; Schuster et al., 2019; McCoy et al., 2019b).",
"Unfortunately, it is hard to avoid spurious statistical cues in the construction of large-scale benchmarks, and collecting new datasets is costly (Sharma et al., 2018).",
"It is, therefore, crucial to develop techniques to reduce the reliance on biases during the training of the neural models.",
"We propose two end-to-end debiasing techniques that can be used when the existing bias patterns are identified.",
"These methods work by adjusting the cross-entropy loss to reduce the biases learned from the training dataset, down-weighting the biased examples so that the model focuses on learning the hard examples.",
"Figure 1 illustrates an example of applying our strategy to prevent an NLI model from predicting the labels using existing biases in the hypotheses, where the bias-only model only sees the hypothesis.",
"Our strat-2 The given sentences are in the contradictory relation, and the hypothesis cannot be inferred from the premise.",
"egy involves adding this bias-only branch f B on top of the base model f M during training.",
"We then compute the combination of the two models f C in a way that motivates the base model to learn different strategies than the ones used by the bias-only branch f B .",
"At the end of the training, we remove the bias-only classifier and use the predictions of the base model.",
"In our first proposed method, Product of Experts, the training loss is computed on an ensemble of the base model and the bias-only model, which reduces the base model's loss for the examples that the bias-only model classifies correctly.",
"For the second method, Debiased Focal Loss, the bias-only predictions are used to directly weight the loss of the base model, explicitly modulating the loss depending on the accuracy of the bias-only model.",
"We also extend these methods to be robust against multiple sources of bias by training multiple bias-only models.",
"Our approaches are simple and highly effective.",
"They require training only a simple model on top of the base model.",
"They are model agnostic and general enough to be applicable for addressing common biases seen in many datasets in different domains.",
"We evaluate our models on challenging benchmarks in textual entailment and fact verification, including HANS (Heuristic Analysis for NLI Systems) (McCoy et al., 2019b), hard NLI sets (Gururangan et al., 2018) of Stanford Natural Language Inference (SNLI) (Bowman et al., 2015) and MultiNLI (MNLI) (Williams et al., 2018), and FEVER Symmetric test set (Schuster et al., 2019).",
"The selected datasets are highly challenging and have been carefully designed to be unbiased to allow proper evaluation of the out-of-domain performance of the models.",
"We additionally construct hard MNLI datasets from MNLI development sets to facilitate the out-of-domain evaluation on this dataset.",
"3 We show that including our strategies on training baseline models, including BERT (Devlin et al., 2019), provides a substantial gain on out-of-domain performance in all the experiments.",
"In summary, we make the following contributions:",
"1) Proposing two debiasing strategies to train neural models robust to dataset bias.",
"2) An empirical evaluation of the methods on two large-scale NLI datasets and a fact verification benchmark; obtaining a substantial gain on their challenging out-of-domain data, including 7.4 points on HANS, 4.8 points on SNLI hard set, and 9.8 points on FEVER symmetric test set, setting a new state-of-the-art.",
"3) Proposing debiasing strategies capable of combating multiple sources of bias.",
"4) Evaluating the transfer performance of the debiased models on 12 NLI datasets and demonstrating improved transfer to other NLI benchmarks.",
"To facilitate future work, we release our datasets and code.",
"To address dataset biases, researchers have proposed to augment datasets by balancing the existing cues (Schuster et al., 2019) or to create an adversarial dataset (Jia and Liang, 2017).",
"However, collecting new datasets, especially at a large scale, is costly, and thus remains an unsatisfactory solution.",
"It is, therefore, crucial to develop strategies to allow models to be trained on the existing biased datasets.",
"3 Removing the need to submit to an online evaluation system for MNLI hard test sets.",
"Schuster et al. (2019) propose to first compute the n-grams in the dataset's claims that are the most associated with each fact-verification label.",
"They then solve an optimization problem to assign a balancing weight to each training sample to alleviate the biases.",
"In contrast, we propose several end-to-end debiasing strategies.",
"Additionally, Belinkov et al. (2019a) propose adversarial techniques to remove from the NLI sentence encoder the features that allow a hypothesis-only model to succeed.",
"However, we believe that in general, the features used by the hypothesis-only model can include some information necessary to perform the NLI task, and removing such information from the sentence representation can hurt the performance of the full model.",
"Their approach consequently degrades the performance on the hard SNLI set, which is expected to be less biased.",
"In contrast, we propose to train a bias-only model to use its predictions to dynamically adapt the classification loss to reduce the importance of the most biased examples.",
"Concurrently to our work, Clark et al. (2019) and He et al. (2019) have also proposed to use the product of experts (PoE) models for avoiding biases.",
"They train their models in two stages, first training a bias-only model and then using it to train a robust model.",
"In contrast, our methods are trained in an end-to-end manner, which is convenient in practice.",
"We additionally show that our proposed Debiased Focal Loss model is an effective method to reduce biases, sometimes superior to PoE.",
"We have evaluated on new domains of NLI hard sets and fact verification.",
"Moreover, we have included an analysis showing that our debiased models indeed have lower correlations with the bias-only models, and have extended our methods to guard against multiple bias patterns simultaneously.",
"We furthermore study transfer performance to other NLI datasets.",
"Problem formulation We consider a general multi-class classification problem.",
"Given a dataset D = { x i ,y i } Ni =1 consisting of the input data x i X , and labels y i Y , the goal of the base model is to learn a mapping f M parameterized by M that computes the predictions over the label space given the input data, shown as f M : X R |Y| .",
"Our goal is to optimize M parameters such that we build a model that is more resistant to benchmark dataset biases, to improve its robustness to domain changes where the biases typically observed in the training data do not exist in the evaluation dataset.",
"The key idea of our approach, depicted in Figure 1, is first to identify the dataset biases that the base model is susceptible to relying on, and define a bias-only model to capture them.",
"We then propose two strategies to incorporate this bias-only knowledge into the training of the base model to make it robust against the biases.",
"After training, we remove the bias-only model and use the predictions of the base model.",
"We assume that we do not have access to any data from the out-of-domain dataset, so we need to know a priori about the possible types of shortcuts we would like the base model to avoid relying on.",
"Once these patterns are identified, we train a bias-only model designed to capture the identified shortcuts that only uses biased features .",
"For instance, a hypothesis-only model in the large-scale NLI datasets can correctly classify the majority of samples using annotation artifacts (Poliak et al., 2018; Gururangan et al., 2018).",
"Motivated by this work, our bias-only model for NLI only uses hypothesis sentences.",
"Note that the bias-only model can, in general, have any form, and is not limited to models using only a part of the input data.",
"For instance, on the HANS dataset, our bias-only model makes use of syntactic heuristics and similarity features (see Section 4.3).",
"Let x bi X b be biased features of x i that are predictive of y i .",
"We then formalize this bias-only model as a mapping f B : X b R |Y| , parameterized by B and trained using cross-entropy (CE) loss LB : LB ( B )= 1 NN (cid:88) i =1 log( ( f y i B ( x bi ; B ))) , (1) where f jB ( x bi , B ) is the j th element of f B ( . ) , and ( u j )= e u j / (cid:80) |Y| k =1 e u k is the softmax function.",
"We propose two strategies to incorporate the bias-only f B knowledge into the training of the base model f M .",
"In our strategies, the predictions of the bias-only model are combined with either the predictions of the base model or its error, to down-weight the loss for the examples that the bias-only model can predict correctly.",
"We then update parameters of the base model M based on this modified loss LC .",
"Our learning strategies are end-to-end.",
"Therefore, to prevent the base model from learning the biases, the bias-only loss LB is not back-propagated to any shared parameters of the base model, such as a shared sentence encoder.",
"Our first approach is based on the product of experts (PoE) method (Hinton, 2002).",
"Here, we use this method to combine the bias-only and base model's predictions by computing the element-wise product (cid:12) between their predictions as ( f B ( x bi )) (cid:12) ( f M ( x i )) .",
"We compute this combination in the logarithmic space, making it appropriate for the normalized exponential below: f C ( x i , x bi )=log( ( f B ( x bi )))+log( ( f M ( x i ))) , The key intuition behind this model is to combine the probability distributions of the bias-only and the base model to allow them to make predictions based on different characteristics of the input; the bias-only branch covers prediction based on biases, and the base model focuses on learning the actual task.",
"Then the base model parameters M are trained using the cross-entropy loss LC of the combined classifier f C : LC ( M ; B )= 1 NN (cid:88) i =1 log( ( f y i C ( x i , x bi ))) .",
"the updates for examples that it can accurately predict.",
"Justification: Probability of label y i for the example x i in the PoE model is computed as: ( f y i C ( x i , x bi ))= ( f y i B ( x bi )) ( f y i M ( x i )) (cid:80) |Y| k =1 ( f kB ( x bi )) ( f kM ( x i )) Then the gradient of cross-entropy loss of the combined classifier (2) w.r.t M is (Hinton, 2002): MLC ( M ; B )= 1 NN (cid:88) i =1 |Y| (cid:88) k =1 (cid:20) (cid:16) y i k ( f kC ( x i , x bi )) (cid:17) M log( ( f kM ( x i ))) (cid:21) , where y i k is 1 when k = y i and 0 otherwise.",
"Generally, the closer the ensemble's prediction ( f kC ( . )) is to the target y i k , the more the gradient is decreased through the modulating term, which only happens when the bias-only and base models are both capturing biases.",
"In the extreme case, when the bias-only model correctly classifies the sample, ( f y i C ( x i , x bi )) = 1 and therefore MLC ( M ; B ) = 0 , the biased examples are ignored during training.",
"Conversely, when the example is fully unbiased, the bias-only classifier predicts the uniform distribution over all labels ( f kB ( x bi )) = 1 |Y| for k Y , therefore ( f y i C ( x i , x bi )) = ( f y i M ( x i )) and the gradient of ensemble classifier remains the same as the CE loss.",
"Focal loss was originally proposed in Lin et al. (2017) to improve a single classifier by down-weighting the well-classified points.",
"We propose a novel variant of this loss that leverages the bias-only branch's predictions to reduce the relative importance of the most biased examples and allows the model to focus on learning the hard examples.",
"We define Debiased Focal Loss (DFL) as: LC ( M ; B )= (3) 1 NN (cid:88) i =1 (cid:16) 1 ( f y i B ( x bi )) (cid:17) log( ( f y i M ( x i ))) where is the focusing parameter, which impacts the down-weighting rate.",
"When is set to 0, DFL is equivalent to the cross-entropy loss.",
"For > 0 , as the value of is increased, the effect of down-weighting is increased.",
"We set =2 through all experiments, which works well in practice, and avoid fine-tuning it further.",
"We note the properties of this loss: (1) When the example x i is unbiased, and the bias-only branch does not do well, ( f y i B ( x bi )) is small, therefore the scaling factor is close to 1 , and the loss remains unaffected.",
"(2) As the sample is more biased and ( f y i B ( x bi )) is closer to 1, the modulating factor approaches 0 and the loss for the most biased examples is down-weighted.",
"We compare our models to RUBi (Cadene et al., 2019), a recently proposed model to alleviate unimodal biases learned by Visual Question Answering (VQA) models.",
"Cadene et al. (2019)'s study is limited to VQA datasets.",
"We, however, evaluate the effectiveness of their formulation on multiple challenging NLU benchmarks.",
"RUBi consists in first applying a sigmoid function to the bias-only model's predictions to obtain a mask containing an importance weight between 0 and 1 for each label.",
"It then computes the element-wise product between the obtained mask and the base model's predictions: f C ( x i , x bi )= f M ( x i ) (cid:12) ( f B ( x bi )) , The main intuition is to dynamically adjust the predictions of the base model to prevent it from leveraging the shortcuts.",
"Then the parameters of the base model M are updated by back-propagating the cross-entropy loss LC of the combined classifier.",
"Neural models can, in practice, be prone to multiple types of biases in the datasets.",
"We, therefore, propose methods for combining several bias-only models.",
"To avoid learning relations between biased features, we do not consider training a classifier on top of their concatenation.",
"Instead, let { x b j i } Kj =1 be different sets of biased features of x i that are predictive of y i , and let f B j be an individual bias-only model capturing x b j i .",
"Next, we extend our debiasing strategies to handle multiple bias patterns.",
"Method 1: Joint Product of Experts We extend our proposed PoE model to multiple bias-only models by computing the element-wise product between the predictions of bias-only models and the base model as: ( f B 1 ( x b 1 i )) (cid:12)(cid:12) ( f BK ( x b K i )) (cid:12) ( f M ( x i )) , computed in the logarithmic space: f C ( x i , { x b j i } Kj =1 )= K (cid:88) j =1 log( ( f B j ( x b j i ))) +log( ( f M ( x i ))) .",
"Then the base model parameters M are trained using the cross-entropy loss of the combined classifier f C .",
"Method 2: Joint Debiased Focal Loss To extend DFL to handle multiple bias patterns, we first compute the element-wise average of the predictions of the multiple bias-only models: f B ( { x b j i } Kj =1 ) = 1 K (cid:80) Kj =1 f B j ( x b j i ) , and then compute the DFL (3) using the computed joint bias-only model.",
"We provide experiments on a fact verification (FEVER) and two large-scale NLI datasets (SNLI and MNLI).",
"We evaluate the models' performance on recently-proposed challenging unbiased evaluation sets.",
"We use the BERT (Devlin et al., 2019) implementation of Wolf et al. (2019) as our main baseline, known to work well for these tasks.",
"In all the experiments, we use the default hyperparameters of the baselines.",
"Dataset: The FEVER dataset contains claim-evidence pairs generated from Wikipedia.",
"Schuster et al. (2019) collected a new evaluation set for the FEVER dataset to avoid the idiosyncrasies observed in the claims of this benchmark.",
"They made the original claim-evidence pairs of the FEVER evaluation dataset symmetric, by augmenting them and making each claim and evidence appear with each label.",
"Therefore, by balancing the artifacts, relying on statistical cues in claims to classify samples is equivalent to a random guess.",
"The collected dataset is challenging, and the performance of the models relying on biases evaluated on this dataset drops significantly.",
"Base models: We consider BERT as the base model, which works the best on this dataset (Schuster et al., 2019), and predicts the relations based on the concatenation of the claim and the evidence with a delimiter token (see Appendix A).",
"Results: Table 1 shows the results.",
"Our proposed debiasing methods, PoE and DFL, are highly effective, boosting the performance of the baseline by 9.8 and 7.5 points respectively, significantly surpassing the prior work of Schuster et al. (2019).",
"Datasets: We evaluate on hard datasets of SNLI and MNLI (Gururangan et al., 2018), which are the splits of these datasets where a hypothesis-only model cannot correctly predict the labels.",
"Gururangan et al. (2018) show that the success of the recent textual entailment models is attributed to the biased examples, and the performance of these models is substantially lower on the hard sets.",
"Base models: We consider BERT and InferSent (Conneau et al., 2017) as our base models.",
"We choose InferSent to be able to compare with the prior work of Belinkov et al. (2019b).",
"Bias-only model: The bias-only model predicts the labels using the hypothesis (Appendix B).",
"Results on SNLI: Table 2 shows the SNLI results.",
"With InferSent, DFL and PoE result in 4.1 and 4.8 points gain.",
"With BERT, DFL and PoE improve the results by 2.5 and 1.6 absolute points.",
"Compared to the prior work of Belinkov et al. (2019b) (AdvCls), our PoE model obtains a 7.4 points gain, setting a new state-of-the-art.",
"Results on MNLI: We construct hard sets from the validation sets of MNLI Matched and Mismatched (MNLI-M).",
"Following Gururangan et al. (2018), we train a fastText classifier (Joulin et al., 2017) that predicts the labels using only the hypothesis and consider the subset on which it fails as hard examples.",
"We report the results on MNLI mismatched sets in Table 3 (see Appendix B for similar results on MNLI matched).",
"With BERT, DFL and PoE obtain 1.4 and 1.7 points gain on the hard development set, while with InferSent, they improve the results by 2.5 and 2.6 points.",
"To comply with limited access to the MNLI submission system, we evaluate only the best result of the baselines and our models on the test sets.",
"Our PoE model improves the performance on the hard test set by 1.1 points while retaining in-domain accuracy.",
"Dataset: McCoy et al. (2019b) show that NLI models trained on MNLI can adopt superficial syntactic heuristics.",
"They introduce HANS, consisting of several examples on which the syntactic heuristics fail.",
"Base model: We use BERT as our base model and train it on the MNLI dataset.",
"Bias-only model: We consider the following features for the bias-only model.",
"The first four features are based on the syntactic heuristics proposed in McCoy et al. (2019b):",
"1) Whether all words in the hypothesis are included in the premise;",
"2) If the hypothesis is the contiguous subsequence of the premise;",
"3) If the hypothesis is a subtree in the premise's parse tree;",
"4) The number of tokens shared between premise and hypothesis normalized by the number of tokens in the premise.",
"We additionally include some similarity features:",
"5) The cosine similarity between premise and hypothesis's pooled token representations from BERT followed by min, mean, and max-pooling.",
"We consider the same weight for contradiction and neutral labels in the bias-only loss to allow the model to recognize entailment from not-entailment.",
"During the evaluation, we map the neutral and contradiction labels to not-entailment.",
"Results: McCoy et al. (2019a) observe large variability in the linguistic generalization of neural models.",
"We, therefore, report the averaged results across 4 runs with the standard deviation in Table 4.",
"PoE and DFL obtain 4.4 and 7.4 points gain (see Appendix C for accuracy on individual heuristics of HANS).",
"We compare our results with the concurrent work of Clark et al., who propose a PoE model similar to ours, which gets similar results.",
"The main difference is that our models are trained end-to-end, which is convenient in practice, while Clark et",
"al.'s method requires two steps, first training a bias-only model and then using this pre-trained model to train a robust model.",
"The Reweight baseline in Clark et al. is a special case of our DFL with =1 and performs similarly to our DFL method (using default =2 ).",
"Their Learned-Mixin+H method requires hyperparameter tuning.",
"Since the assumption is not having access to any out-of-domain test data, and there is no available dev set for HANS, it is challenging to perform hyper-parameter tuning.",
"Clark et al. follow prior work (Grand and Belinkov, 2019; Ramakrishnan et al., 2018) and perform model section on the test set.",
"To provide a fair comparison, we consequently also tuned in DFL by sweeping over { 0 .",
"5 , 1 , 2 , 3 , 4 } .",
"DFL (cid:68) is the selected model, with = 3 .",
"With this hyperparameter tuning, DFL is even more effective, and our best result performs 2.8 points better than Clark et al. (2019).",
"To evaluate combating multiple bias patterns, we jointly debias a base model on the hypothesis artifacts and syntactic biases.",
"Results: Table 5 shows the results.",
"Models trained to be robust to hypothesis biases ( (cid:168) ) do not generalize to HANS.",
"On the other hand, models trained to be robust on HANS ( (cid:170) ) use a powerful bias-only model resulting in a slight improvement on MNLI mismatched hard dev set.",
"We expect a slight degradation when debiasing for both biases since models need to select samples accommodating both debiasing needs.",
"The jointly debiased models successfully obtain improvements on both datasets, which are close to the improvements on each dataset by the individually debiased models.",
"To evaluate how well the baseline and proposed models generalize to solving textual entailment in domains that do not share the same annotation biases as the large NLI training sets, we take trained NLI models and test them on several NLI datasets.",
"Datasets: We consider a total of 12 different NLI datasets.",
"We use the 11 datasets studied by Poliak et al. (2018).",
"These datasets include MNLI, SNLI, SciTail (Khot et al., 2018), AddOneRTE (ADD1) (Pavlick and Callison-Burch, 2016), Johns Hopkins Ordinal Commonsense Inference (JOCI) (Zhang et al., 2017), Multiple Premise Entailment (MPE) (Lai et al., 2017), Sentences Involving Compositional Knowledge (SICK) (Marelli et al., 2014), and three datasets from White et al. (2017) which are automatically generated from existing datasets for other NLP tasks including: Semantic Proto-Roles (SPR) (Reisinger et al., 2015), Definite Pronoun Resolution (DPR) (Rahman and Ng, 2012), FrameNet Plus (FN+) (Pavlick et al., 2015), and the GLUE benchmark's diagnostic test (Wang et al., 2019).",
"We additionally consider the Quora Question Pairs (QQP) dataset, where the task is to determine whether two given questions are semantically matching (duplicate) or not.",
"As in Gong et al. (2017), we interpret duplicate question pairs as an entailment relation and neutral otherwise.",
"We use the same split ratio mentioned by Wang et al. (2017).",
"Since the datasets considered have different label spaces, when evaluating on each target dataset, we map the model's labels to the corresponding target dataset's space.",
"See Appendix D for more details.",
"We strictly refrained from using any out-of-domain data when evaluating on the unbiased split of the same benchmark in Section 4.",
"However, as shown by prior work (Belinkov et al., 2019a), since different NLI target datasets contain different amounts of the bias found in the large-scale NLI dataset, we need to adjust the amount of debiasing according to each target dataset.",
"We consequently introduce a hyperparameter for PoE to modulate the strength of the bias-only model in ensembling.",
"We follow prior work (Belinkov et al., 2019a) and perform model selection on the dev set of each target dataset Data CE DFL PoE SICK 57.05 57.91 +0.9 57.28 +0.2 ADD1 87.34 88.89 +1.5 87.86 +0.5 DPR 49.50 50.68 +1.2 50.14 +0.6 SPR 59.85 61.41 +1.6 62.45 +2.6 FN+ 53.16 54.77 +1.6 53.51 +0.4 JOCI 50.06 51.13 +1.1 50.85 +0.8 MPE 69.50 70.2 +0.7 70.1 +0.6 SCITAIL 67.64 69.33 +1.7 71.40 +3.8 GLUE 54.08 54.80 +0.7 54.71 +0.6 QQP 67.78 69.28 +1.5 68.61 +0.8 MNLI 74.40 73.58 -0.8 73.61 -0.8 MNLI-M 73.98 74.0 0.0 73.49 -0.5 Table 6: Accuracy results of models with BERT transferring to new target datasets.",
"Results: Table 6 shows the results of the debiased models and baseline with BERT.",
"As shown in prior work (Belinkov et al., 2019a), the MNLI datasets have very similar biases to SNLI, which the models are trained on, so we do not expect any improvement in the relative performance of our models and the baseline for MNLI and MNLI-M.",
"On all the remaining datasets, our proposed models perform better than the baseline, showing a substantial improvement in generalization by using our debasing techniques.",
"We additionally compare with Belinkov et al. (2019a) in Appendix D and show that our methods substantially surpass their results.",
"4 Since the test sets are not available for MNLI, we tune on the matched dev set and evaluate on the mismatched dev set or vice versa.",
"For GLUE, we tune on MNLI mismatched dev set.",
"Analysis of Debiased Focal Loss: As expected, improving the out-of-domain performance could come at the expense of decreased in-domain performance since the removed biases are useful for performing the in-domain task.",
"This happens especially for DFL, in which there is a trade-off between in-domain and out-of-domain performance that depends on the parameter , and when the baseline model is not very powerful like InferSent.",
"To understand the impact of in DFL, we train an InferSent model using DFL for different values of on the SNLI dataset and evaluate its performance on SNLI test and SNLI hard sets.",
"As illustrated in Figure 2, increasing increases debiasing and thus hurts in-domain accuracy on SNLI, but out-of-domain accuracy on the SNLI hard set is increased within a wide range of values (see a similar plot for BERT in Appendix E).",
"Correlation Analysis: In contrast to Belinkov et al. (2019a), who encourage only the encoder to not capture the unwanted biases, our learning strategies influence the parameters of the full model to reduce the reliance on unwanted patterns more effectively.",
"To test this assumption, in Figure 3, we report the correlation between the element-wise loss of the debiased models and the loss of a bias-only model on the considered datasets.",
"The results show that compared to the baselines, our debiasing methods, DFL and PoE, reduce the correlation to the bias-only model, confirming that our models are effective at reducing biases.",
"Interestingly, on MNLI, PoE has less correlation with the bias-only model than DFL and also has better performance on the unbiased split of this dataset.",
"On the other hand, on the HANS dataset, DFL loss is less correlated with the bias-only model than PoE and also obtains higher performance on the HANS dataset.",
"We propose two novel techniques, product-of-experts and debiased focal loss, to reduce biases learned by neural models, which are applicable whenever one can specify the biases in the form of one or more bias-only models.",
"The bias-only models are designed to leverage biases and shortcuts in the datasets.",
"Our debiasing strategies then work by adjusting the cross-entropy loss based on the performance of these bias-only models, to focus learning on the hard examples and downweight the importance of the biased examples.",
"Additionally, we extend our methods to combat multiple bias patterns simultaneously.",
"Our proposed debiasing techniques are model agnostic, simple, and highly effective.",
"Extensive experiments show that our methods substantially improve the model robustness to domain-shift, including 9.8 points gain on FEVER symmetric test set, 7.4 on HANS dataset, and 4.8 points on SNLI hard set.",
"Furthermore, we show that our debiasing techniques result in better generalization to other NLI datasets.",
"Future work may include developing debiasing strategies that do not require prior knowledge of bias patterns and can automatically identify them.",
"We would like to thank Daniel Andor and Suraj Srinivas for their helpful comments.",
"We additionally would like to thank the authors of Schuster et al. (2019); Cadene et al. (2019); McCoy et al. (2019b); Belinkov et al. (2019a) for their support to reproduce their results.",
"This research was supported by the Swiss National Science Foundation under the project Learning Representations of Abstraction for Opinion Summarization (LAOS), grant number FNS-30216.",
"Y.B. was supported by the Harvard Mind, Brain, and Behavior Initiative."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"method",
"result",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"method",
"other",
"objective",
"abstain",
"other",
"objective",
"objective",
"objective",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"objective",
"objective",
"result",
"result",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"Multi-modal neural machine translation (NMT) aims to translate source sentences into a target language paired with images.",
"However, dominant multi-modal NMT models do not fully exploit fine-grained semantic correspondences between semantic units of different modalities, which have potential to refine multi-modal representation learning.",
"To deal with this issue, in this paper, we propose a novel graph-based multi-modal fusion encoder for NMT.",
"Specifically, we first represent the input sentence and image using a unified multi-modal graph, which captures various semantic relationships between multi-modal semantic units (words and visual objects).",
"We then stack multiple graph-based multi-modal fusion layers that iteratively perform semantic interactions to learn node representations.",
"Finally, these representations provide an attention-based context vector for the decoder.",
"We evaluate our proposed encoder on the Multi30K datasets.",
"Experimental results and in-depth analysis show the superiority of our multi-modal NMT model.",
"Multi-modal neural machine translation (NMT) (Huang et al., 2016; Calixto et al., 2017) has become an important research direction in machine translation, due to its research significance in multimodal deep learning and wide applications, such as translating multimedia news and web product information (Zhou et al., 2018).",
"It significantly extends the conventional text-based machine translation by taking images as additional inputs.",
"The assumption behind this is that the translation is expected to be more accurate compared to purely text-based (cid:3) This work is done when Yongjing Yin was interning at Pattern Recognition Center, WeChat AI, Tencent Inc, China.",
"translation, since the visual context helps to resolve ambiguous multi-sense words (Ive et al., 2019).",
"Apparently, how to fully exploit visual information is one of the core issues in multi-modal NMT, which directly impacts the model performance.",
"To this end, a lot of efforts have been made, roughly consisting of: (1) encoding each input image into a global feature vector, which can be used to initialize different components of multi-modal NMT models, or as additional source tokens (Huang et al., 2016; Calixto et al., 2017), or to learn the joint multi-modal representation (Zhou et al., 2018; Calixto et al., 2019); (2) extracting object-based image features to initialize the model, or supplement source sequences, or generate attention-based visual context (Huang et al., 2016; Ive et al., 2019); and (3) representing each image as spatial features, which can be exploited as extra context (Calixto et al., 2017; Delbrouck and Dupont, 2017a; Ive et al., 2019), or a supplement to source semantics (Delbrouck and Dupont, 2017b) via an attention mechanism.",
"Despite their success, the above studies do not fully exploit the fine-grained semantic correspondences between semantic units within an input sentence-image pair.",
"For example, as shown in Figure 1, the noun phrase a toy car semantically corresponds to the blue dashed region.",
"The neglect of this important clue may be due to two big challenges: 1) how to construct a unified representation to bridge the semantic gap between two different modalities, and 2) how to achieve semantic interactions based on the unified representation.",
"However, we believe that such semantic correspondences can be exploited to refine multimodal representation learning, since they enable the representations within one modality to incorporate cross-modal information as supplement during multi-modal semantic interactions (Lee et al., 2018; Tan and Bansal, 2019).",
"In this paper, we propose a novel graph-based multi-modal fusion encoder for NMT.",
"We first represent the input sentence and image with a unified multi-modal graph.",
"In this graph, each node indicates a semantic unit: textual word or visual object , and two types of edges are introduced to model semantic relationships between semantic units within the same modality ( intra-modal edges ) and semantic correspondences between semantic units of different modalities ( inter-modal edges ) respectively.",
"Based on the graph, we then stack multiple graph-based multi-modal fusion layers that iteratively perform semantic interactions among the nodes to conduct graph encoding.",
"Particularly, during this process, we distinguish the parameters of two modalities, and sequentially conduct intra-and inter-modal fusions to learn multi-modal node representations.",
"Finally, these representations can be exploited by the decoder via an attention mechanism.",
"Compared with previous models, ours is able to fully exploit semantic interactions among multimodal semantic units for NMT.",
"Overall, the major contributions of our work are listed as follows: (cid:15) We propose a unified graph to represent the input sentence and image, where various semantic relationships between multi-modal semantic units can be captured for NMT.",
"(cid:15)",
"We propose a graph-based multi-modal fusion encoder to conduct graph encoding based on the above graph.",
"To the best of our knowledge, our work is the first attempt to explore multimodal graph neural network (GNN) for NMT.",
"(cid:15)",
"We conduct extensive experiments on Multi30k datasets of two language pairs.",
"Experimental results and in-depth analysis indicate that our encoder is effective to fuse multi-modal information for NMT.",
"Particularly, our multi-modal NMT model significantly outperforms several competitive baselines.",
"(cid:15)",
"We release the code at https://github.com/ DeepLearnXMU/GMNMT.",
"Our multi-modal NMT model is based on attentional encoder-decoder framework with maximizing the log likelihood of training data as the objective function.",
"Essentially, our encoder can be regarded as a multimodal extension of GNN.",
"To construct our encoder, we first represent the input sentence-image pair as a unified multi-modal graph.",
"Then, based on this graph, we stack multiple multi-modal fusion layers to learn node representations, which provides the attention-based context vector to the decoder.",
"In this section, we take the sentence and the image shown in Figure 1 as an example, and describe how to use a multi-modal graph to represent them.",
"Formally, our graph is undirected and can be formalized as G =( V , E ), which is constructed as follows: In the node set V , each node represents either a textual word or a visual object.",
"Specifically, we adopt the following strategies to construct these two kinds of nodes: (1) We include all words as separate textual nodes in order to fully exploit textual 3027 Multi-modal Graph Embedding Layer Cross-modal Gating Visual FFN Textual FFN Cross-modal Gating Intra-modal Fusion Inter-modal Fusion Target Inputs Embedding Layer Textual Self-Attention Visual Self-Attention Softmax Layer Target Outputs Self-Attention Encoder-DecoderAttention FFN Encoder Decoder Figure 2: The architecture of our NMT model with the graph-based multi-modal fusion encoder.",
"information.",
"For example, in Figure 1, the multimodal graph contains totally eight textual nodes, each of which corresponds to a word in the input sentence; (2) We employ the Stanford parser to identify all noun phrases in the input sentence, and then apply a visual grounding toolkit (Yang et al., 2019) to detect bounding boxes (visual objects) for each noun phrase.",
"Subsequently, all detected visual objects are included as independent visual nodes .",
"In this way, we can effectively reduce the negative impact of abundant unrelated visual objects.",
"Let us revisit the example in Figure 1, where we can identify two noun phrases Two boys and a toy car from the input sentence, and then include three visual objects into the multi-modal graph.",
"To capture various semantic relationships between multi-modal semantic units for NMT, we consider two kinds of edges in the edge set E : (1) Any two nodes in the same modality are connected by an intra-modal edge ; and (2) Each textual node representing any noun phrase and the corresponding visual node are connected by an inter-modal edge .",
"Back to Figure 1, we can observe that all visual nodes are connected to each other, and all textual nodes are fully-connected.",
"However, only nodes v o 1 and v x 1 , v o 1 and v x 2 , v o 2 and v x 1 , v o 2 and v x 2 , v o 3 and v x 6 , v o 3 and v x 7 , v o 3 and v x 8 are connected by inter-modal edges.",
"Before inputting the multi-modal graph into the stacked fusion layers, we introduce an embedding",
"layer to initialize the node states.",
"Specifically, for each textual node v x i , we define its initial state H (0) x i as the sum of its word embedding and position encoding (Vaswani et al., 2017).",
"To obtain the initial state H (0) o j of the visual node v o j , we first extract visual features from the fully-connected layer that follows the ROI pooling layer in Faster-RCNN (Ren et al., 2015), and then employ a multilayer perceptron with ReLU activation function to project these features onto the same space as textual representations.",
"As shown in the left part of Figure 2, on the top of embedding layer, we stack L e graph-based multimodal fusion layers to encode the above-mentioned multi-modal graph.",
"At each fusion layer, we sequentially conduct intraand inter-modal fusions to update all node states.",
"In this way, the final node states encode both the context within the same modality and the cross-modal semantic information simultaneously.",
"Particularly, since visual nodes and textual nodes are two types of semantic units containing the information of different modalities, we apply similar operations but with different parameters to model their state update process, respectively.",
"Specifically, in the l -th fusion layer, both updates of textual node states H ( l ) x = f H ( l ) x i g and visual node states H ( l ) o = f H ( l ) o j g mainly involve the following steps: 3028 Step1: Intra-modal fusion .",
"At this step, we employ self-attention to generate the contextual representation of each node by collecting the message from its neighbors of the same modality.",
"Formally, the contextual representations C ( l ) x of all textual nodes are calculated as follows: 1 C ( l ) x = MultiHead ( H ( l (cid:0) 1) x ; H ( l (cid:0) 1) x ; H ( l (cid:0) 1) x ) ; (1) where MultiHead( Q , K , V ) is a multi-head self-attention function taking a query matrix Q , a key matrix K , and a value matrix V as inputs.",
"Similarly, we generate the contextual representations C ( l ) o of all visual nodes as C ( l ) o = MultiHead ( H ( l (cid:0) 1) o ; H ( l (cid:0) 1) o ; H ( l (cid:0) 1) o ) : (2) In particular, since the initial representations of visual objects are extracted from deep CNNs, we apply a simplified multi-head self-attention to preserve the initial representations of visual objects, where the learned linear projects of values and final outputs are removed.",
"Step2: Inter-modal fusion .",
"Inspired by studies in multi-modal feature fusion (Teney et al., 2018; Kim et al., 2018), we apply a cross-modal gating mechanism with an element-wise operation to gather the semantic information of the cross-modal neighbours of each node.",
"Concretely, we generate the representation M ( l ) x i of a text node v x i in the following way: M ( l ) x i = X j 2 A ( v xi ) (cid:11) i;j (cid:12) C ( l ) o j ; (3) (cid:11) i;j = Sigmoid ( W ( l ) 1 C ( l ) x i + W ( l ) 2 C ( l ) o j ) ; (4) where A ( v x i ) is the set of neighboring visual nodes of v x i , and W ( l ) 1 and W ( l ) 2 are parameter matrices.",
"Likewise, we produce the representation M ( l ) o j of a visual node v o j as follows: M ( l ) o j = X i 2 A ( v oj ) (cid:12) j;i (cid:12) C ( l ) x i ; (5) (cid:12) j;i = Sigmoid ( W ( l ) 3 C ( l ) o j + W ( l ) 4 C ( l ) x i ) ; (6) where A ( v o j ) is the set of adjacent textual nodes of v o j , and W ( l ) 3 and W ( l ) 4 are also parameter matrices.",
"The advantage is that the above fusion approach can better determine the degree of inter-modal fusion according to the contextual representations of 1 For simplicity, we omit the descriptions of layer normalization and residual connection.",
"each modality.",
"Finally, we adopt position-wise feed forward networks FFN ( (cid:3) ) to generate the textual node states H ( l ) x and visual node states H ( l ) o : H ( l ) x = FFN ( M ( l ) x ) ; (7) H ( l ) o = FFN ( M ( l ) o ) ; (8) where M ( l ) x = f M ( l ) x i g , M ( l ) o = f M ( l ) o j g denote the above updated representations of all textual nodes and visual nodes respectively.",
"Our decoder is similar to the conventional Transformer decoder.",
"Since visual information has been incorporated into all textual nodes via multiple graph-based multi-modal fusion layers, we allow the decoder to dynamically exploit the multi-modal context by only attending to textual node states.",
"As shown in the right part of Figure 2, we follow Vaswani et al. (2017) to stack L d identical layers to generate target-side hidden states, where each layer l is composed of three sub-layers.",
"Concretely, the first two sub-layers are a masked self-attention and an encoder-decoder attention to integrate target-and source-side contexts respectively: E ( l ) = MultiHead ( S ( l (cid:0) 1) ; S ( l (cid:0) 1) ; S ( l (cid:0) 1) ) ; (9) T ( l ) = MultiHead ( E ( l ) ; H ( L e ) x ; H ( L e ) x ) ; (10) where S ( l (cid:0) 1) denotes the target-side hidden states in the l 1 -th layer.",
"In particular, S (0) are the embed-dings of input target words.",
"Then, a position-wise fully-connected forward neural network is uesd to produce S ( l ) as follows: S ( l ) = FFN ( T ( l ) ) : (11) Finally, the probability distribution of generating the target sentence is defined by using a softmax layer, which takes the hidden states in the top layer as input: P ( Y j X; I ) = Y t Softmax ( WS ( L d ) t + b ) ; (12) where X is the input sentence, I is the input image, Y is the target sentence, and W and b are the parameters of the softmax layer.",
"We carry out experiments on multi-modal English ) German (En ) De) and English ) French (En ) Fr) translation tasks.",
"Datasets We use the Multi30K dataset (Elliott et al., 2016), where each image is paired with one English description and human translations into German and French.",
"Training, validation and test sets contain 29,000, 1,014 and 1,000 instances respectively.",
"In addition, we evaluate various models on the WMT17 test set and the ambiguous MSCOCO test set, which contain 1,000 and 461 instances respectively.",
"Here, we directly use the preprocessed sentences 2 and segment words into subwords via byte pair encoding (Sennrich et al., 2016) with 10,000 merge operations.",
"Visual Features We first apply the Stanford parser to identify noun phrases from each source sentence, and then employ the visual ground toolkit released by Yang et al. (2019) to detect associated visual objects of the identified noun phrases.",
"For each phrase, we keep the visual object with the highest prediction probability, so as to reduce negative effects of abundant visual objects.",
"In each sentence, the average numbers of objects and words are around 3.5 and 15.0 respectively.",
"3 Finally, we compute 2,048-dimensional features for these objects with the pre-trained ResNet-100 Faster-RCNN (Ren et al., 2015).",
"Settings We use Transformer (Vaswani et al., 2017) as our baseline.",
"Since the size of training corpus is small and the trained model tends to be over-fitting, we first perform a small grid search to obtain a set of hyper-parameters on the En ) De validation set.",
"Specifically, the word embedding dimension and hidden size are 128 and 256 respectively.",
"The decoder has L d =4 layers 4 and the number of attention heads is",
"4. The dropout is set to 0.5.",
"Each batch consists of approximately 2,000 source and target tokens.",
"We apply the Adam optimizer with a scheduled learning rate to optimize various models, and we use other same settings as (Vaswani et al., 2017).",
"Finally, we use the metrics BLEU (Pa-pineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) to evaluate the quality of translations.",
"Particularly, we run all models three times for each experiment and report the average results.",
"2 http://www.statmt.org/wmt18/multimodal-task.html 3 There is no parsing failure for this dataset.",
"If no noun is detected for a sentence, the object representations will be set to zero vectors and the model will degenerate to Transformer.",
"4 The encoder of the text-based Transformer also has 4 layers.",
"Baseline Models In addition to the text-based Transformer (Vaswani et al., 2017), we adapt several effective approaches to Transformer using our visual features, and compare our model with them 5 : (cid:15) ObjectAsToken(TF) (Huang et al., 2016).",
"It is a variant of the Transformer, where all visual objects are regarded as extra source tokens and placed at the front of the input sentence.",
"(cid:15)",
"Enc-att(TF) (Delbrouck and Dupont, 2017b).",
"An encoder-based image attention mechanism is incorporated into Transformer, which augments each source annotation with an attention-based visual feature vector.",
"(cid:15)",
"Doubly-att(TF) (Helcl et al., 2018).",
"It is a doubly attentive Transformer.",
"In each decoder layer, a cross-modal multi-head attention sublayer is inserted before the fully connected feed-forward layer to generate the visual context vector from visual features.",
"We also display the performance of several dominant multi-modal NMT models such as Doubly-att(RNN) (Calixto et al., 2017), Soft-att(RNN) (Delbrouck and Dupont, 2017a), Stochastic-att(RNN) (Delbrouck and Dupont, 2017a), Fusion-conv(RNN) (Caglayan et al., 2017), Trg-mul(RNN) (Caglayan et al., 2017), VMMT(RNN) (Calixto et al., 2019) and Deliberation Network(TF) (Ive et al., 2019) on the same datasets.",
"The number L e of multi-modal fusion layer is an important hyper-parameter that directly determines",
"5 We use suffixes ( RNN ) and ( TF ) to represent RNN-and Transformer-style NMT models, respectively.",
"the degree of fine-grained semantic fusion in our encoder.",
"Thus, we first inspect its impact on the EN ) DE validation set.",
"Figure 3 provides the experimental results using different L e and our model achieves the best performance when L e is",
"3. Hence, we use L e =3 in all subsequent experiments.",
"Table 1 shows the main results on the En ) De translation task.",
"Ours outperforms most of the existing models and all baselines, and is comparable to Fusion-conv(RNN) and Trg-mul(RNN) on METEOR.",
"The two results are from the state-of-the-art system on the WMT2017 test set, which is selected based on METEOR.",
"Comparing the baseline models, we draw the following interesting conclusions: First , our model outperforms ObjectAsTo-ken(TF), which concatenates regional visual features with text to form attendable sequences and employs self-attention mechanism to conduct inter-modal fusion.",
"The underlying reasons consist of two aspects: explicitly modeling semantic correspondences between semantic units of different modalities, and distinguishing model parameters for different modalities.",
"Second , our model also significantly outperforms Enc-att(TF).",
"Note that Enc-att(TF) can be considered as a single-layer semantic fusion encoder.",
"In addition to the advantage of explicitly modeling semantic correspondences, we conjecture that multi-layer multi-modal semantic interactions are also beneficial to NMT.",
"Third , compared with Doubly-att(TF) simply using an attention mechanism to exploit visual in-15 20 25 30 35 40 [5,10) [10,15) [15,20) [20,25) [25,...) BLEU Sentence Length TransformerObjectAsToken(TF)Enc-att(TF) Doubly-att(TF) Our model Figure 4: BLEU scores on different translation groups divided according to source sentence lengths.",
"formation, our model achieves a significant improvement, because of sufficient multi-modal fusion in our encoder.",
"Besides, we divide our test sets into different groups based on the lengths of source sentences and the numbers of noun phrases, and then compare the performance of different models in each group.",
"Figures 4 and 5 report the BLEU scores on these groups.",
"Overall, our model still consistently achieves the best performance in all groups.",
"Thus, we confirm again the effectiveness and gen-3031 Model En ) De Test2016 Test2017 MSCOCO BLEU METEOR BLEU METEOR BLEU METEOR Our model 39.8 57.6 32.2 51.9 28.7 47.6 w/o inter-modal fusion 38.7 56.7 30.7 50.6 27.0 46.7 visual grounding ) fully-connected 36.4 53.4 28.3 47.0 24.4 42.9 different parameters ) unified parameters 39.2 57.3 31.9 51.4 27.7 47.4 w/ attending to visual nodes 39.6 57.3 32.0 51.3 27.9 46.8 attending to textual nodes ) attending to visual nodes 30.9 48.6 22.3 41.5 20.4 38.7 Table 2: Ablation study of our model on the EN ) DE translation task.",
"erality of our proposed model.",
"Note that in the sentences with more phrases, which are usually long sentences, the improvements of our model over baselines are more significant.",
"We speculate that long sentences often contain more ambiguous words.",
"Thus compared with short sentences, long sentences may require visual information to be better exploited as supplementary information, which can be achieved by the multi-modal semantic interaction of our model.",
"We also show the training and decoding speed of our model and the baselines in Table",
"4. During training, our model can process approximately 1.1K tokens per second, which is comparable to other multi-modal baselines.",
"When it comes to decoding procedure, our model translates about 16.7 sentences per second and the speed drops slightly compared to Transformer.",
"Moreover, our model only introduces a small number of extra parameters and achieves better performance.",
"To investigate the effectiveness of different components, we further conduct experiments to compare our model with the following variants in Table 2:",
"(1) w/o inter-modal fusion .",
"In this variant, we apply two separate Transformer encoders to learn the semantic representations of words and visual objects, respectively, and then use the doubly-attentive decoder (Helcl et al., 2018) to incorporate textual and visual contexts into the decoder.",
"The result in line 3 indicates that removing the inter-modal fusion leads to a significant performance drop.",
"It suggests that semantic interactions among multi-modal semantic units are indeed useful for multi-modal representation learning.",
"(2) visual grounding ) fully-connected .",
"We make the words and visual objects fully-connected to establish the inter-modal correspondences.",
"The result in line 4 shows that this change causes a significant performance decline.",
"The underlying reason is the fully-connected semantic correspondences introduce much noise to our model.",
"(3) different parameters ) unified parameters .",
"When constructing this variant, we assign unified parameters to update node states in different modalities.",
"Apparently, the performance drop reported in line 5 also demonstrates the validity of our ap-3032 proach using different parameters.",
"(4) w/ attending to visual nodes .",
"Different from our model attending to only textual nodes, we allow our decoder of this variant to consider both two types of nodes using doubly-attentive decoder.",
"From line 6, we can observe that considering all nodes does not bring further improvement.",
"The result confirms our previous assumption that visual information has been fully incorporated into textual nodes in our encoder.",
"(5) attending to textual nodes ) attending to visual nodes .",
"However, when only considering visual nodes, the model performance drops drastically (line 7).",
"This is because the number of visual nodes is far fewer than that of textual nodes, which is unable to produce sufficient context for translation.",
"Figure 6 displays the 1-best translations of a sampled test sentence generated by different models.",
"The phrase a skateboarding ramp is not translated correctly by all baselines, while our model correctly translates it.",
"This reveals that our encoder is able to learn more accurate representations.",
"We also conduct experiments on the EN ) Fr dataset.",
"From Table 3, our model still achieves better performance compared to all baselines, which demonstrates again that our model is effective and general to different language pairs in multi-modal NMT.",
"Multi-modal NMT Huang et al. (2016) first incorporate global or regional visual features into attention-based NMT.",
"Calixto and Liu (2017) also study the effects of incorporating global visual features into different NMT components.",
"Elliott and K adar (2017) share an encoder between a translation model and an image prediction model to learn visually grounded representations.",
"Besides, the most common practice is to use attention mechanisms to extract visual contexts for multimodal NMT (Caglayan et al., 2016; Calixto et al., 2017; Delbrouck and Dupont, 2017a,b; Barrault et al., 2018).",
"Recently, Ive et al. (2019) propose a translate-and-refine approach and Calixto et al. (2019) employ a latent variable model to capture the multi-modal interactions for multi-modal NMT.",
"Apart from model design, Elliott (2018) reveal that visual information seems to be ignored by the multimodal NMT models.",
"Caglayan et al. (2019) conduct a systematic analysis and show that visual information can be better leveraged under limited textual context.",
"Different from the above-mentioned studies, we first represent the input sentence-image pair as a unified graph, where various semantic relationships between multi-modal semantic units can be effectively captured for multi-modal NMT.",
"Benefiting from the multi-modal graph, we further introduce an extended GNN to conduct graph encoding via multi-modal semantic interactions.",
"Note that if we directly adapt the approach proposed by Huang et al. (2016) into Transformer, the model (ObjectAsToken(TF)) also involves multimodal fusion.",
"However, ours is different from it in following aspects: (1) We first learn the contextual representation of each node within the same modality, so that it can better determine the degree of inter-modal fusion according to its own context.",
"(2) We assign different encoding parameters to different modalities, which has been shown effective in our experiments.",
"Additionally, the recent study LXMERT (Tan and Bansal, 2019) also models relationships between vision and language, which differs from ours in following aspects: (1) Tan and Bansal (2019) first apply two transformer encoders for two modalities, and then stack two cross-modality encoders to conduct multi-modal fusion.",
"In contrast, we sequentially conduct self-attention and cross-modal gating at each layer.",
"(2) Tan and Bansal (2019) leverage an attention mechanism to implicitly establish cross-modal relationships via large-scale pretraining, while we utilize visual grounding to capture explicit cross-modal correspondences.",
"(3) We focus on multi-modal NMT rather than vision-and-language reasoning in (Tan and Bansal, 2019).",
"Graph Neural Networks Recently, GNNs (Marco Gori and Scarselli, 2005) including gated graph neural network (Li et al., 2016), graph convolutional network (Duvenaud et al., 2015; Kipf and Welling, 2017) and graph attention network (Velickovic et al., 2018) have been shown effective in many tasks such as VQA (Teney et al., 2017; Norcliffe-Brown et al., 2018; Li et al., 2019), text generation (Gildea et al., 2018; Becky et al., 2018; Song et al., 2018b, 2019) and text representation (Zhang et al., 2018; Yin et al., 2019; Song et al., 3033 Source : A boy riding a skateboard on a skateboarding ramp .",
"In this work, we mainly focus on how to extend GNN to fuse multi-modal information in NMT.",
"Close to our work, Teney et al. (2017) introduce GNN for VQA.",
"The main difference between their work and ours is that they build an individual graph for each modality, while we use a unified multimodal graph.",
"In this paper, we have proposed a novel graph-based multi-modal fusion encoder, which exploits various semantic relationships between multimodal semantic units for NMT.",
"Experiment results and analysis on the Multi30K dataset demonstrate the effectiveness of our model.",
"In the future, we plan to incorporate attributes of visual objects and dependency trees to enrich the multi-modal graphs.",
"Besides, how to introduce scene graphs into multi-modal NMT is a worthy problem to explore.",
"Finally, we will apply our model into other multi-modal tasks such as multimodal sentiment analysis.",
"This work was supported by the Beijing Advanced Innovation Center for Language Resources (No. TYR17002), the National Natural Science Foundation of China (No. 61672440), and the Scientific Research Project of National Language Committee of China (No. YB135-49)."
] | [
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"abstain",
"method",
"method",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"method",
"result",
"result",
"objective",
"other",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"other",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"objective",
"abstain",
"other",
"objective",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"objective",
"objective",
"abstain",
"abstain",
"method",
"other"
] |
[
"This paper introduces the Multi-Genre Natural Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding.",
"At 433k examples, this resource is one of the largest corpora available for natural language inference (a.k.a. recognizing textual entailment ), improving upon available resources in both its coverage and difficulty.",
"MultiNLI accomplishes this by offering data from ten distinct genres of written and spoken English, making it possible to evaluate systems on nearly the full complexity of the language, while supplying an explicit setting for evaluating cross-genre domain adaptation.",
"In addition, an evaluation using existing machine learning models designed for the Stanford NLI corpus shows that it represents a substantially more difficult task than does that corpus, despite the two showing similar levels of inter-annotator agreement.",
"Many of the most actively studied problems in NLP, including question answering, translation, and dialog, depend in large part on natural language understanding (NLU) for success.",
"While there has been a great deal of work that uses representation learning techniques to pursue progress on these applied NLU problems directly, in order for a representation learning model to fully succeed at one of these problems, it must simultaneously succeed both at NLU, and at one or more additional hard machine learning problems like structured prediction or memory access.",
"This makes it difficult to accurately judge the degree to which current models extract reasonable representations of language meaning in these settings.",
"The task of natural language inference (NLI) is well positioned to serve as a benchmark task for research on NLU.",
"In this task, also known as recognizing textual entailment (Cooper et al., 1996; Fyodorov et al., 2000; Condoravdi et al., 2003; Bos and Markert, 2005; Dagan et al., 2006; MacCartney and Manning, 2009), a model is presented with a pair of sentenceslike one of those in Figure 1and asked to judge the relationship between their meanings by picking a label from a small set: typically ENTAILMENT , NEUTRAL , and CONTRADICTION .",
"Succeeding at NLI does not require a system to solve any difficult machine learning problems except, crucially, that of extracting effective and thorough representations for the meanings of sentences (i.e., their lexical and compositional semantics).",
"In particular, a model must handle phenomena like lexical entailment, quantification, coreference, tense, belief, modality, and lexical and syntactic ambiguity.",
"As the only large human-annotated corpus for NLI currently available, the Stanford NLI Corpus (SNLI; Bowman et al., 2015) has enabled a good deal of progress on NLU, serving as a major benchmark for machine learning work on sentence understanding and spurring work on core representation learning techniques for NLU, such as attention (Wang and Jiang, 2016; Parikh et al., 2016), memory (Munkhdalai and Yu, 2017), and the use of parse structure (Mou et al., 2016b; Bowman et al., 2016; Chen et al., 2017).",
"However, SNLI falls short of providing a sufficient testing ground for machine learning models in two ways.",
"First, the sentences in SNLI are derived from only a single text genreimage captionsand are thus limited to descriptions of concrete visual scenes, rendering the hypothesis sentences used to describe these scenes short and simple, and rendering many important phenomenalike temporal reasoning (e.g., yesterday ), belief (e.g., know ), and modality (e.g., should )rare enough to be irrelevant to task performance.",
"Second, because of these issues, SNLI is not sufficiently demanding to serve as an effective benchmark for NLU, with the best current model performance falling within a few percentage points of human accuracy and limited room left for fine-grained comparisons between strong models.",
"This paper introduces a new challenge dataset, the Multi-Genre NLI Corpus (MultiNLI), whose chief purpose is to remedy these limitations by making it possible to run large-scale NLI evaluations that capture more of the complexity of modern English.",
"While its size (433k pairs) and mode of collection are modeled closely on SNLI, unlike that corpus, MultiNLI represents both written and spoken speech in a wide range of styles, degrees of formality, and topics.",
"Our chief motivation in creating this corpus is to provide a benchmark for ambitious machine learning research on the core problems of NLU, but we are additionally interested in constructing a corpus that facilitates work on domain adaptation and cross-domain transfer learning.",
"These techniqueswhich use labeled training data for a source domain, and aim to train a model that performs well on test data from a target domain with a different distributionhave resulted in gains across many tasks (Daume III and Marcu, 2006; Ben-David et al., 2007), including sequence and part-of-speech tagging (Blitzer et al., 2006; Peng and Dredze, 2017).",
"Moreover, in application areas outside NLU, artificial neural network techniques have made it possible to train general-purpose feature extractors that, with no or minimal retraining, can extract useful features for a variety of styles of data (Krizhevsky et al., 2012; Zeiler and Fergus, 2014; Donahue et al., 2014).",
"However, attempts to bring this kind of general purpose representation learning to NLU have seen only very limited success (see, for example, Mou et al., 2016a).",
"Nearly all successful applications of representation learning to NLU have involved models that are trained on data closely resembling the target evaluation data in both task and style.",
"This fact limits the usefulness of these tools for problems involving styles of language not represented in large annotated training sets.",
"With this in mind, we construct MultiNLI so as to make it possible to explicitly evaluate models both on the quality of their sentence representations within the training domain and on their ability to derive reasonable representations in unfamiliar domains.",
"The corpus is derived from ten different genres of written and spoken English, which are collectively meant to approximate the full diversity of ways in which modern standard 1113 This task will involve reading a line from a non-fiction article and writing three sentences that relate to it.",
"The line will describe a situation or event.",
"Using only this description and what you know about the world: Write one sentence that is definitely correct about the situation or event in the line.",
"Write one sentence that might be correct about the situation or event in the line.",
"Write one sentence that is definitely incorrect about the situation or event in the line.",
"American English is used.",
"All of the genres appear in the test and development sets, but only five are included in the training set.",
"Models thus can be evaluated on both the matched test examples, which are derived from the same sources as those in the training set, and on the mismatched examples, which do not closely resemble any of those seen at training time.",
"The data collection methodology for MultiNLI is similar to that of SNLI: We create each sentence pair by selecting a premise sentence from a preexisting text source and asking a human annotator to compose a novel sentence to pair with it as a hypothesis.",
"This section discusses the sources of our premise sentences, our collection method for hypotheses, and our validation (relabeling) strategy.",
"Premise Text Sources The MultiNLI premise sentences are derived from ten sources of freely available text which are meant to be maximally diverse and roughly represent the full range of American English.",
"We selected nine sources from the second release of the Open American National Corpus (OANC; Fillmore et al., 1998; Macleod et al., 2000; Ide and Macleod, 2001; Ide and Su-derman, 2006, downloaded 12/2016 1 ), balancing the volume of source text roughly evenly across genres, and avoiding genres with content that would be too difficult for untrained annotators.",
"and Conversation Collection of two-sided, in-person conversations that took place in the early 2000s (FACE-TO-FACE ); reports, speeches, letters, and press releases from public domain government websites (GOVERNMENT ); letters from the Indiana Center for Intercultural Communication of Philanthropic Fundraising Discourse written in the late 1990searly 2000s (LETTERS ); the public report from the National Commission on Terrorist Attacks Upon the United States released on July 22, 2004 2 (9/11); five non-fiction works on the textile industry and child development published by the Oxford University Press (OUP); popular culture articles from the archives of Slate Magazine (SLATE ) written between 19962000; transcriptions from University of Pennsylvania's Linguistic Data Consortium Switchboard corpus of two-sided, telephone conversations that took place in 1990 or 1991 (TELEPHONE ); travel guides published by Berlitz Publishing in the early 2000s (TRAVEL ); and short posts about linguistics for non-specialists from the Verbatim archives written between 1990 and 1996 (VERBATIM ).",
"For our tenth genre, FICTION , we compile several freely available works of contemporary fiction written between 1912 and 2010, spanning various genres, including mystery ( The Mysterious Affair at Styles , 3 Christie, 1921; The Secret Adversary , 4 Christie, 1922; Murder in the Gun Room , 5 Piper, 1953), humor ( Password Incorrect , 6 Name, 2008), western ( Rebel Spurs , 7 Norton, 1962), science fiction ( Seven Swords , 8 Shea, 2008; Living History , 9 Essex, 2016; The Sky Is Falling , 10 Del Rey, 1973; Youth , 11 Asimov, May 1952), and adventure ( Captain Blood , 12 Sabatini, 1922).",
"We construct premise sentences from these ten source texts with minimal preprocessing; unique the sentences within genres, exclude very short 2 https://9-11commission.gov/ 3 gutenberg.org/files/863/863-0.txt 4 gutenberg.org/files/1155/1155-0.txt 5 gutenberg.org/files/17866/17866.txt 6 http://manybooks.net/pages/ namenother09password_incorrect/0.html 7 gutenberg.org/files/20840/20840-0.txt 8 http://mikeshea.net/stories/seven_ swords.html , shared with the author's permission.",
"9 manybooks.net/pages/ essexbother10living_history/0.html 10 gutenberg.org/cache/epub/18768/ pg18768.txt 11 gutenberg.org/cache/epub/31547/ pg31547.txt 12 gutenberg.org/files/1965/1965-0.txt 1114 sentences (under eight characters), and manually remove certain types of non-narrative writing, such as mathematical formulae, bibliographic references, and lists.",
"Although SNLI is collected in largely the same way as MultiNLI, and is also permissively licensed, we do not include SNLI in the MultiNLI corpus distribution.",
"SNLI can be appended and treated as an unusually large additional CAPTIONS genre, built on image captions from the Flickr30k corpus (Young et al., 2014).",
"Hypothesis Collection To collect a sentence pair, we present a crowdworker with a sentence from a source text and ask them to compose three novel sentences (the hypotheses): one which is necessarily true or appropriate whenever the premise is true (paired with the premise and labeled ENTAILMENT ), one which is necessarily false or inappropriate whenever the premise is true ( CONTRADICTION ), and one where neither condition applies ( NEUTRAL ).",
"This method of data collection ensures that the three classes will be represented equally in the raw corpus.",
"The prompts that surround each premise sentence during hypothesis collection are slightly tailored to fit the genre of that premise sentence.",
"We pilot these prompts prior to data collection to ensure that the instructions are clear and that they yield hypothesis sentences that fit the intended meanings of the three classes.",
"There are five unique prompts in total: one for written non-fiction genres (SLATE , OUP, GOVERNMENT , VERBATIM , TRAVEL ; Figure 1), one for spoken genres (TELEPHONE , FACE-TO-FACE ), one for each of the less formal written genres (FICTION , LETTERS ), and a specialized one for 9/11, tailored to fit its potentially emotional content.",
"Each prompt is accompanied by example premises and hypothesis that are specific to each genre.",
"Below the instructions, we present three text fieldsone for each labelfollowed by a field for reporting issues, and a link to the frequently asked questions (FAQ) page.",
"We provide one FAQ page per prompt.",
"FAQs are modeled on their SNLI counterparts (supplied by the authors of that work) and include additional curated examples, answers to genre-specific questions arising from our pilot phase, and information about logistical concerns like payment.",
"For both hypothesis collection and validation, we present prompts to annotators using Hybrid Statistic SNLI MultiNLI Pairs w/ unanimous gold label 58.3% 58.2% Individual label = gold label 89.0% 88.7% Individual label = author's label 85.8% 85.2% Gold label = author's label 91.2% 92.6% Gold label 6 = author's label 6.8% 5.6% No gold label (no 3 labels match) 2.0% 1.8% Table 2: Key validation statistics for SNLI (copied from Bowman et al., 2015) and MultiNLI.",
"( gethybrid.io ), a crowdsoucring platform similar to the Amazon Mechanical Turk platform used for SNLI.",
"We used this platform to hire an organized group of workers.",
"387 annotators contributed through this group, and at no point was any identifying information about them, including demographic information, available to the authors.",
"Validation We perform an additional round of annotation on test and development examples to ensure accurate labelling.",
"The validation phase follows the same procedure used for SICK (Marelli et al., 2014b) and SNLI: Workers are presented with pairs of sentences and asked to supply a single label ( ENTAILMENT , CONTRADICTION , NEUTRAL ) for the pair.",
"Each pair is relabeled by four workers, yielding a total of five labels per example.",
"Validation instructions are tailored by genre, based on the main data collection prompt (Figure 1); a single FAQ, modeled after the validation FAQ from SNLI, is provided for reference.",
"In order to encourage thoughtful labeling, we manually label one percent of the validation examples and offer a $1 bonus each time a worker selects a label that matches ours.",
"For each validated sentence pair, we assign a gold label representing a majority vote between the initial label assigned to the pair by the original annotator, and the four additional labels assigned by validation annotators.",
"A small number of examples did not receive a three-vote consensus on any one label.",
"These examples are included in the distributed corpus, but are marked with ' in the gold label field, and should not be used in standard evaluations.",
"Table 2 shows summary statistics capturing the results of validation, alongside corresponding figures for SNLI.",
"These statistics indicate that the labels included in MultiNLI are about as reliable as those included in SNLI, despite MultiNLI's more diverse text contents.",
"Table 1 shows randomly chosen development set examples from the collected corpus.",
"Hypotheses tend to be fluent and correctly spelled, though not all are complete sentences.",
"Punctuation is often omitted.",
"Hypotheses can rely heavily on knowledge about the world, and often don't correspond closely with their premises in syntactic structure.",
"Unlabeled test data is available on Kaggle for both matched and mismatched sets as competitions that will be open indefinitely; Evaluations on a subset of the test set have previously been conducted with different leaderboards through the RepEval 2017 Workshop (Nangia et al., 2017).",
"The corpus is available in two formatstab separated text and JSON Lines ( jsonl ), following SNLI.",
"For each example, premise and hypothesis strings, unique identifiers for the pair and prompt, and the following additional fields are specified: gold label : label used for classification.",
"In examples rejected during the validation process, the value of this field will be '.",
"sentence { 1,2 } parse : Each sentence as parsed by the Stanford PCFG Parser 3.5.2 (Klein and Manning, 2003).",
"sentence { 1,2 } binary parse : parses in unlabeled binary-branching format.",
"label[1] : The label assigned during the creation of the sentence pair.",
"In rare cases this may be different from gold label , if a consensus of annotators chose a different label during the validation phase.",
"label[2...5] : The four labels assigned during validation by individual annotators to each development and test example.",
"These fields will be empty for training examples.",
"The current version of the corpus is freely available at nyu.edu/projects/bowman/multinli/ for typical machine learning uses, and may be modified and redistributed.",
"The majority of the corpus is released under the OANC's license, which allows all content to be freely used, modified, and shared under permissive terms.",
"The data in the FICTION section falls under several permissive licenses; Seven Swords is available under a Creative Commons Share-Alike 3.0 Unported License, and with the explicit permission of the author, Living History and Password Incorrect are available under Creative Commons Attribution 3.0 Unported Licenses; the remaining works of fiction are in the public domain in the United States (but may be licensed differently elsewhere).",
"Partition The distributed corpus comes with an explicit train/test/development split.",
"The test and development sets contain 2,000 randomly selected examples each from each of the genres, resulting in a total of 20,000 examples per set.",
"No premise sentence occurs in more than one set.",
"Statistics Table 3 shows some additional statistics.",
"Premise sentences in MultiNLI tend to be longer (max 401 words, mean 22.3 words) than their hypotheses (max 70 words, mean 11.4 words), and much longer, on average, than premises in SNLI (mean 14.1 words); premises in MultiNLI also tend to be parsed as complete sentences at a much higher rate on average (91%) than their SNLI counterparts (74%).",
"We observe that the two spoken genres differ in thiswith FACE-TO-FACE showing more complete sentences (91%) than TELEPHONE (71%)and speculate that the lack of visual feedback in a telephone setting may result in a high incidence of interrupted or otherwise incomplete sentences.",
"Hypothesis sentences in MultiNLI generally cannot be derived from their premise sentences using only trivial editing strategies.",
"While 2 .",
"5 % of the hypotheses in SNLI differ from their premises by deletion, only 0 .",
"9 % of those in MultiNLI (170 examples total) are constructed in this way.",
"Similarly, in SNLI, 1 .",
"6 % of hypotheses differ from their premises by addition, substitution, or shuf-fling a single word, while in MultiNLI this only happens in 1 .",
"2 % of examples.",
"The percentage of hypothesis-premise pairs with high token overlap ( > 37%) was comparable between MultiNLI (30% of pairs) and SNLI (29%).",
"These statistics suggest that MultiNLI's annotations are comparable in quality to those of SNLI.",
"To test the difficulty of the corpus, we experiment with three neural network models.",
"The first is a simple continuous bag of words (CBOW) model in which each sentence is represented as the sum of the embedding representations of its words.",
"The second computes representations by averaging the states of a bidirectional LSTM RNN (BiL-STM; Hochreiter and Schmidhuber, 1997) over words.",
"For the third, we implement and evaluate Chen et",
"al.'s Enhanced Sequential Inference Model (ESIM), which is roughly tied for the state of the art on SNLI at the time of writing.",
"We use the base ESIM without ensembling with a TreeL-STM (as in the HIM' runs in that work).",
"The first two models produce separate vector representations for each sentence and compute label predictions for pairs of representations.",
"To do this, they concatenate the representations for premise and hypothesis, their difference, and their element-wise product, following Mou et al. (2016b), and pass the result to a single tanh layer followed by a three-way softmax classifier.",
"All models are initialized with 300D reference GloVe vectors (840B token version; Pennington et al., 2014).",
"Out-of-vocabulary (OOV) words are initialized randomly and word embeddings are fine-tuned during training.",
"The models use 300D hidden states, as in most prior work on SNLI.",
"We use Dropout (Srivastava et al., 2014) for regularization.",
"For ESIM, we use a dropout rate of 0.5, following the paper.",
"For CBOW and BiLSTM models, we tune Dropout on the SNLI development set and find that a drop rate of 0.1 works well.",
"We use the Adam (Kingma and Ba, 2015) optimizer with default parameters.",
"Code is available at github.com/nyu-mll/multiNLI/ .",
"We train models on SNLI, MultiNLI, and a mixture; Table 4 shows the results.",
"In the mixed setting, we use the full MultiNLI training set and randomly select 15% of the SNLI training set at each epoch, ensuring that each available genre is seen during training with roughly equal frequency.",
"We also train a separate CBOW model on each individual genre to establish the degree to which simple models already allow for effective transfer across genres, using a dropout rate of 0.2.",
"When training on SNLI, a single random sample of 15% of the original training set is used.",
"For each genre represented in the training set, the model that performs best on it was trained on that genre; a model trained only on SNLI performs worse on every genre than comparable models trained on any genre from MultiNLI.",
"Models trained on a single genre from MultiNLI perform well on similar genres; for example, the model trained on TELEPHONE attains the best accuracy (63%) on FACE-TO-FACE , which was nearly one point better than it received on itself.",
"SLATE seems to be a difficult and relatively unusual genre and performance on it is relatively poor in this setting; when averaging over runs trained on SNLI and all genres in the matched section of the training set, average performance on SLATE was only 57.5%.",
"Sentences in SLATE cover a wide range of topics and phenomena, making it hard to do well on, but also forcing models trained on it be broadly capable; the model trained on SLATE achieves the highest accuracy of any model on 9/11 (55.6%) and VERBATIM (57.2%), and relatively high accuracy on TRAVEL (57.4%) and GOVERNMENT (58.3%).",
"We also observe that our models perform similarly on both the matched and mismatched test sets of MultiNLI.",
"We expect genre mismatch issues to become more conspicuous as models are developed that can better fit MultiNLI's training genres.",
"To evaluate the contribution of sentence length to corpus difficulty, we binned premises and hypotheses by length in 25-word increments for premises and 10-word increments for hypotheses.",
"Using the ESIM model, our strong baseline, we find a small effect (stronger for matched than mismatched) of premise length on model accuracy: accuracy decreases slightly as premise sentences increase in length.",
"We find no effect of hypothesis length on accuracy.",
"In data collection for NLI, different annotator decisions about the coreference between entities and events across the two sentences in a pair can lead to very different assignments of pairs to labels (de Marneffe et al., 2008; Marelli et al., 2014a; Bowman et al., 2015).",
"Drawing an example from Bowman et al., the pair a boat sank in the Pacific Ocean and a boat sank in the Atlantic Ocean can be labeled either CONTRADICTION or NEUTRAL depending on (among other things) whether the two mentions of boats are assumed to refer to the same entity in the world.",
"This uncertainty can present a serious problem for inter-annotator agreement, since it is not clear that it is possible to define an explicit set of rules around coreference that would be easily intelligible to an untrained annotator (or any non-expert).",
"Bowman et al. attempt to avoid this problem by using an annotation prompt that is highly dependent on the concreteness of image descriptions; but, as we engage with the much more abstract writing that is found in, for example, government documents, there is no reason to assume a priori that any similar prompt and annotation strategy can work.",
"We are surprised to find that this is not a major issue.",
"Through a relatively straightforward trial-and-error piloting phase, followed by discussion with our annotators, we manage to design prompts for abstract genres that yield high inter-annotator agreement scores nearly identical to those of SNLI (see Table 2).",
"These high scores suggest that our annotators agreed on a single task definition, and were able to apply it consistently across genres.",
"As expected, both the increase in the diversity of linguistic phenomena in MultiNLI and its longer average sentence length conspire to make MultiNLI dramatically more difficult than SNLI.",
"Our three baseline models perform better on SNLI than MultiNLI by about 15% when trained on the respective datasets.",
"All three models achieve accuracy above 80% on the SNLI test set when trained only on SNLI.",
"However, when trained on MultiNLI, only ESIM surpasses 70% accuracy on MultiNLI's test sets.",
"When we train models on MultiNLI and downsampled SNLI, we see an expected significant improvement on SNLI, but no significant change in performance on the MultiNLI test sets, suggesting including SNLI in training doesn't drive substantial improvement.",
"These results attest to MultiNLI's difficulty, and with its relatively high inter-annotator agreement, suggest that it presents a problem with substantial headroom for future work.",
"To better understand the types of language understanding skills that MultiNLI tests, we analyze the collected corpus using a set of annotation tags chosen to reflect linguistic phenomena which are known to be potentially difficult.",
"We use two methods to assign tags to sentences.",
"First, we use the Penn Treebank (PTB; Marcus et al., 1993) part-of-speech tag set (via the included Stanford Parser parses) to automatically isolate sentences 1118 Dev.",
"containing a range of easily-identified phenomena like comparatives.",
"Second, we isolate sentences that contain hand-chosen key words indicative of additional interesting phenomena.",
"The hand-chosen tag set covers the following phenomena: QUANTIFIERS contains single words with quantificational force (see, for example, Heim and Kratzer, 1998; Szabolcsi, 2010, e.g., many, all, few, some ); BELIEFVERBS contains sentence-embedding verbs denoting mental states (e.g., know, believe, think ), including irregular past tense forms; TIME TERMS contains single words with abstract temporal interpretation, (e.g., then, today ) and month names and days of the week; DISCOURSE MARKERS contains words that facilitate discourse coherence (e.g., yet, however, but, thus, despite ); PRESUPPOSITIONTRIGGERS contains words with lexical presuppositions (Stal-naker, 1974; Schlenker, 2016, e.g., again, too, anymore 13 ); CONDITIONALS contains the word if .",
"Table 5 presents the frequency of the tags in SNLI and MultiNLI, and model accuracy on MultiNLI (trained only on MultiNLI).",
"The incidence of tags varies by genre; the percentage of sentence pairs containing a particular annotation tag differs by a maximum over 30% across genres.",
"Sentence pairs containing pronouns are predictably common for all genres, with 93% of Government and Face-to-face pairs including at 13 Because their high frequency in the corpus, extremely common triggers like the were excluded from this tag.",
"least one.",
"The Telephone genre has the highest percentage of sentence pairs containing one occurrence of negation, WH-words, belief -verbs and time terms, Verbatim has the highest percentage of pairs containing quantifiers and conversational pivots, and Letters has the highest percentage of pairs that contain one or more modals.",
"Pairs containing comparatives and/or superlatives, which is the tag that our baseline models perform worst on, are most common in the Oxford University Press genre.",
"Based on this, we conclude that the genres are sufficiently different, because they are not uniform with respect to the percentages of sentence pairs that contain each of the annotation tags.",
"The distributions of labels within each tagged subset of the corpus roughly mirrors the balanced overall distribution.",
"The most frequent class overall (in this case, ENTAILMENT ) occurs with a frequency of roughly one third (see Table",
"4) in most.",
"Only two annotation tags differ from the baseline percentage of the most frequent class in the corpus by at least 5%: sentences containing negation, and sentences exceeding 20 words.",
"Sentences that contain negation are slightly more likely than average to be labeled CONTRADICTION , reflecting a similar finding in SNLI, while long sentences are slightly more likely to be labeled ENTAILMENT .",
"None of the baseline models perform substantially better on any tagged set than they do on the corpus overall, with average model accuracies on sentences containing specific tags falling within 1119 about 3 points of overall averages.",
"Using baseline model test accuracy overall as a metric (see Table 4), our baseline models had the most trouble on sentences containing comparatives or superlatives (losing 3-4 points each).",
"Despite the fact that 17% of sentence pairs in the corpus contained at least one instance of comparative or superlative, our baseline models don't utilize the information present in these sentences to predict the correct label for the pair, although presence of a comparative or superlative is slightly more predictive of a NEUTRAL label.",
"Moreover, the baseline models perform below average on discourse markers, such as despite and however , losing roughly 2 to 3 points each.",
"Unsurprisingly, the attention-based ESIM model performs better than the other two on sentences with greater than 20 words.",
"Additionally, our baseline models do show slight improvements in accuracy on negation, suggesting that they may be tracking it as a predictor of CONTRADICTION .",
"Natural language inference makes it easy to judge the degree to which neural network models for sentence understanding capture the full meanings for natural language sentences.",
"Existing NLI datasets like SNLI have facilitated substantial advances in modeling, but have limited headroom and coverage of the full diversity of meanings expressed in English.",
"This paper presents a new dataset that offers dramatically greater linguistic difficulty and diversity, and also serves as a benchmark for cross-genre domain adaptation.",
"Our new corpus, MultiNLI, improves upon SNLI in its empirical coveragebecause it includes a representative sample of text and speech from ten different genres, as opposed to just simple image captionsand its difficulty, containing a much higher percentage of sentences tagged with one or more elements from our tag set of thirteen difficult linguistic phenomena.",
"This greater diversity is reflected in the dramatically lower baseline model performance on MultiNLI than on SNLI (see Table",
"5) and comparable inter-annotator agreement, suggesting that MultiNLI has a lot of headroom remaining for future work.",
"The MultiNLI corpus was first released in draft form in the first half of 2017, and in the time since its initial release, work by others (Conneau et al., 2017) has shown that NLI can also be an effective source task for pre-training and transfer learning in the context of sentence-to-vector models, with models trained on SNLI and MultiNLI substantially outperforming all prior models on a suite of established transfer learning benchmarks.",
"We hope that this corpus will continue to serve for many years as a resource for the development and evaluation of methods for sentence understanding.",
"This work was made possible by a Google Faculty Research Award.",
"SB also gratefully acknowledges support from Tencent Holdings and Samsung Research.",
"We also thank George Dahl, the organizers of the RepEval 2016 and RepEval 2017 workshops, Andrew Drozdov, Angeliki Lazaridou, and our other NYU colleagues for help and advice."
] | [
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"other",
"other",
"other"
] |
[
"Intelligent features in email service applications aim to increase productivity by helping people organize their folders, compose their emails and respond to pending tasks.",
"In this work, we explore a new application, Smart-To-Do, that helps users with task management over emails.",
"We introduce a new task and dataset for automatically generating To-Do items from emails where the sender has promised to perform an action.",
"We design a two-stage process leveraging recent advances in neural text generation and sequence-to-sequence learning, obtaining BLEU and ROUGE scores of 0 .",
"23 and 0 .",
"63 for this task.",
"To the best of our knowledge, this is the first work to address the problem of composing To-Do items from emails.",
"Email is one of the most used forms of communication especially in enterprise and work settings (Radicati and Levenstein, 2015).",
"With the growing number of users in email platforms, service providers are constantly seeking to improve user experience for a myriad of applications such as online retail, instant messaging and event management (Feddern-Bekcan, 2008).",
"Smart Reply (Kan-nan et al., 2016) and Smart Compose (Chen et al., 2019) are two recent features that provide contextual assistance to users aiming to reduce typing efforts.",
"Another line of work in this direction is for automated task management and scheduling.",
"For example.",
"the recent Nudge feature 1 in Gmail and Insights in Outlook 2 are designed to remind users to follow-up on an email or pay attention to pending tasks.",
"Smart To-Do takes a step further in task assistance and seeks to boost user productivity by automatically generating To-Do items from their email Work done as an intern at Microsoft Research.",
"1 Gmail Nudge 2 Outlook Insights From: Alice To: john@contoso.com Subject: Sales Report Hi John, From: John To: alice@contoso.com Subject: RE: Sales Report I am doing well.",
"context.",
"Text generation from emails, like creating To-Do items, is replete with complexities due to the diversity of conversations in email threads, heterogeneous structure of emails and various meta-deta involved.",
"As opposed to prior works in text generation like news headlines, email subject lines and email conversation summarization, To-Do items are action-focused , requiring the identification of a specific task to be performed.",
"In this work, we introduce the task of automatically generating To-Do items from email context and meta-data to assist users with following up on their promised actions (also referred to as commitments in this work).",
"Refer to Figure 1 for an illustration.",
"Given an email, its temporal context (i.e. thread), and associated meta-data like the name of the sender and recipient, we want to generate a short and succinct To-Do item for the task mentioned in the email.",
"This requires identifying the task sentence (also referred to as a query ), relevant sentences in the email that provide contextual information about the query along with the entities (e.g., people) associated with the task.",
"We utilize existing work to identify the task sentence via a commitment classifier that detects action intents in the emails.",
"Thereafter C Commitment Classifier D Does the email contain commitment ?",
"we use an unsupervised technique to extract key sentences in the email that are helpful in providing contextual information about the query.",
"These pieces of information are further combined to generate the To-Do item using a sequence-to-sequence architecture with deep neural networks.",
"Figure 2 shows a schematic diagram of the process.",
"Since there is no existing work or dataset on this problem, our first step is to collect annotated data for this task.",
"Overall, our contributions can be summarized as follows: We create a new dataset for To-Do item generation from emails containing action items based on the publicly available email corpus Avocado (Oard et al., 2015).",
"3 We develop a two-stage algorithm, based on unsupervised task-focused content selection and subsequent text generation combining contextual information and email meta-data.",
"We conduct experiments on this new dataset and show that our model performs at par with human judgments on multiple performance metrics.",
"Summarization of email threads has been the focus of multiple research works in the past (Rambow et al., 2004; Carenini et al., 2007; Dredze et al., 2008).",
"There has also been considerable research on identifying speech acts or tasks in emails (Car-valho and Cohen, 2005; Lampert et al., 2010; Scerri et al., 2010) and how it can be robustly adapted across diverse email corpora (Azarbonyad et al., 2019).",
"Recently, novel neural architectures have been explored for modeling action items in emails 3 We will release the code and data (in accordance with LDC and Avocado policy) at https://aka.ms/SmartToDo .",
"Email examples in this paper are similar to those in our dataset but are not reproducing text from the Avocado dataset.",
"(Lin et al., 2018) and identifying intents in email conversations (Wang et al., 2019).",
"However, there has been less focus on task-specific email summarization (Corston-Oliver et al., 2004).",
"The closest to our work is that of email subject line generation (Zhang and Tetreault, 2019).",
"But it focuses on a common email theme and uses a supervised approach for sentence selection, whereas our method relies on identifying the task-related context.",
"We build upon the Avocado dataset (Oard et al., 2015) 4 containing an anonymized version of the Outlook mailbox for 279 employees with various meta-data and 938 , 035 emails overall.",
"Emails contain various user intents including planning and scheduling meetings, requests for information, exchange of information, casual conversations, etc. (Wang et al., 2019).",
"For the purpose of this work, we first need to extract emails containing at least one sentence where the sender has promised to perform an action.",
"It could be performing a task, providing some information, keeping others informed about a topic and so on.",
"We use the term commitment to refer to such intent in an email and the term commitment sentence to refer to each sentence with that intent.",
"Commitment classifier: A commitment classifier C : S (cid:55) [0 , 1] takes as input an email sentence S and returns a probability of whether the sentence is a commitment or not.",
"The classifier is built using labels from an annotation task with 3 judges.",
"The Cohen's kappa value is 0 .",
"694 , depicting substantial agreement.",
"The final label is obtained from the majority vote, generating a total of 9076 instances (with 2586 positive/commitment labels and 6490 negative labels).",
"The classifier is an RNN-based model with word embeddings and self-attention geared for binary classification with the input being the entire email context (Wang et al., 2019).",
"The classifier has a precision of 86% and recall of 84% on sentences in the Avocado corpus.",
"Candidate emails: We extracted 500 k raw sentences from Avocado emails and passed them",
"4 Avocado is a more appropriate test bed than the Enron collection (Klimt and Yang, 2004) since it contains additional meta-data and it entered the public domain via the cooperation and consent of the legal owner of the corpus.",
"through the commitment classifier.",
"We threshold the commitment classifier confidence to 0 .",
"9 and obtained 29 k potential candidates for To-Do items.",
"Of these, a random subset of 12 k instances were selected for annotation.",
"Annotation guideline: For each candidate email e c and the previous email in the thread e p (if present), we obtained meta-data like From ', Sent-To ', Subject ' and Body '.",
"The commitment sentence in e c was highlighted and annotators were asked to write a To-Do item using all of the information in e c and e p .",
"We prepared a comprehensive guideline to help human annotators write To-Do Items containing the definition and structure of To-Do Items and commitment sentences, along with illustrative examples.",
"Annotators were instructed to use words and phrases from the email context as closely as possible and introduce new vocabulary only when required.",
"Each instance was annotated by 2 judges.",
"Analysis of human annotations: We obtained a total of 9349 email instances with To-Do items, each of which was annotated by two annotators.",
"To-Do items have a median token length of 9 and a mean length of 9 .",
"71 .",
"For 60 .",
"42% of the candidate emails, both annotators agreed that the subject line was helpful in writing the To-Do Item.",
"To further analyze the annotation quality, we randomly sampled 100 annotated To-Do items and asked a judge to rate them on",
"(a) fluency (grammat-ical and spelling correctness), and",
"(b) completeness (capturing all the action items in the email) on a 4 point scale ( 1 : Poor, 2 : Fair, 3 : Good, 4 : Excellent).",
"Overall, we obtained a mean rating of 3.1 and 2.9 respectively for fluency and completeness.",
"Table 1 shows a snapshot of the analysis.",
"In this section, we describe our two-stage approach to generate To-Do items.",
"In the first stage, we select sentences that are helpful in writing the To-Do item.",
"Emails contain generic sentences such as salutations, thanks and casual conversations not relevant to the commitment task.",
"The objective of the first stage is to select sentences containing informative concepts necessary to write the To-Do.",
"In the absence of reliable labels to extract helpful sentences in a supervised fashion, we resort to an unsupervised matching-based approach.",
"Let the commitment sentence in the email be denoted as H , and the rest of the sentences from the current email e c and previous email e p be denoted as { s 1 , s 2 , . . . s d } .",
"The unsupervised approach seeks to obtain a relevance score ( s i ) for each sentence.",
"The top K sentences with the highest scores will be selected as the extractive summary for the commitment sentence (also referred to as the query).",
"Enriched query context: We first extract top maximum frequency tokens from all the sentences in the given email, the commitment and the subject (i.e., { s 1 , s 2 , . . . s d } H Subject ).",
"Tokens are lemmatized and stop-words are removed.",
"We set = 10 in our experiments.",
"An enriched context for the query E is formed by concatenating the commitment sentence H , subject and top tokens.",
"Relevance score computation: Task-specific relevance score for a sentence s i is obtained by inner product in the embedding space with the enriched context.",
"Let h ( ) be the function denoting the embedding of a sentence with ( s i ) = h ( s i ) T h ( E ) .",
"Our objective is to find helpful sentences for the commitment given by semantic similarity between concepts in the enriched context and a target sentence.",
"In case of a short or less informative query, the subject and topic of the email provide useful information via the enriched context.",
"We experiment with three different embedding functions.",
"frequency vector is used to represent the sentence.",
"(2) FastText Word Embeddings We trained FastText embeddings (Bojanowski et al., 2017) of dimension 300 on all sentences in the Avocado corpus.",
"The embedding function h ( s j ) is given by taking the max (or mean) across the word-embedding dimension of all tokens in the sentence s j .",
"(3) Contextualized Word Embeddings We utilize recent advances in contextualized representations from pre-trained language models like BERT (Devlin et al., 2019).",
"We use the second last layer of pre-trained BERT for sentence embeddings.",
"We also fine-tuned BERT on the labeled dataset for commitment classifier.",
"The dataset is first made balanced ( 2586 positive and 2586 negative instances).",
"Uncased BERT is trained for 5 epochs for commitment classification, with the input being word-piece tokenized email sentences.",
"This model is denoted as BERT (fine-tuned) in Table 2. Evaluation of unsupervised approaches: Retrieving at-least one helpful sentence is crucial to obtain contextual information for the To-Do item.",
"Therefore, we evaluate our approaches based on the proportion of emails where at-least one helpful sentence is present in the top K retrieved sentences.",
"We manually annotated 100 email instances and labeled every sentence as helpful or not based on",
"(a) whether the sentence contains concepts appearing in the target To-Do item, and",
"(b) whether the sentence helps to understand the task context.",
"Inter-annotator agreement between 2 judgments for this task has a Cohen Kappa score of 0 .",
"69 .",
"This annotation task also demonstrates the importance of the previous email in a thread.",
"Out of 100 annotated instances, 44 have a replied-to email of which 31 contains a helpful sentence in the replied-to email body ( 70 . 4% ).",
"Table 2 shows the performance of the various unsupervised extractive algorithms.",
"FastText with max-pooling of embeddings performed the best and used in the subsequent generation stage.",
"The generation phase of our approach can be formulated as sequence-to-sequence (Seq2Seq) learning with attention (Sutskever et al., 2014; Bahdanau et al., 2014).",
"It consists of two neural networks, an encoder and a decoder.",
"The input to the encoder consists of concatenated tokens from different meta-data fields of the email like sent-to', subject', commitment sentence H and extracted sentences I separated by special markers.",
"For instance, the input to the encoder for the example in Figure 1 is given as: < to > alice < sub > hello ?",
"generation model as follows: Vanilla Seq2Seq : Input tokens { x 1 , x 2 , . . . x T } are passed through a word-embedding layer and a single layer LSTM to obtain encoded representations h t = f ( x t , h t 1 ) t for the input.",
"The decoder is another LSTM that makes use of the encoder state h t and prior decoder state s t 1 to generate the target words at every timestep t .",
"We consider Seq2Seq with attention mechanism where the decoder LSTM uses attention distribution a t over timesteps t to focus on important hidden states to generate the context vector h t .",
"This is the first baseline in our work.",
"e t,t (cid:48) = v T tanh ( W h h t + W s s t (cid:48) + b ) a t,t (cid:48) = softmax ( e t,t (cid:48) ) h t = (cid:80) t (cid:48) a t,t (cid:48) h t (cid:48) (1) Seq2Seq with copy mechanism : As the second model, we consider Seq2Seq with copy mechanism (See et al., 2017) to copy tokens from important email fields.",
"Copying is pivotal for To-Do item generation since every task involves named From: John Carter To: Helena Watson; Daniel Craig; Rupert Grint Subject: Thanks Thank you for helping me prepare the paper draft for ACL conference.",
"entities in terms of the persons involved, specific times and dates when the task has to be accomplished and other task-specific details present in the email context.",
"To understand the copy mechanism, consider the decoder input at each decoding step as y t and the context vector as h t .",
"The decoder at each timestep t has the choice of generating the output word from the vocabulary V with probability p gen = ( h t , s t , y t ) , or with probability 1 p gen it can copy the word from the input context.",
"To allow that, the vocabulary is extended as V (cid:48) = V { x 1 , x 2 , . . . x T } .",
"The model is trained end-to-end to maximize the log-likelihood of target words (To-Do items) given the email context.",
"Seq2Seq BiFocal : As a third model, we experimented with query-focused attention having two encoders one containing only tokens of the query and the other containing rest of the input context.",
"We use a bifocal copy mechanism that can copy tokens from either of the encoders.",
"We refer the reader to the Appendix for more details about training and hyper-parameters used in our models.",
"9349 email instances with To-Do items, we used 7349 for training and 1000 each for validation and testing.",
"For each instance, we chose the annotation with fewer tokens as ground-truth reference.",
"The median token length of the encoder input is 43 (including the helpful sentence).",
"Table 4 shows the performance comparison of various models.",
"We report BLEU-4 (Papineni et al., 2002) and the F1-scores for Rouge-1, Rouge-2 and Rouge-L (Lin, 2004).",
"We also report the human performance for this task in terms of the above metrics computed between annotations from the two judges.",
"A trivial baseline which concatenates tokens from the sent-to' and subject' fields and the commitment sentence is included for comparison.",
"The best performance is obtained with Seq2Seq using copying mechanism.",
"We observe our model to perform at par with human performance for writing To-Do items.",
"Table 3 shows some examples of To-Do item generation from our best model.",
"In this work, we study the problem of automatic To-Do item generation from email context and meta-data to provide smart contextual assistance in email applications.",
"To this end, we introduce a new task and dataset for action-focused text intelligence.",
"We design a two stage framework with deep neural networks for task-focused text generation.",
"There are several directions for future work including better architecture design for utilizing structured meta-data and replacing the two-stage framework with a multi-task generation model that can jointly identify helpful context for the task and perform corresponding text generation."
] | [
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"method",
"objective",
"method",
"abstain"
] |
[
"Semantic parsing using sequence-to-sequence models allows parsing of deeper representations compared to traditional word tagging based models.",
"In spite of these advantages, widespread adoption of these models for real-time conversational use cases has been stymied by higher compute requirements and thus higher latency.",
"In this work, we propose a non-autoregressive approach to predict semantic parse trees with an efficient seq2seq model architecture.",
"By combining non-autoregressive prediction with convolutional neural networks, we achieve significant latency gains and parameter size reduction compared to traditional RNN models.",
"Our novel architecture achieves up to an 81% reduction in latency on TOP dataset and retains competitive performance to non-pretrained models on three different semantic parsing datasets.",
"Our code is available at https://github.",
"com/facebookresearch/pytext .",
"Advances in conversational assistants have helped to improve the usability of smart speakers and consumer wearables for different tasks.",
"Semantic parsing is one of the fundamental components of these assistants and it helps to convert the user input in natural language to a structure representation that can be understood by downstream systems.",
"Majority of the semantic parsing systems deployed on various devices, rely on server-side inference because of the lower compute/memory available on these edge devices.",
"This poses a few drawbacks such as flaky user experience with spotty internet connectivity and compromised user data privacy due to the dependence on a centralized server to which all user interactions are sent to.",
"Thus, semantic parsing on-device has numerous advantages.",
"For the semantic parsing task, the meaning representation used decides the capabilities of the system built.",
"Limitations of the representation with one intent and slot labels were studied in the context of nested queries and multi turn utterances in Aghajanyan et al. (2020) and Gupta et al. (2018).",
"New representations were proposed to overcome these limitations and sequence-to-sequence models were proposed as the solution to model these complex forms.",
"But using these new models in real-time conversational assistants still remains a challenge due to higher latency requirements.",
"In our work, we propose a novel architecture and generation scheme to significantly improve the end2end latency of sequence-to-sequence models for the semantic parsing task.",
"Due to the autoregressive nature of generation in sequence-to-sequence semantic parsing models, the recurrence relationship between target tokens creates a limitation that decoding cannot be parallelized.",
"There are multiple works in machine translation which try to solve this problem.",
"These approaches relax the decoder token-by-token generation by allowing multiple target tokens to be generated at once.",
"Fully non-autoregressive models (Gu et al., 2017; Ma et al., 2019; Ghazvininejad et al., 2020a; Saharia et al., 2020) and conditional masked language models with iterative decoding (Ghazvinine-jad et al., 2019; Gu et al., 2019; Ghazvininejad et al., 2020b) are some of them.",
"To enable non-autoregressive generation in semantic parsing, we modify the objective of the standard seq2seq model to predict the entire target structure at once.",
"We build upon the CMLM (Con-ditional Masked Language Model) (Ghazvininejad et al., 2019) and condition the generation of the full target structure on the encoder representation.",
"By eliminating the recurrent relationship between individual target tokens, the decoding process can be parallelized.",
"While this drastically improves latency, the representation of each token is still dependent on previous tokens if we continue to use an RNN architecture.",
"Thus, we propose a novel model architecture for semantic parsing based on convolutional networks (Wu et al., 2019b) to solve this issue.",
"Our non-autoregressive model achieves up to an 81% reduction in latency on the TOP dataset (Gupta et al., 2018), while achieving 80.23% exact match accuracy.",
"We also achieve 88.16% exact match accuracy on DSTC2 (Henderson et al., 2014) and 80.86% on SNIPS (Coucke et al., 2018) which is competitive to prior work without pretraining.",
"To summarize, our two main contributions are: We propose a novel alternative to the traditional autoregressive generation scheme for semantic parsing using sequence-to-sequence models.",
"With a new model training strategy and generation approach, the semantic parse structure is predicted in one step improving parallelization and thus leading to significant reduction in model latency with minimal accuracy impact.",
"We also study the limitations of original CMLM (Ghazvininejad et al., 2019) when applied for conversational semantic parsing task and provide motivations for our simple yet critical modifications.",
"We propose LightConv Pointer, a model architecture for non-autoregressive semantic parsing, using convolutional neural networks which provides significant latency and model size improvements over RNN models.",
"Our novel model architecture is particularly suitable for limited compute use-cases like on-device conversational assistants.",
"In this section, we propose a novel, convolutional, non-autoregressive architecture for semantic parsing.",
"While non-autoregressive decoding has been previously explored in machine translation, we describe how it can be applied to semantic parsing with several critical modifications to retain performance.",
"We then describe our convolutional architecture.",
"By incorporating these advances together, our approach achieves both high accuracy and efficient decoding.",
"The task is to predict the semantic parse tree given the raw text.",
"We use the decoupled representation (Aghajanyan et al., 2020), an extension of the compositional form proposed in Gupta et al. (2018) for task oriented semantic parsing.",
"Decoupled representation is obtained by removing all text in the compositional form that does not appear in a leaf slot.",
"Efficient models require representations which are compact, with least number of tokens, to reduce number of floating point operations during inference.",
"Decoupled representation was found to be suitable due to this.",
"Figure 1 shows the semantic parse for a sample utterance.",
"Our model predicts the serialized representation of this tree which is [IN:CREATE_REMINDER [SL:PERSON_REMINDED me ] [SL:TODO [IN:CREATE_CALL [SL:METHOD call ] [SL:CONTACT John ] ] ] ] 2.1 Non-Autoregressive Decoding While autoregressive models (Figure 2), which predict a sequence token by token, have achieved strong results in various tasks including semantic parsing, they have a large downside.",
"The main challenge in practical applications is the slow decoding time.",
"We investigate how to incorporate recent advances in non-autoregressive decoding for efficient semantic parsing models.",
"We build upon the Conditional Masked Language Model (CMLM) proposed in Ghazvininejad et al. (2019) by applying it to the structured prediction task of semantic parsing for task-oriented dialog.",
"Ghazvininejad et al. (2019) uses CMLM to first predict a token-level representation for each source token and a target sequence length; then the model predicts and iterates on the target sequence prediction in a non-autoregressive fashion.",
"We describe our changes and the motivations for these changes below.",
"One of the main differences between our work and Ghazvininejad et al. (2019) is that target length prediction plays a more important role in semantic parsing.",
"For the translation task, if the target length is off by one or more, the model can slightly rephrase the sentence to still return a high quality translation.",
"In our case, if the length prediction is Figure 2: Traditional Sequence to Sequence architecture which uses autoregressive generation scheme for decoder.",
"To resolve this important challenge, we propose a specialized length prediction module that more accurately predicts the target sequence length.",
"While Ghazvininejad et al. (2019) uses a special CLS token in the source sequence to predict the target length, we have a separate module of multiple layers of CNNs with gated linear units to predict the target sequence length (Wu et al., 2019b).",
"We also use label smoothing and differently weighing losses as explained in section 2.3, to avoid the easy over-fitting in semantic parsing compared to translation.",
"As shown in Aghajanyan et al. (2020), transformers without pre-training perform poorly on TOP dataset.",
"The architectural changes that we propose to solve the data efficiency can be found in the section 2.2.1.",
"Further, we find that the random masking strategy proposed in Ghazvininejad et al. (2019) works poorly for semantic parsing.",
"When we use the same strategy for the semantic parsing task where the output has a structure, model is highly likely to see invalid trees during training as masking random tokens in the linearized representation of a tree mostly gives invalid tree representations.",
"This makes it hard for the model to learn the structure especially when the structure is complicated (in the case of trees, deep trees were harder to learn).",
"To remedy this problem, we propose a different strategy for model training where all the tokens in the target sequence are masked during training.",
"Our model architecture (Figure 3) is based on the classical seq2seq model (Sutskever et al., 2014) and follows the encoder-decoder architecture.",
"In order to optimize for efficient encoding and decoding, we look to leverage a fully parallel model architecture.",
"While transformer models are fully parallel and popular in machine translation (Vaswani et al., 2017), they are known to perform poorly in low resource settings and require careful tuning using techniques like Neural Architecture Search to get good performance (van Biljon et al., 2020; Murray et al., 2019).",
"Similarly, randomly initialized transformers performed poorly on TOP dataset achieving only 64.5 % accuracy when SOTA was above 80% (Aghajanyan et al., 2020).",
"We overcome this limitation by augmenting Transformers with Convolutional Neural Networks.",
"Details of our architecture are explained below.",
"For token representations, we use word embeddings concatenated with the sinusoidal positional embeddings (Vaswani et al., 2017).",
"Encoder and decoder consist of multiple layers with residual connections as shown in Figure 4.",
"First sub-block in each layer consists of MHA (Vaswani et al., 2017).",
"In decoder, we do not do masking of future tokens during model training.",
"This is needed for non-autoregressive generation of target tokens during inference.",
"Second sub-block consists of multiple convolutional layers.",
"We use depthwise convolutions with weight sharing (Wu et al., 2019b).",
"Convolution layer helps in learning representation for tokens for a fixed context size and multiple layers helps with bigger receptive fields.",
"We use non-causal convolutions for both encoder as well as decoder.",
"Third sub-block is the FFN (Vaswani et al., 2017; Wu et al., 2019b) which consists of two linear layers and relu.",
"The decoder has source-target attention after the convolution layer.",
"Pointer-Generator Projection layer The decoder has a final projection layer which generates the target tokens from the decoder/encoder representations.",
"Rongali et al. (2020) proposes an idea based Pointer Generator Network (See et al., 2017) to convert the decoder representation to target tokens using the encoder output.",
"Similarly, we use a pointer based projection head, which decides whether to copy tokens from the source-sequence or generate from the pre-defined ontology at every Figure 3: Sequence to Sequence model architecture which uses Non-Autoregressive strategy for generation decoding step (Aghajanyan et al., 2020).",
"Length Prediction Module Length prediction Module receives token level representations from the encoder as input.",
"It uses stacked CNNs with gated linear units and mean pooling to generation the length prediction.",
"Suppose the source sequence is of length L and source tokens in the raw text are s 1 , s 2 , s 3 . . . s L .",
"Encoder generates a representation of for each token in the source sequence.",
"Using the predicted length T, we create a target sequence of length T consisting of identical MASK tokens.",
"This sequence is passed through possibly multiple decoder layers and generates a representation for each token in the masked target sequence.",
"We make a strong assumption that each token in the target sentence is conditionally independent of each other given the source and the target length.",
"Thus, the individual probabilities for each token is P ( y i | X, T ) where X is the input sequence and T is the length of target sequence.",
"Beam Search During inference, length prediction module explained in 2.2.1 predicts top k lengths.",
"For each predicted length, we create a decoder input sequence of all masked tokens.",
"This is similar to the beam search with beam size k in autoregressive systems.",
"The main difference in our model architecture is that we expect only one candidate for each predicted length.",
"These all masked sequences are given as input to the model and the model predicts target tokens for each masked token.",
"Once we have predicted target sequences for k different lengths, they are ranked based on the ranking algorithm described in (5), where X is the input sequence and Y is the predicted output sequence, note the predicted token y i is conditioned on both the sequence ( X ) and the predicted target length T .",
"During training, we jointly optimize for two weighted losses.",
"The first loss is calculated for the predicted target tokens against the real target and the second loss is calculated for predicted target length against real target length.",
"During forward-pass, we replace all the tokens in the target sequence with a special <MASK> token and give this as an input to the decoder.",
"Decoder predicts the token for each masked token and the cross-entropy loss is calculated for each predicted token.",
"The length prediction module in the model predicts the target length using the encoder representation.",
"Similar to CMLMs in (Ghazvininejad et al., 2019), length prediction is modeled as a classifica-tion task with class labels for each possible length.",
"Cross entropy loss is calculated for length prediction.",
"For our semantic parsing task, label smoothing (Szegedy et al., 2015) was found to be very critical as the length prediction module tends to easily overfit and strong regularization methods are needed.",
"This was because length prediction was a much well-defined task compared to predicting all the tokens in the sequence.",
"Total loss was calculated by taking a weighted sum of cross entropy loss for labels and length, with lower weight for length loss.",
"As training progresses through different epochs, the best model is picked by comparing the exact match (EM) accuracy of different snapshots on validation set.",
"We use 3 datasets across various domains to evaluate our semantic parsing approach.",
"Length distribution of each dataset is described using median, 90th percentile and 99th percentile lengths.",
"TOP Dataset Task Oriented Parsing (Gupta et al., 2018) is a dataset for compositional utterances in the navigation and events domains.",
"The training set consists of 31 , 279 instances and the test set consists of 9 , 042 .",
"The test set has a median target length of 15, P90 27 and P99 39.",
"SNIPS The SNIPS (Coucke et al., 2018) dataset is a public dataset used for benchmarking semantic parsing intent slot models.",
"This dataset is considered flat, since it does not contain compositional queries and can be solved with word-tagging models.",
"Recently, however seq2seq models have started to out perform word-tagging models (Rongali et al., 2020; Aghajanyan et al., 2020).",
"The training set consists of 13 , 084 instances, the test set consists of 700 instances.",
"The test set has a median target length of 11, P90 17, P99 21.",
"DSTC2 Dialogue State Tracking Challenge 2 (Henderson et al., 2014), is a dataset for conversational understanding.",
"The dataset involves users searching for restaurants, by specifying constraints such as cuisine type and price range, we encode these constraints as slots and use this to formulate the decoupled representation.",
"The training set consists of 12 , 611 instances and a test set of 9890 .",
"The test set has a median target length of 6, P90 9 and P99 10.",
"Semantic Parsing Performance For all our datasets, we convert the representation of either the compositional form or flat intent slot form to the decoupled representation (Aghajanyan et al., 2020) .",
"We compare the model prediction with the serialized structure representation and look for exact match (EM).",
"Benchmarking Latency For the latency analysis for the models trained from scratch: AR LightConv Pointer, NAR LightConv Pointer, and BiLSTM.",
"We chose these 3 architectures, to compare NAR vs AR variants of LightConv Pointer, as well as the best performant baseline: Pointer BiLSTM (Aghajanyan et al., 2020).",
"We use Samsung Galaxy S8 with Android OS and Octa-core processor.",
"We chose to benchmark latency to be consistent with prior work on on-device modeling (Wu et al., 2019a; Howard et al., 2019).",
"All models are trained in PyTorch (Paszke et al., 2019) and exported using Torchscript.",
"We measure wall clock time as it is preferred instead of other options because it relates more to real world inference.",
"1 Latency results can be found in section 4.2.",
"For each of our datasets, we report accuracy metrics on the following models:",
"NAR LightConv Pointer : A non-autoregressive (NAR) variant of the above model to allow for parallel decoding.",
"We compare against the best reported numbers across datasets where the models don't use pretraining.",
"During training of our model we use the same base model across all datasets and sweep over hyper parameters for the length module and the batch size and learning rate, an equivalent sweep was done for the AR variant as well.",
"The base model we use for NAR LightConv Pointer model uses 5 encoder layers with convolutional kernel sizes [3,7,15,21,27], where each encoder layer has embedding and convolutional dimensions of 160, 1 self attenion head, and 2 decoder layers with kernel sizes [7,27], and embedding dimension of 160, 1 self-attention head and 2 encoder-attention heads.",
"Our length prediction module leverages a two convolution layers of 512 embedding dimensions and kernel sizes of 3 and 9.",
"and uses hidden dimension in [128,256,512] determined by hyper parameter sweeps.",
"We also use 8 attention heads for the decoupled projection head.",
"For the convolutional layer, we use lightweight convolutions (Wu et al., 2019b) with number of heads set to 2.",
"We train with the Adam (Kingma and Ba, 2014) optimizer, learning rate is selected to be between [0.00007, 0.0004].",
"If our evaluation accuracy has not increased in 10 epochs, we also reduce our learning rate by a factor of 10, and we employ early stopping if the accuracy has not changed in 20 epochs.",
"We train with our batch size fixed to be 8.",
"We optimize a joint loss for label prediction and length prediction.",
"Both losses consist of label smoothed cross entropy loss ( is the weight of the uniform distribution) (Pereyra et al., 2017), our label loss has = 0 .",
"1 and our length loss has = 0 .",
"5 , we also weight our length loss lower, = 0 .",
"25 .",
"For inference, we use a length beam size of k = 5 .",
"Our AR variant follows the same parameters however it does not have length prediction and self-attention in encoder and decoder.",
"We show that our proposed non-autoregressive convolutional architecture for semantic parsing is competitive with auto-regressive baselines and word tagging baselines without pre-training on three different benchmarks and reduces latency up to 81% on the TOP dataset.",
"We first compare accuracy and latency, then discuss model performance by analyzing errors by length, and the importance of knowledge distillation.",
"We do our analysis on the TOP dataset, due to its inherent compositional nature, however we expect our analysis to hold for other datasets as well.",
"Non-compositional datasets like DSTC2 and SNIPS can be modeled by word tagging models making seq2seq models more relevant in the case of compositional datasets.",
"In table 5a we show our NAR and AR variants for LightConv Pointer perform quite similarly across all datasets.",
"We can see that our proposed NAR LightConv Pointer is also competitive with state of the art models without pre-training: -0.66% TOP, -0.17% DSTC2, -4.57% SNIPS (-0.04% compared to word tagging models).",
"Following the prior work on Non-Autoregressive models, we also report our experiments with sequence-level knowledge distillation in subsection Knowledge Distillation under section.",
"4.3.",
"In figure 5b we show the latency of our model with different generation approaches (NAR vs AR) over increasing target sequence lengths on the TOP dataset.",
"Firstly, we show that our LightConv Pointer is significantly faster than the BiLSTM baseline (Aghajanyan et al., 2020), achieving up to a 54% reduction in median latency.",
"BiLSTM was used as baseline as that was the SOTA without pretraining for TOP and Transformers performed poorly.",
"By comparing our model with AR and NAR generation strategy, it can be seen that increase in latency with increase in target length is much smaller for NAR due to better parallelization of decoder, resulting in up to an 81% reduction in Length Bucket NAR (%) AR (%) Bucket Size < 10 82.80 83.13 2798 10-20 84.18 84.36 5167 20-30 62.50 65.72 992 30-40 21.25 41.25 80 > 40 0.00 20.00 5 Table 2: EM accuracy of the NAR LightConv Pointer (distilled) vs AR LightConv Pointer distilled across different target length buckets along with the number of instances in each bucket on the TOP dataset.",
"median latency compared to the BiLSTM model.",
"Also note that both the LightConv Pointer models are able to achieve parity in terms of EM Accuracy compared to the baseline BiLSTM model, while using many fewer parameters, the BiLSTM model uses 20M parameters, while the NAR LightConv Pointer uses 12M and the AR LightConv Pointer uses 10M.",
"Ablation experiments We compare the modifications proposed by this work (LightConv, Conv length prediction module and Mask everything strategy) with the original model proposed in Ghazvininejad et al. (2019) in table 1.",
"The motivations for each modification was already discussed in sub-section 2.1.",
"Our mean EM accuracy results based on 3 trials show the significance of techniques proposed in this paper especially for longer target sequences.",
"Errors by length It is known that non-autoregressive models have difficulty at larger sequence lengths (Ghazvininejad et al., 2019).",
"In table 2, we show our model's accuracy in each respective length bucket on the TOP dataset.",
"We see that the AR and NAR model follow a similar distribution of errors, however the NAR model seems to error at a higher rate for the longer lengths.",
"Knowledge Distillation Following prior work (Ghazvininejad et al., 2019; Zhou et al., 2020), we train our model with sequence-level knowledge distillation (Kim and Rush, 2016).",
"We train our system on data generated by the current SOTA autoregressive models BART (Lewis et al., 2019; Aghajanyan et al., 2020).",
"In table 3 we show the impact of knowledge distillation in our task on both the non-autoregressive and autoregressive variants of LightConv Pointer.",
"These results support prior work in machine translation for distillation of au-Figure 6: Distilled NAR LightConv Pointer Top-K accuracy for exact match (EM) accuracy (blue) and Top-K length accuracy (orange), as well as the EM accuracy with gold length (dotted red line) for the TOP dataset.",
"toregressive teachers to non-autoregressive models showing distillation improving our models on TOP and SNIPS, however we notice minimal changes on DSTC2.",
"The importance of length prediction An important part of our non-autoregressive model is length prediction.",
"In figure 6, we report exact match accuracy @ top k beams and length accuracy @ top k beams (where top K refers to whether the correct answer was in the top K predictions) for the TOP dataset.",
"We can see a tight correlation between our length accuracy and exact match accuracy, showing how our model is bottle necked by the length prediction.",
"Providing gold length as a feature, led to an exact match accuracy of 88.20% (shown in red on figure 6), an absolute 7.31 point improvement over our best result with our non-autoregressive LightConv Pointer.",
"Non-autoregressive Decoding Recent work in machine translation has made a lot of progress in fully non-autoregressive models (Gu et al., 2017; Ma et al., 2019; Ghazvininejad et al., 2020a; Saharia et al., 2020) and parallel decoding (Lee et al., 2018; Ghazvininejad et al., 2019; Gu et al., 2019; Ghazvininejad et al., 2020b; Kasai et al., 2020).",
"While many advancements have been made in machine translation, we believe we are the first to explore the non-autoregressive semantic parsing setting.",
"In our work, we extend the CMLM to work for semantic parsing.",
"We make two important adjustments: first, we use a different masking approach where we mask everything and do one-step generation.",
"Second, we note the importance of the length prediction task for parsing and improve the length prediction module in the CMLM.",
"Seq2Seq For Semantic Parsing Recent advances in language understanding have lead to increased reliance on seq2seq architectures.",
"Recent work by Rongali et al. 2020; Aghajanyan et al. 2020, showed the advantages from using a pointer generator architecture for resolving complex queries (e.g. composition and cross domain queries) that could not be handled by word tagging models.",
"Since we target the same task, we adapt their pointer decoder into our proposed architecture.",
"However, to optimize for latency and compression we train CNN based architectures (Desai et al. 2020 and Wu et al. 2019b) to leverage the inherent model parallelism compared to the BiLSTM model proposed in Aghajanyan et al. 2020 and more compression compared to the transformer seq2seq baseline proposed in Rongali et al. 2020.",
"To further improve latency we look at parallel decoding through non-autoregressive decoding compared to prior work leveraging autoregressive models.",
"This work introduces a novel alternative to autoregressive decoding and efficient encoder-decoder architecture for semantic parsing.",
"We show that in 3 semantic parsing datasets, we are able to speed up decoding significantly while minimizing accuracy regression.",
"Our model is able to generate parse trees competitive with state of the art autoregressive models with significant latency savings, allowing complex NLU systems to be delivered on edge devices.",
"There are a couple of limitations of our proposed model that naturally extend themselves to future work.",
"Primarily, we cannot support true beam decoding, we decode a single prediction for each length prediction however there may exist multiple beams for each length prediction.",
"Also for longer parse trees and more complex semantic parsing systems such as session based understanding, our NAR decoding scheme could benefit from multiple iterations.",
"Lastly, though we explored models without pre-training in this work, recent developments show the power of leveraging pre-trained models such as RoBERTa and BART.",
"We leave it to future work to extend our non-autoregressive decoding for pre-trained models.",
"We would like to thank Sandeep Subramanian (MILA), Karthik Prasad (Facebook AI), Arash Einolghozati (Facebook) and Yinhan Liu for the",
"helpful discussions.",
"References Armen Aghajanyan, Jean Maillard, Akshat Shrivastava, Keith Diedrick, Michael Haeger, Haoran Li, Yashar Mehdad, Veselin Stoyanov, Anuj Kumar, Mike Lewis, and Sonal Gupta.",
"2020.",
"Conversational semantic parsing.",
"In EMNLP/IJCNLP .",
"Alice Coucke, Alaa Saade, Adrien Ball, Thodore Bluche, Alexandre Caulier, David Leroy, Clment Doumouro, Thibault Gisselbrecht, Francesco Calta-girone, Thibaut Lavril, et al. 2018.",
"Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces.",
"arXiv preprint arXiv:1805.10190 .",
"Shrey Desai, Geoffrey Goh, Arun Babu, and Ahmed Aly.",
"2020.",
"Lightweight convolutional representations for on-device natural language processing.",
"arXiv preprint arXiv:2002.01535 .",
"Arash Einolghozati, Panupong Pasupat, Sonal Gupta, Rushin Shah, Mrinal Mohit, Mike Lewis, and Luke Zettlemoyer.",
"2018.",
"Improving semantic parsing for task oriented dialog.",
"In Conversational AI Workshop at NeurIPS 2018 .",
"Marjan Ghazvininejad, Vladimir Karpukhin, Luke Zettlemoyer, and Omer Levy.",
"2020a.",
"Aligned cross entropy for non-autoregressive machine translation.",
"arXiv preprint arXiv:2004.01655 .",
"Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer.",
"2019.",
"Mask-predict: Parallel decoding of conditional masked language models.",
"Marjan Ghazvininejad, Omer Levy, and Luke Zettle-moyer.",
"2020b.",
"Semi-autoregressive training improves mask-predict decoding.",
"arXiv preprint arXiv:2001.08785 .",
"Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and Yun-Nung Chen.",
"2018.",
"Slot-gated modeling for joint slot filling and intent prediction.",
"In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers) , pages 753757.",
"Victor OK Li, and Richard Socher.",
"2017.",
"Non-autoregressive neural machine translation.",
"arXiv preprint arXiv:1711.02281 .",
"Jiatao Gu, Changhan Wang, and Junbo Zhao.",
"Levenshtein transformer.",
"In Advances in Neural Information Processing Systems , pages 1117911189.",
"Jungo Kasai, James Cross, Marjan Ghazvininejad, and Jiatao Gu.",
"2020.",
"Parallel machine translation with disentangled context transformer.",
"arXiv preprint arXiv:2001.05136 .",
"Yoon Kim and Alexander M Rush.",
"2016.",
"Sequence-level knowledge distillation.",
"arXiv preprint arXiv:1606.07947 .",
"Jason D. Lee, Elman Mansimov, and Kyunghyun Cho.",
"2018.",
"Deterministic non-autoregressive neural sequence modeling by iterative refinement.",
"In Proc.",
"of EMNLP .",
"Xuezhe Ma, Chunting Zhou, Xian Li, Graham Neu-big, and Eduard Hovy.",
"2019.",
"Flowseq: Non-autoregressive conditional sequence generation with generative flow.",
"arXiv preprint arXiv:1909.02480 .",
"Sonal Gupta, Rushin Shah, Mrinal Mohit, Anuj Kumar, and Mike Lewis.",
"2018.",
"Semantic parsing for task oriented dialog using hierarchical representations.",
"In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing , pages 27872792, Brussels, Belgium.",
"Association for Computational Linguistics.",
"Dilek Hakkani-Tr, Gkhan Tr, Asli Celikyilmaz, Yun-Nung Chen, Jianfeng Gao, Li Deng, and Ye-Yi Wang.",
"2016.",
"Multi-domain joint semantic frame parsing using bi-directional rnn-lstm.",
"In Interspeech , pages 715719.",
"Matthew Henderson, Blaise Thomson, and Jason D. Williams.",
"2014.",
"The second dialog state tracking challenge.",
"In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL) , pages 263272, Philadelphia, PA, U.S.A. Association for Computational Linguistics.",
"Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, et al. 2019.",
"Searching for mobilenetv3.",
"In Proceedings of the IEEE/CVF International Conference on Computer Vision , pages 13141324.",
"Diederik P Kingma and Jimmy Ba.",
"2014.",
"Adam: A method for stochastic optimization.",
"arXiv preprint arXiv:1412.6980 .",
"Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer.",
"2019.",
"Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension.",
"Kenton Murray, Jeffery Kinnison, Toan Q. Nguyen, Walter Scheirer, and David Chiang.",
"2019.",
"Auto-sizing the transformer network: Improving speed, efficiency, and performance for low-resource machine translation.",
"In Proceedings of the 3rd Workshop on Neural Generation and Translation , pages 231240, Hong Kong.",
"Association for Computational Linguistics.",
"Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te-jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala.",
"2019.",
"Pytorch: An imperative style, high-performance deep learning library.",
"In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32 , pages 80248035.",
"Curran Associates, Inc.",
"Gabriel Pereyra, George Tucker, Jan Chorowski, ukasz Kaiser, and Geoffrey Hinton.",
"2017.",
"Regularizing neural networks by penalizing confident output distributions.",
"arXiv preprint arXiv:1701.06548 .",
"Subendhu Rongali, Luca Soldaini, Emilio Monti, and Wael Hamza.",
"2020.",
"Don't parse, generate! a sequence to sequence architecture for task-oriented semantic parsing.",
"arXiv preprint arXiv:2001.11458 .",
"Chitwan Saharia, William Chan, Saurabh Saxena, and Mohammad Norouzi.",
"2020.",
"Non-autoregressive machine translation with latent alignments.",
"arXiv preprint arXiv:2004.07437 .",
"Abigail See, Peter J. Liu, and Christopher D. Manning.",
"2017.",
"Get to the point: Summarization with pointer-generator networks.",
"In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 1073 1083, Vancouver, Canada.",
"Association for Computational Linguistics.",
"Ilya Sutskever, Oriol Vinyals, and Quoc V. Le.",
"2014.",
"Sequence to sequence learning with neural networks.",
"Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna.",
"2015.",
"Rethinking the inception architecture for computer vision.",
"Elan van Biljon, Arnu Pretorius, and Julia Kreutzer.",
"2020.",
"On optimal transformer depth for low-resource language translation.",
"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, ukasz Kaiser, and Illia Polosukhin.",
"2017.",
"Attention is all you need.",
"In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30 , pages 59986008.",
"Curran Associates, Inc.",
"Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter Vajda, Yangqing Jia, and Kurt Keutzer.",
"2019a.",
"Fb-net: Hardware-aware efficient convnet design via differentiable neural architecture search.",
"In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 1073410742.",
"Victor Zhong, Caiming Xiong, and Richard Socher.",
"2018.",
"Global-locally self-attentive encoder for dialogue state tracking.",
"In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 1458 1467, Melbourne, Australia.",
"Association for Computational Linguistics.",
"Chunting Zhou, Jiatao Gu, and Graham Neubig.",
"2020.",
"Understanding knowledge distillation in non-autoregressive machine translation.",
"In International Conference on Learning Representations ."
] | [
"abstain",
"abstain",
"objective",
"result",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"result",
"abstain",
"result",
"result",
"objective",
"objective",
"abstain",
"objective",
"objective",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"objective",
"objective",
"abstain",
"other",
"other",
"objective",
"abstain",
"abstain",
"objective",
"result",
"result",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"Rationale-Centric Framework for Human-in-the-loop Machine Learning",
"{yanglinyi, zhangyue}@westlake.edu.cn Abstract",
"We present a novel rationale-centric framework with human-in-the-loop R ationales-centric D ouble-robustness L earning (RDL) to boost model out-of-distribution performance in few-shot learning scenarios.",
"By using static semi-factual generation and dynamic human-intervened correction, RDL exploits rationales (i.e. phrases that cause the prediction), human interventions and semi-factual augmentations to decouple spurious associations and bias models towards generally applicable underlying distributions, which enables fast and accurate generalisation.",
"Experimental results show that RDL leads to significant prediction benefits on both in-distribution and out-of-distribution tests compared to many state-of-the-art benchmarksespecially for few-shot learning scenarios.",
"We also perform extensive ablation studies to support in-depth analyses of each component in our framework.",
"Recent work finds that natural artefacts (Guru-rangan et al., 2018) or spurious patterns (Keith et al., 2020; Srivastava et al., 2020) in datasets can cause sub-optimal model performance for neural networks.",
"As shown in Figure 1, the bold phrases 100% bad and brain cell killing are underlying causes for a negative sentiment prediction that most human readers would recognise.",
"These are defined as rationales in this paper.",
"The underlined phraseacting and plot has been incorrectly recognised as a causal term by the model used fort this example, and is referred to as a spurious pattern .",
"Spurious patterns (or associations) are caused by natural artefacts or biases in training data (Lertvittayakumjorn and Toni, 2021), and are usually useless, or even harmful, at test time.",
"This issue can be severe in few-shot learning (FSL) * These authors contributed equally to this work.",
"scenarios.",
"For instance, Kulesza et al. (2010) suggests that when a model is trained with a small subset of labelled data, it is prone to exploiting spurious patterns leading to poor generalisability that is evident in the performance decay in out-of-distribution (OOD) datasets.",
"In spite of these issues, training deep neural networks using few labelled examples is a compelling scenario since unlabelled data may be abundant but labelled data is expensive to obtain in real-world applications (Lu and MacNamee, 2020; Lu et al., 2021).",
"There is a strand of research addressing this scenario that seeks to improve model performance by introducing methods and resources for training models less sensitive to spurious patterns",
"(Kaushik et al., 2020).",
"Most of this work relies on generating counterfactual augmented data",
"(CAD), either manually",
"(Kaushik et al., 2021)",
"or automatically",
"(Feng et al., 2021; Qian et al., 2021; Yang et al., 2021, 2020a; Delaney et al., 2021).",
"For example, Kaushik et al.",
"(2020)",
"proposed a human-in-the-loop framework where human annotators are required to make minimal changes to original movie reviews to produce sentiment-flipped counterfactual reviews, which enables models to learn useful associations between input texts and output labels",
"(Kaushik et al., 2021).",
"Generating manual counterfactuals, however, is expensive and time-consumingKaushik et al.",
"(2020)",
"report the cost of revising 2 .",
"5 k instances at over $10,000.",
"On the other hand, fully automatic methods are task-specific and therefore have weak robustness across domains and less reliabil-6986 Semi-factual Generation",
"ity compared to manual counterfactuals.",
"To address these issues, we propose R ationales-centric D ouble-robustness L earning",
"(RDL), a human-in-the-loop framework for data augmentation in a few-shot setting, which is efficient, robust, model-agnostic, and general across tasks.",
"Our main idea is a rationale-centric strategy for eliminating the effect of spurious patterns by leveraging human knowledge as shown in Figure",
"2. Our double-robustness framework consists of two main modules.",
"The first is a Static Semi-factual Generation module that generates a set of semifactual data automatically for a given instance by using human-identified rationales.",
"Such labelling requires less human input compared to fully manual counterfactual generation",
"(see Section 3.1).",
"In contrast with counterfactuals",
"(Roese, 1997)",
"that rely on what might have been different",
"(i.e. the label would be changed if certain terms have been changed), semi-factuals",
"(McCloy and Byrne, 2002; Kenny and Keane, 2021), as used in our work, aim to guide a model to identify terms less causally related to the label",
"(i.e. even if certain terms had been changed, the label would be kept the same).",
"Second, we apply a Dynamic Human-intervened Correction module , where the most salient features are identified for model predictions over a set of training examples, and human workers intervene by checking the correctness of the rationale in case first-round modifications introduce new artefacts.",
"We evaluate the two modules in a few-shot setting, where a minimum number of training instances are labeled for maximum generalisation power, both for in-distribution and OOD predictions.",
"also used in Kaushik et al.",
"(2020), demonstrate that the double-robust models can be less sensitive to spurious patterns.",
"In particular, models trained with RDL with only 50 labelled examples achieve the same or even better results than fully-supervised training with a full training set of 1,707 examples, and improvements are especially significant for OOD tests.",
"The predictive model trained with RDL using only 100 labelled examples outperforms models trained with manual",
"(Kaushik et al., 2020)",
"and automatic CAD",
"(Yang et al., 2021)",
"using the full augmented training set of 3,414 examples.",
"To the best of our knowledge, we are the first to exploit the efficacy of semi-factuals and human-intervention for improving the generalisation abilities of deep neural networks in few-shot learning scenarios.",
"* 2 Related Work Data augmentation has been used for resolving artefacts in training datasets before",
"(Gururangan et al., 2018; Srivastava et al., 2020; Kaushik et al., 2021).",
"In particular, previous work",
"(Kaushik et al., 2020)",
"relied on large-scale crowd-sourcing to generate useful augmented data.",
"More recently, Yang et al.",
"(2021), and Wang and Culotta",
"(2021)",
"investigated the efficacy of the automatically generated counterfactuals for sentiment analysis.",
"Similar to our work, these methods also consider the most salient features that a model uses when generating augmented data, which is in line with our rationale definition.",
"However, they use sentiment lexicon matching for identifying rationales, which is task-specific and not necessarily fully relevant.",
"In contrast, we employ human annotators to identify rationales, which can be task-agnostic and robust.",
"Moreover, our method generates semi-factuals instead of counterfactuals used in previous work.",
"Human-the-loop Machine Learning",
"(Wu et al., 2021)",
"has received increasing research attention.",
"Active learning",
"(Settles, 2009; Margatina et al., 2021), the most common example of human-in-the-loop machine learning, asks human annotators only to provide high-level annotations",
"(i.e. labels)",
"for important examples.",
"There is also some work exploring more explainable AI systems by exploiting feature-based information.",
"Such methods use relatively simple models such as Nave Bayes",
"(Stumpf * All resources are available at https://github.com/GeorgeLuImmortal/RDL-Rationales-centric-Double-robustness-Learning/ 6987 et al., 2009; Kulesza et al., 2015)",
"and Linear Regression with bag-of-words features",
"(Jia and Liang, 2017; Teso and Kersting, 2019; Ghai et al., 2021; Shao et al., 2021), because these classifiers are relatively intuitive in generating explanations and amenable to incorporating human feedback.",
"Some other work uses simple neural networks such as multi-layer perceptrons",
"(Shao et al., 2021)",
"and shallow CNNs",
"(Lertvittayakumjorn et al., 2020; Stammer et al., 2021; Teso et al., 2021)",
"because the predictions of such models can be explained in the form of features.",
"Very recently, Yao et al.",
"(2021)",
"proposed a human-in-the-loop method to inspect more complicated models",
"(e.g. BERT)",
"with the help of model-agnostic post-hoc explanation algorithms",
"(Ribeiro et al., 2018)",
"that can explain predictions of any linear or non-linear model without exploiting its weights.",
"However, previous work focuses on increasing the explainability of AI systems for high-stakes domains such as health and finance",
"(Li et al., 2020; Yang et al., 2020b), instead of improving model robustness or generalisation ability.",
"Also, they assume access to a large amount of labelled data.",
"In contrast, we focus on few-shot learning scenarios which are more compelling.",
"The RDL pipeline is shown in Figure 2 and consists of two modules: Static Semi-factual Generation and Dynamic Human-intervened Correction .",
"Static semi-factual generation is a more efficient alternative to manually generated counterfactuals",
"(Kaushik et al., 2020).",
"In the first phase, Rationale Marking",
"(Section 3.1), human annotators review each document in the training set to provide rationales",
"(i.e. phrases that support the document classification decisions shown as bold text in Figure 2).",
"The second phase is a semi-factual generation method based on synonym replacement",
"(Section 3.2)",
"that produces augmented examples",
"(blue text in Figure 2 indicates replaced words), which are added into the training set.",
"Dynamic human-intervened correction",
"(Section 3.3)",
"is a rationales-powered human-in-the-loop framework to dynamically correct the model's behaviours.",
"At the outset, sampling and sensitivity of contextual decomposition",
"(SCD)",
"(Jin et al., 2019)",
"is applied to detect the rationales given by the model that is obtained in the previous step.",
"Then, all model-identified rationales",
"(underlined texts in Figure 2)",
"are examined by human annotators to identify false rationales",
"(i.e. words or phrases that do not support the classifications but are falsely included by the model)",
"and missing rationales",
"(i.e. words or phrases that support the classifications but are not included by the model).",
"Both false rationales and missing rationales are corrected to produce augmented examples.",
"Finally, newly generated examples are added into the training set to re-train the deep learning model.",
"Following Kaushik et al.",
"(2020)",
"and Yang et al.",
"(2021), we use the IMDb movie review dataset",
"(Maas et al., 2011)",
"in our experiments.",
"It consists of positive and negative movie reviews that are easy for human participants to understand, re-annotate, and provide feedback upon",
"(Zaidan et al., 2007).",
"We use a crowdsourcing company to recruit editors and annotators for marking rationales that support classification decisions.",
"At the outset, annotators were given instructions and examples that gently guided them to annotate rationales.",
"Only adjectives, adverbs, nouns, and verbs were considered as rationales.",
"Besides, rationales were required to carry complete semantic information.",
"For example, for a phrase starting with a negation word such as not great , annotators are instructed to mark the whole phrase not great as a rationale instead of just marking not .",
"We also limited rationales to at most three consecutive words",
"(i.e. unigrams, bigrams and trigrams).",
"Phrases consisting of numerical scores are not counted as rationales",
"(e.g. 5 or 10 stars)",
"since different datasets may use different rating scales, and annotating digits may hurt OOD performance.",
"Overall, we encouraged annotators to try their best to mark as many rationales as possible to explain classification labels.",
"However, to guarantee the quality of rationale marking and prevent annotators from over including non-rationales for more payment, we also manually inspected annotated examples and rejected examples that contained incorrect rationales.",
"After inspection, we rejected 10.6% of negative reviews and 7.6% of positive reviews.",
"Editors and annotators re-annotated the rejected examples, which were then presented to us for another inspection.",
"All re-annotated examples were approved only if all authors were happy with the quality of the annotations.",
"Otherwise, the examples were re-annotated again.",
"rationales in 855 movie reviews involved in Section 3.1 and 3.3",
"(note that we did not annotate all 1,707 examples in the training set because only 855 examples were necessarily involved in our experiments).",
"Human annotators spent on average 183.68 seconds to identify rationales in a review and our method generated semi-factual examples automatically.",
"On the contrary, workers spent on average 300 seconds to revise a review to generate a counterfactual manually as reported by Kaushik et al.",
"(2020).",
"Note that our approach using 100 labelled examples can outperform manual CAD",
"(Kaushik et al., 2020)",
"using the entire training set of 1,707 examples",
"(see Section 5.3), making our approach 300 1707 183 .",
"68 100 27 .",
"88 times more efficient than manually generated CAD.",
"We take a simple replacement strategy, which has been taken by Yang et al.",
"(2021), to generate semifactual examples.",
"Given a human-identified rationale, our method constructs augmented examples by automatically replacing non-rationale words, thus leading to examples with the same labels.",
"This augmentation is consistent with semi-factual thinking: even if those non-rationales were changed, the label would not change.",
"Formally, given a training example x i = [ t i 1 , t i 2 , ..., t ij ]",
"(where t ij is the j th token of the i th document)",
"and its ground truth label y i , we create a rationale vector r i = [ a i 1 , a i 2 , ..., a ij ] where a ij is the value that indicates whether t ij is a rationale or not",
"(we set a ij = 1 to indicate that t ij is a rationale and 0 otherwise).",
"To generate a semi-factual example, x i , we randomly replace a certain number of non-rationales",
"(where a ij = 0 ), except for punctuation, with synonymous terms.",
"The synonyms can be provided by a human, retrieved automatically from a lexicon such as WordNet",
"(Miller, 1995), or generated using the mask-filling function of a pretrained context-aware language model",
"(Liu et al., 2019).",
"In our experiments, we randomly replace 5% of non-rationales using mask-filling and generate a set of augmented examples, x i , with some replaced non-rationales and all the other tokens identical to x i .",
"The label, y i , of a newly generated example is the same as the label of the original example, x i .",
"Examples of generated data are shown in Table",
"1. Afterwards, the augmented examples are added into the training set used to train the model.",
"Dynamic human-intervened correction further improves the robustness of the model by allowing human annotators to correct the model rationales online.",
"Firstly, SCD is applied to detect unigrams, bigrams or trigrams that are salient to the model.",
"SCD is a technique to assess the importance of terms by continuously removing terms and measuring changes in prediction",
"(Jin et al., 2019).",
"Human annotators examine all rationales given by the model from all documents to discover two types of incorrect rationale: false rationales and missing rationales.",
"The next phase allows human feedback to influence the learning process.",
"To this end, for each type of incorrect rationale, we propose a corresponding strategy to correct them.",
"For false rationales",
"(i.e. phrases that actually do not support classifications but are incorrectly identified by the model), we use synonym replacement again to generate semi-factual examples.",
"Unlike the static semi-factual generation",
"(Section 3.2), in this component we replace all false rationales with their synonyms instead of randomly replacing 5% of non-rationales in a document.",
"Examples of generated data are shown in Table",
"2. For missing rationales",
"(i.e. phrases that actually support classifications but are not identified by the model), we take another simple semi-factual generation strategy, that is, extracting sentences that contain missing rationales to form semi-factual data.",
"Specifically, given a sentence containing missing rationales, we use this sentence as a new example, and the label of this newly generated example is identical to that of the document where the sentence is extracted.",
"For example, there is a positive movie review",
"(bold font for rationales)",
"Robert Urich was a fine actor, and he makes this TV movie believable . I remember watching this film when I was 15 .... .",
"The model fails to identify fine and believable as rationales.",
"Thus we extract the text Robert Urich was a fine actor, and he makes this TV movie believable . as a new example, and the class of this example is still positive.",
"We extract the whole sentence rather than just the missing rationales to reserve more semantic information.",
"Note that the two correction methods in dynamic human-intervened correction can operate in parallel and the generated examples are added to the small training set to re-train the model.",
"Broadly speaking, our RDL framework takes advantage of invariance that makes a model less sensitive to non-rationale words or spurious patterns (Tu et al., 2020; Wang et al., 2021) in favour of focusing on useful mappings of rationales to labels.",
"More specifically, by using static semi-factual generation (Section 3.2) and false rationale correction (Section 3.3), we expect to break spurious associations.",
"For example, if a model incorrectly determines that Soylent Green is associated with positive sentiment (Table 2), the augmented examples that replace Soylent Green with other phrases such as Gang Orange break the spurious association.",
"Besides, using synonym replacement can generate examples that are similar to the original one, which is equivalent to adding noisy data to prevent models from overfitting (Wei and Zou, 2019).",
"Missing rationale correction (Section 3.3) emphasizes the ground truth associations between rationales and labels, enabling the model to better estimate the generally useful underlying distributions for OOD datasets, even in few-shot learning scenarios.",
"In the next section, we present experiments and empirical evidence to demonstrate the utility of the proposed RDL framework in improving model robustness.",
"Our intention is to improve the generalisability of models, and we use both in-distribution and OOD",
"performance for evaluation.",
"Our experiments are designed to address the following research questions: RQ1 Can we use static semi-factual generation to achieve better in-distribution and OOD performance?",
"RQ2 Does dynamic human-intervened correction improve generalisability of models?",
"For fair comparison with previous work (Kaushik et al., 2020; Yang et al., 2021), we use the IMDb sentiment classification dataset (Maas et al., 2011) as the in-distribution dataset.",
"Following Kaushik et al. (2020), all models were trained with the IMDb dataset predefined training, validation and test partitions containing 1 , 707 , 245 , and 488 reviews respectively and an enforced 50:50 class ratio.",
"To measure the generalisation ability of different models, we focus on OOD performance.",
"To this end, we test models on another four binary sentiment classification datasets: the sampled Amazon reviews dataset (Ni et al., 2019) (100,000 positives and 100,000 negatives) from six genres: beauty, fashion, appliances, gift cards, magazines, and software; the Yelp review dataset (Zhang et al., 2015) (19,000 positives and 19,000 negatives); the SST-2 dataset (Socher et al., 2013) (1,067 positives and 1,143 negatives), and the SemEval-2017 Twitter dataset (Rosenthal et al., 2017) (2,339 positives 6990 Training Data In-domain SemEval-2017 SST-2 Yelp Amazon Static (50 gold) 88.60 1.11 77.28 9.11 79.29 5.14 91.53 2.06 89.63 1.65 Full (1,707 gold) 93.23 0.46 71.17 2.54 80.23 2.09 93.66 0.84 90.29 0.57 DP (Static + 350 auto) (400) 86.70 2.92 74.36 2.92 77.33 6.01 89.60 2.51 89.15 1.89 RR (Static + 350 auto) (400) 89.65 1.27 79.20 1.27 78.89 5.95 91.93 2.10 89.73 1.26 Our Methods Static + 150 auto (200) 90.08 1.25 78.88 6.67 79.40 3.28 92.19 1.51 89.81 1.73 Static + 350 auto (400) 90.16 0.85 80.54 2.81 81.26 1.97 93.03 1.08 90.09 1.79 Static + 550 auto (600) 90.04 1.50 80.69 3.42 81.23 1.83 92.10 3.07 89.67 1.27 Static + 750 auto (800) 90.08 1.01 80.55 3.96 80.75 2.30 92.36 1.87 90.18 1.44 Static + 950 auto (1000) 89.83 1.28 80.90 3.29 80.58 2.57 92.30 2.19 90.62 1.29 Static + 1150 auto (1200) 90.12 1.82 79.31 1.82 79.52 3.15 91.47 3.61 90.16 1.46 Table 3: Results on in-distribution and OOD data.",
"To address RQ1 , we compare the performance of models trained by the static semi-factual generation strategy with models trained with the original 50 examples, referred to as Static .",
"We also compare to a model trained with the full training set (1,707 labelled examples), referred to as Full .",
"To simulate the few-shot training scenario, we randomly sample 50 examples (we also forced a 50:50 class balance) from the IMDb dataset as training data.",
"For each experiment, the training is repeated 10 times with training datasets sampled by 10 different random seeds.",
"We report the average result of these 10 repetitions and use accuracy to measure the classification performance.",
"Our experiments rely on an off-the-shelf cased RoBERTa-base model implemented by Hugging Face * to either perform mask-filling to provide synonyms or as a predictive model.",
"Following Kaushik et al. (2020), we fine-tune RoBERTa for up to 20 epochs and apply early stopping with patience of 5 (i.e. stop fine-tuning when validation loss does not decrease for 5 epochs).",
"We also explore the impact of the number of semi-factual examples on model performance.",
"To this end, we conduct static semi-factual generation with a different number of augmented examples for each instance: {3, 7, 11, 15, 19, 23}.",
"Considering we have 50 original examples, this would result in {150, 350, 550, 750, 950, 1,150} additional examples in the training set, respectively (we call * https://huggingface.co/transformers/model_doc/roberta.html this Static+ n , where n is the number of generated semi-factuals).",
"We use the Adam optimizer (Kingma and Ba, 2014) with a batch size of 4.",
"We found that setting the learning rate to {5e-5, 5e-6 and 5e-6} could optimise Static, Static+ n , and Full, respectively.",
"As shown in Table 3, all static semi-factual generation (Static+ n ) methods can outperform the baseline method (Static) in both in-distribution and OOD tests, demonstrating the utility of static semifactual generation.",
"Among all Static+ n methods, Static+350 seems the best-performing method and exceeds Static with a 1.56% in-distribution improvement in average accuracy.",
"Static+350 also outperforms Static with 3.26%, 1.97%, 1.5%, and 0.46% OOD improvement in the SemEval-2017 , SST-2 , Yelp and Amazon datasets respectively.",
"Although the improvement on the Amazon dataset appears modest, given that there are 200,000 examples in the Amazon test set, this actually stands for nearly 1,000 documents being correctly classified.",
"The Static+ n methods can even outperform Full (i.e. normal training with the full training set) on the SemEval , SST-2 , and Amazon datasets and are comparable on the Yelp dataset.",
"The performance of models with the full training set is best on the in-distribution dataset but the worst on the SemEval dataset, which can be caused by the big difference between underlying distributions of these two datasets.",
"In other words, a model that fits well with one dataset can cause performance decay on others.",
"In this case, training with a smaller training set is more likely to reduce overfitting with the in-distribution dataset and fit well with the SemEval dataset, which explains the big improvement.",
"It is interesting to note that models trained with the en-6991 tire training set perform slightly better on the OOD Yelp dataset (93.66 0.84 ) than on the in-distribution dataset (93.23 0.46 ), which could also be explained by the high similarity between the underlying distributions of these two datasets.",
"First, we test whether the improvement in model performance is brought about by static semi-factual generation (Static+ n ) or simply by an increase in the size of the training set.",
"We compare Static+350 (due to its relatively good performance) with another baseline called Duplication ( DP heareafter).",
"We multiply the original training set (50 examples) up into 400 examples identical to the size of the training set of Static+350, and fine-tune RoBERTa on this dataset with the same hyperparameters as Static+350.",
"As shown in Table 3, in most cases, DP un-derperforms other algorithms and is even worse than Static, demonstrating that solely increasing the dataset size cannot improve the performance.",
"We believe that the duplication of original examples increases the risk of overfitting and easily magnifies artefacts or spurious patterns hidden in the small training set, which leads to worse models.",
"Second, synonym replacement has been used previously for data augmentation (Wei and Zou, 2019), and we compare static semi-factual generation with simply replacing any words (i.e. both rationales and non-rationales).",
"Following Wei and Zou (2019), we replace 5% of words at random and set the training set size to 400 to ensure fair comparison (we use RoBERTa and the same hyperparameters of Static+350).",
"We call this Random Replacement ( RR hereafter).",
"As shown in Table 3, RR is slightly better than the baseline Static approach.",
"This result is similar to that reported in Wei and Zou (2019), since the augmented data generated by random replacement is similar to the original data, introducing noise that helps prevent overfitting to some extent.",
"However, the magnitude of improvement of the Static+ n method is much larger than that of RR, demonstrating the utility of only replacing non-rationales to generate semi-factuals.",
"These observations show that the model trained with Static+ n does improve both in-distribution and OOD performance, and the improvement is actually derived from static semi-factual generation.",
"As shown in Table 3 and Figure 3, the performance gain of static semi-factual generation (Static+ n ) marginalises when augmented data is increased.",
"Using too much augmented data even hurts the Static+1150 performance.",
"This observation is consistent with existing work on data augmentation (Wei and Zou, 2019).",
"We believe one reason could be that the use of static augmented examples could also introduce new spurious patterns that degrade model performance, necessitating a method that exploits rationales without generating too many augmented examples.",
"Human-in-the-loop can address this issue by dynamically correcting the model.",
"To address RQ2 , we compare the performance of models trained by dynamic human-intervened correction with a popular few-shot human-in-the-loop learning framework, Active Learning, as well as two other state-of-the-art CAD-based methods (Kaushik et al., 2020; Yang et al., 2021).",
"Lastly, we provide an ablation study to examine the influence of different correction methods, as well as an analysis regarding model sensitivity to spurious patterns.",
"We build up an active learning procedure as a baseline based on the model trained with Static.",
"In particular, we select another 50 examples by Uncertainty Sampling (i.e. prediction scores for two classes in these examples were close) and add them into the training set (called AL hereafter).",
"The training set size of the baseline becomes 100.",
"The best performing static semi-factual generation method Static+350 is also listed as a baseline.",
"For fair comparison, we also use Uncertainty Sampling to select another 50 examples (i.e. 100 original examples in the training set now) for the proposed dynamic human-intervened correction in-6992 Baseline Methods In-domain SemEval-2017 SST-2 Yelp Amazon Static (50 gold) 88.60 1.11 77.28 9.11 79.29 5.14 91.53 2.06 89.63 1.65 Static + 350 auto (400) 90.16 0.85 80.54 2.81 81.26 1.97 93.03 1.08 90.09 1.79 AL (100 gold) 88.64 1.75 78.61 5.90 80.50 3.37 92.47 0.68 89.80 1.91 CAD-based Methods Manual CAD (3,414 gold) 92.70 0.53 69.98 3.99 80.30 2.03 91.87 1.09 90.48 1.09 Automatics CAD (1,707 gold+1,707 auto) 91.82 0.74 79.39 5.37 80.60 3.10 91.92 0.97 90.46 1.08 Our Dynamic Methods Dynamic (100 gold + 700 auto) 90.84 0.99 80.32 4.31 82.40 2.14 93.19 1.24 90.51 2.17 Dynamic-MR (100 gold + 700 auto) 91.06 1.21 79.04 4.92 82.24 2.59 93.03 1.92 90.22 2.74 Dynamic-FR (100 gold + 700 auto) 89.85 1.38 82.39 1.88 81.59 1.82 92.98 0.91 90.12 2.42 Table 4: Results on in-distribution and OOD data.",
"cluding both False Rationale Correction and Missing Rationale Correction (called Dynamic ).",
"For Dynamic, we control the number of augmented examples for each review to 7 (4 from Missing Rationale Correction and 3 from False Rationale Correction), resulting in 800 examples in the training set.",
"For Automatic CAD (Yang et al., 2021) and Manual CAD (Kaushik et al., 2020), we use the entire training set to produce counterfactuals to build up two challenging baselines (one counterfactual for one example, which is limited by the method), resulting in 3,414 examples in the training set.",
"To investigate the influence of each correction method, we also construct another two datasets that augment the same 100 original examples to 800 exclusively by False Rationale Correction ( Dynamic-FR hereafter) and Missing Rationale Correction ( Dynamic-MR hereafter).",
"Again, experiments all rely on a RoBERTa model and all hyperparameters are identical to those described in Section 5.2.1, except for the learning rate of AL which is set to 1.25e-5 (we found this value optimised AL perfor-mance).",
"As shown in Table 4, both AL and Dynamic outperform Static in in-distribution and OOD datasets which makes sense, because we use Uncertainty Sampling to add new labelled data to minimise model uncertainty and increase model performance.",
"However, AL fails to compete with Static+350 even if more original data is added, which again demonstrates the utility of static semi-factual generation.",
"On the contrary, Dynamic does better than Static+350 with a 0.68% in-distribution improvement in average accuracy.",
"Dynamic also outperforms Static+350 with 1.14%, 0.16%, 0.42% OOD improvement in the SST-2 , Yelp and Amazon datasets, but no improvement for the SemEval Non-rationales Rationales Static 0.572 0.428 Dynamic 0.433 0.567 Table 5: Static versus Dynamic models on average sensitivity (normalised) to rationales and non-rationales for IMDb test samples.",
"dataset.",
"Finally, the performance of our methods is better that the state-of-the-art manual CAD method in few-shot learning scenarios on all OOD datasets.",
"Overall, these observations demonstrate that applying dynamic human-intervened correction (i.e. Missing Rationale Correction and False Rationale Correction) can further increase the robustness of a model on generalisation ability, effectively avoiding the improvement marginalisation caused by the increased volume of augmented data.",
"Missing Rationales vs. False Rationales We conduct an ablation study by examining the performance of Dynamic-MR and Dynamic-FR in Table 4.",
"Interestingly, Dynamic-FR is specifically good at improving model performance on the in-distribution and SemEval datasets while Dynamic-MR does a good job on the SST-2 dataset.",
"We believe that it is because Dynamic-MR biases the model to estimate an underlying distribution that is useful for SST-2 and in-distribution datasets, while Dynamic-FR biases the model to estimate a distribution similar to SemEval dataset.",
"The performance of Dynamic can be explained as a compromise of two correction methods.",
"Sensitivity to Spurious Patterns We conduct an analysis to explore whether the double-robust models are less sensitive to spurious patterns.",
"We compute models mean sensitivity to all rationales and non-rationales through SCD in the IMDb test set.",
"As shown in Table 5, the corrected model is much more sensitive to rationales with 13.9% average increase in the 6993 sensitivity to rationales, which demonstrates that our double-robust method can decouple models from spurious patterns.",
"We proposed a rationale-centric human-in-the-loop framework, RDL, for better model generalisability in few-shot learning scenarios.",
"Experimental results show that our method can boost performance of deep neural networks in both in-distribution and OOD datasets and make models less sensitive to spurious patterns, enabling fast generalisation.",
"In the future, we expect to see rationale-centric frameworks defined for different tasks, including NER, question answering, and relation extraction.",
"We honor the ACL Code of Ethics.",
"No private data or non-public information was used in this work.",
"All annotators have received labor fees corresponding to the amount of their annotated instances.",
"We acknowledge with thanks the discussion with Chenyang Lyu from Dublin City University, as well as the many others who have helped.",
"We would also like to thank anonymous reviewers for their insightful comments and suggestions to help improve the paper.",
"This publication has emanated from research conducted with the financial support of the Pioneer and \"Leading Goose\" R&D Program of Zhejiang under Grant Number 2022SDXHDX0003 and Science Foundation Ireland (SFI) under Grant Number [12/RC/2289_P2].",
"Yue Zhang is the corresponding author."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"result",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"Simile interpretation is a crucial task in natural language processing.",
"Nowadays, pre-trained language models (PLMs) have achieved state-of-the-art performance on many tasks.",
"However, it remains under-explored whether PLMs can interpret similes or not.",
"In this paper, we investigate the ability of PLMs in simile interpretation by designing a novel task named Simile Property Probing, i.e., to let the PLMs infer the shared properties of similes.",
"We construct our simile property probing datasets from both general textual corpora and human-designed questions, containing 1,633 examples covering seven main categories.",
"Our empirical study based on the constructed datasets shows that PLMs can infer similes' shared properties while still underperforming humans.",
"To bridge the gap with human performance, we additionally design a knowledge-enhanced training objective by incorporating the simile knowledge into PLMs via knowledge embedding methods.",
"Our method results in a gain of 8.58% in the probing task and 1.37% in the downstream task of sentiment classification.",
"The datasets and code are publicly available at https://github.com/Abbey4799/PLMs-Interpret-Simile.",
"A simile is a figure of speech comparing two fundamentally different entities via shared properties (Paul, 1970).",
"There are two types of similes as illustrated in Figure 1, closed similes explicitly reveal the shared properties between the topic entity and the vehicle entity, such as the property slow shared by lady and snail in the sentence The old lady walks as slow as a snail ; while open similes do not state the shared property such as the sentence The old lady walks like a snail .",
"Similes play a vital role in human expression to make literal Equal contribution Corresponding author Figure 1: Examples of two types of similes.",
"utterances more vivid and graspable and are widely used in the corpus of various domains (Liu et al., 2018; Chakrabarty et al., 2020a; Zhang et al., 2020).",
"It is estimated that over 30% of the comparisons can be regarded as similes in product reviews (Nic-ulae and Danescu-Niculescu-Mizil, 2014).",
"Simile interpretation is a crucial task in natural language processing (Veale and Hao, 2007; Qadir et al., 2016; Chakrabarty et al., 2021a), which can assist several downstream tasks such as understanding more sophisticated figurative language (Veale and Hao, 2007) and sentiment analysis (Niculae and Danescu-Niculescu-Mizil, 2014; Qadir et al., 2015).",
"Take the simile the lawyer is like a shark for an example.",
"Although all words in this simile are neutral, this simile expresses a negative affect since lawyer and shark share the negative property aggressive .",
"In the past few years, large pre-trained language models (PLMs) have achieved state-of-the-art performance on many natural language processing tasks (Devlin et al., 2018; Liu et al., 2019b).",
"Recent studies suggest that PLMs have possessed various kinds of knowledge into contextual representations (Goldberg, 2019; Petroni et al., 2019; Lin et al., 2019; Cui et al., 2021).",
"However, the ability of PLMs to interpret similes remains under-explored.",
"Although some recent work (Chakrabarty et al., 2021a) studies the ability of PLMs in choosing or generating the plausible continuations in narratives, this way cannot fully reveal the ability of PLMs to interpret similes.",
"ity of PLMs in simile interpretation by designing a novel task named as Simile Property Probing , i.e., to let the PLMs infer the shared properties of similes.",
"Specifically, we design a particular masked-word-prediction probing task in the form of multiple-choice questions.",
"This probe masks the explicit property of a closed simile and then lets the PLMs discriminate it from three distractors.",
"To make the questions convincing and challenging, the distractors should be not only true-negative as they would introduce logical errors once they are filled in the sentence, but also challenging as they are semantically close to the correct answer.",
"To achieve this, we propose to obtain some similar properties of the golden one from ConceptNet (Liu and Singh, 2004) and COMET (Bosselut et al., 2019), from which we select the three best distractors according to their proximity to the golden property in the feature space.",
"From two different types of data sources: textual corpus collection and human-designed questions, we collect a total of 1,633 probes with various usage frequencies and context diversities, covering seven categories as listed in Table 1. Based on our designed task, we evaluate the ability of BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019b) to infer the shared properties of similes.",
"We perform an empirical evaluation in two settings: (1) zero-shot, where the models are off-the-shelf; (2) fine-tuned, where the models are fine-tuned with MLM objective via masking properties.",
"We observe that PLMs have been able to infer properties of similes in the pre-training stage and the ability can be further enhanced by fine-tuning.",
"However, fine-tuned PLMs still perform worse than humans.",
"Moreover, we find that the simile components vehicle and topic contribute the most when inferring the properties.",
"Inspired by the sufficient hints offered by the components vehicle and topic in our empirical study, we propose a knowledge-enhanced training objective to further bridge the gap with human performance.",
"Considering property (p) as the relation between topic (t) and vehicle (v), we design a simile knowledge embedding objective function following conventional knowledge embedding methods (Bor-des et al., 2013) to incorporate the simile knowledge (t,p,v) into PLMs.",
"To integrate simile knowledge and language understanding into PLMs, we jointly optimize the knowledge embedding objective and the MLM objective in our design.",
"Overall, the knowledge-enhanced objective shows effectiveness in our probing task and the downstream task of sentiment classification.",
"To summarize, our contributions are three-fold: (1) To our best knowledge, we are the first to systematically evaluate the ability of PLMs in interpreting similes via a proposed novel simile property probing task.",
"(2) We construct simile property probing datasets from both general textual corpora and human-designed questions, and the probing datasets contain 1,633 examples covering seven main categories of similes.",
"(3) We also propose a novel knowledge-enhanced training objective by complementing the MLM objective with the knowledge embedding objective.",
"This method gains 8.58% in the probing task and 1.37% in the downstream task of sentiment classification.",
"A sentence of simile generally consists of five major components (Hanks, 2013; Niculae and Danescu-Niculescu-Mizil, 2014), where four are necessary and the remaining one is optional.",
"The four explicit components are as follows: (1) topic (or tenor) : the subject of the comparison acting as 7876 Figure 2: A process for designing our simile property probing task.",
"source domain; (2) vehicle : the object of the comparison acting as target domain; (3) event : the predicate indicating act or state; (4) comparator : the trigger word of a simile such as as or like .",
"The optional component property reveals the shared characteristics between the topic and the vehicle.",
"There are two types of similes depending on whether the property is explicit or implicit (Beardsley, 1981).",
"The similes which mention the property directly are named as the closed similes, while the others are open similes, as shown in Figure 1. 3 The Simile Property Probing Task 3.1 Task Formulation To estimate the ability of PLMs in simile interpretation, we design a particular Simile Property Probing task, which masks the explicit property of a closed simile, and then lets the PLMs discriminate it among four candidates.",
"Considering that the shared properties between topic and vehicle may not be unique (Lacroix et al., 2005), we specifically design a multiple-choice question answering task (with only one correct answer) rather than a cloze task to probe the ability of PLMs to infer properties of similes, since the latter one may result in multiple correct answers.",
"vehicle is masked, the probing task requires the PLMs to find the correct property from four options, where the other three options are hard distractors.",
"We construct datasets for the proposed probing task in four steps.",
"The overview of our probing data collection process is described in Figure 2. 3.2.1 Data Sources We construct our datasets from two different sources to detect the capability of PLMs from two perspectives: textual corpus collection and human-designed questions.",
"To avoid laborious human labeling on the implicit properties of open similes, we collect closed similes with explicit properties.",
"General Corpus.",
"Following (Hanks, 2005; Niculae and Yaneva, 2013), we adopt two general corpora, British National Corpus (BNC) 1 and iWeb 2 .",
"To identify closed similes, we extract the sentences matching the syntax as ADJ as (a, an, the) NOUN .",
"Through syntactic pattern matching, we finally collect 1,917 sentences.",
"Teacher-Designed Quizzes.",
"Questions about similes designed by teachers from educational resources are ideal sources for assessing the ability to understand similes.",
"Hence, we choose Quizizz 3 , an 1 https://www.english-corpora.org/bnc/ 2 https://www.english-corpora.org/iweb/ 3 https://quizizz.com/ 7877 Figure 3: Illustration of the distractor selection method.",
"emerging learning platform founded in 2015.",
"On this platform, users can create quizzes on a specific topic as teachers to assess students' understanding of related knowledge.",
"We collect a set of quizzes with titles concerning similes and extract the complete closed simile sentences from the questions and answers in these quizzes.",
"Finally, we retrieve 875 complete closed similes from 1,235 quizzes.",
"To assure the quality of our constructed datasets and prepare for further analysis, three annotators are required to decide whether the extracted sentences are similes or not, and annotate their corresponding simile components.",
"The inter-annotator agreement on identifying similes is 0.77 using Fleiss' Kappa score (Fleiss, 1971).",
"All the properties in our datasets are single-token by replacing multi-token properties with their single-token synonyms in the knowledge base WordNet (Miller, 1995) and ConceptNet (Liu and Singh, 2004).",
"To make our probes convincing, three distractors are designed against the original property in each simile with two criteria (Haladyna et al., 2002; Ren and Zhu, 2020): true-negative and challenging .",
"We argue that well-designed distractors should be illogical when filled into the questions (true-negative) while being semantically related to the correct answer (challenging) .",
"Our distractor design mainly involves three phases: 1) distractor generation; 2) distractor selection; 3) Human Confirmation.",
"Distractor Generation.",
"To meet the requirement of challenging , we generate distractor candidates from the four semantic-related components of a simile, i.e., topic, vehicle, event, and property.",
"Given the original property, we harvest its antonyms from the knowledge base WordNet and ConceptNet.",
"With regard to three other components, we extract their properties from two sources Dataset GeneralCorpus Quizzes #Sentence 775 858 #Unique topic concept 415 366 #Unique property concept 280 160 #Unique vehicle concept 522 250 #Unique event concept 147 66 #Unique topic-vehicle pair 743 684 #Unique topic-property-vehicle pair 751 701 Maximum sentence length 98 44 Average sentence length 25.80 12.69 Minimum sentence length 7 7 @Start 34.32% 20.40% @Middle 43.23% 63.29% @End 22.45% 16.32% Table 2: Statistics of our simile property probing datasets.",
"as follows.",
"Given a component, we utilize the HasProperty relation from ConceptNet (Liu and Singh, 2004) and COMET (Bosselut et al., 2019) to retrieve the property.",
"Moreover, we rank the adjectives or adverbs concerning 4 each component in Wikipedia and BookCorpus corpus 5 by frequency and select the top ten candidates with a frequency of more than one.",
"Distractor Selection.",
"To select the most challenging distractors from the generated distractor candidates, we propose to measure the similarity between the original sentence with the correct property and the sentence with a distractor.",
"Intuitively, the more similar the two sentences, the more challenging the distractor.",
"An example of the distractor selection process is depicted in Figure 3. Given the original sentence or the new sentence replacing the correct property with a distractor, we first utilize RoBERTa LARGE to extract two types of features.",
"One feature is context embedding, which is the sentence embedding of [CLS] , while the other feature is word embedding, which is the token embedding of the answer or distractors.",
"We then concatenate the embeddings of the two features to compute the cosine similarity between sentences with the answer and a distractor.",
"Finally, we select the top 3 distractors with the highest similarities.",
"Human Confirmation.",
"To ensure the distractors are true-negative , three human annotators are asked to label each selected distractor.",
"If more than two annotators are uncertain about its correctness, we replace it with another suitable candidate.",
"Table 2 presents the statistics of our constructed datasets.",
"We count unique components and component pairs to present the usage frequencies of similes.",
"The length of the sentences in each dataset indicates the diversities of context.",
"Additionally, we analyze the distribution of the position of simile in the sentences in each dataset, where start , middle and end correspond to the positions of the three equally divided parts of each sentence.",
"We also investigate the categories covered by our datasets.",
"The results and details about the category classification are provided in Appendix C. Overall, the Quizzes dataset provides similes commonly expressed by people, while the General Corpus dataset presents similes with more diverse contexts.",
"Besides evaluating the ability of PLMs in the zero-shot setting where the models are off-the-shelf, we also study whether the performance could be improved through fine-tuning with the MLM objective via masking properties.",
"To achieve this, we collect training data from Standardized Project Gutenberg Corpus 6 (SPGC) (Gerlach and Font-Clos, 2020).",
"SPGC is a 3 billion words corpus collected from about 60 thousand eBooks.",
"We extract similes via matching the syntactic pattern (Noun ... as ADJ as ... NOUN) and end up with 4,510 sentences.",
"Additionally, we adopt dependency parsing 7 to automatically annotate the simile components of each sentence without human labor.",
"In this section, we first conduct a set of experiments to probe the ability of PLMs to infer properties in similes and then evaluate the influence of each component on the model performance.",
"To disentangle what is captured by the original representations and what is introduced from fine-tuning stage, we apply two different types of settings: (1) zero-shot; (2) fine-tuning.",
"In our first setting, we use BERT and RoBERTa with pre-trained masked-word-prediction heads to perform our probing task.",
"In the second setting, we utilize the MLM training objective inherited from PLMs to fine-tune 6 https://github.com/pgcorpus/gutenberg/ 7 https://stanfordnlp.github.io/CoreNLP/ Setting Models GeneralCorpus Quizzes Gain ConScore (Zheng et al., 2019) 27.48 34.85 Meta4meaning (Xiao et al., 2016) 27.74 47.44 -EMB (Qadir et al., 2016) 28.27 47.90 -MIUWE (Bar et al., 2020) 30.97 53.85 -Zero-Shot BERTBASE 64.13 74.36 -BERTLARGE 72.39 83.22 RoBERTa BASE 69.55 82.87 RoBERTa LARGE 78.97 87.41 -Fine-tuned MLM-BERTBASE 67.74 82.05 +5.65 MLM-BERTLARGE 73.85 84.58 +1.40 MLM-RoBERTa BASE 70.58 84.69 +1.43 MLM-RoBERTa LARGE 78.97 88.97 +0.78 Human Performance 87.60 93.60 Table 3: Accuracy of different models in our simile property probing task.",
"the models.",
"We replace the property of each simile with the special token [MASK] in our constructed supervised datasets (Section 3.3) and ask models to recover the original property.",
"The experimental details are provided in the Appendix B. We mainly compare the model accuracy of PLMs with the following baselines: (1) EMB (Qadir et al., 2016): It obtains the composite simile vector by performing an element-wise sum of the word embedding for the vehicle and event, then selects the answer with the shortest cosine distance from the composite vector.",
"(2) Meta4meaning (Xiao et al., 2016) : This method prefers the properties which are strongly associated with both topic and vehicle.",
"It also prefers the properties that are more relevant to the vehicle than to the topic.",
"The association is measured by statistical significance.",
"(3) ConScore (Zheng et al., 2019) : It suggests that better properties would have a smaller and balanced distance to the topic and vehicle in the word embedding space.",
"(4) MIUWE (Bar et al., 2020) : The ranking method assigns each property a list of scores, including the statistical co-occurrences and similarity to the collocations of the topic and vehicle.",
"The baselines above mainly consider the statistical information and embedding similarities between the properties and the simile components.",
"The other baseline is human performance.",
"We sample 250 random questions from both datasets, and for each question, we gather answers from three people.",
"We take the majority vote as the human performance of our probing task and ensure that three annotators agree on the questions that they gave completely different annotation results.",
"The accuracies of different methods under two different settings on our datasets are listed in Table 3,",
"where the last column represents the average absolute gains of each PLM after fine-tuning with the MLM objective.",
"All the results of our experiments are averaged over three random seeds.",
"First of all, the prediction accuracies of both BERT and RoBERTa in the zero-shot setting are much higher than the baselines only considering the statistical information and embedding similarities between simile components.",
"This phenomenon indicates that the knowledge learning from the pre-train stage can help infer the simile properties.",
"Moreover, the performance can be further improved by training with the MLM objective, demonstrating that the fine-tuning phase with the supervised dataset can introduce related knowledge about similes.",
"However, models still underperform humans by several accuracy points, leaving room for improvement in our probing task.",
"Overall, all the models perform better on Quizzes Dataset than on General Corpus Dataset, indicating that more diverse contexts increase the difficulty of inferring the shared properties.",
"Also, RoBERTa consistently outperforms BERT, likely due to a larger pre-training corpus containing more similes.",
"More complementary results are provided in the Appendix A.1.",
"Due to the high performance of off-the-shelf PLMs, we are interested in the contributions of each component to infer shared properties in the zero-shot setting.",
"First, the information of each component is hidden through a certain strategy.",
"Specifically, for topic , vehicle and comparator , we replace their tokens with a special token [UNK] which means unknown.",
"With regard to event , we convert it into a suitable copula, such as am and is, to ensure the integrity of syntax.",
"Furthermore, we also set up a baseline by randomly replacing a token with [UNK] in the context.",
"Examples corresponding to all settings are shown in Table 8 in the Appendix B. We finally report the model accuracy and declined absolute accuracy after hiding the information of each component.",
"The results in Table 4 show varying degrees of the decline of all settings.",
"If the model's performance decreases more, it means that the influence of the component is more significant than others.",
"Three Figure 4: An overview of our objective function design major components (i.e., vehicle, topic and comparator) obtain higher declined absolute accuracy than random token, which demonstrates that the information of these simile components is more valuable than other words to infer the shared properties.",
"Among all the components, removing the comparator may cause the most significant performance drop.",
"This result is mostly because PLMs cannot identify the sentence as a simile without an obvious indicator.",
"When it comes to the remaining 3 components, vehicle contributes the most, followed by topic .",
"Hence, we argue that it may be beneficial to explicitly leverage both the information of vehicle and topic to infer the properties.",
"Benefiting from the result that topic and vehicle are the two most essential components for predicting the shared properties of similes, we catch an insight that property can be seen as the relation between topic and vehicle following a set of knowledge embedding (KE) methods (Bordes et al., 2013; Wang et al., 2014; Ji et al., 2015).",
"To integrate the insight mentioned above into our training procedure, we design an objective function as shown in Figure 4. Inspired by triplets representing the relational facts, we can also extract the topic, property, and vehicle from a simile as a triplet ( t, p, v ) .",
"The distance between topic and vehicle in the embedding space represents the plausibility of property.",
"The plausibility can be measured by scoring functions (Bordes et al., 2013; Wang et al., 2014; Ji et al., 2015).",
"To this end, we follow the scoring function from TransE (Bordes et al., 2013) and define the following Mean Square Error (MSE) loss as our KE loss: LKE = MSE ( E t + E p , E v ) (1) 7880 Datasets Models Topic Vehicle Event Comparator Random GeneralCorpus BERTBASE 59.87 (-04.26) 54.58 (-09.55) 62.84 (-01.29) 46.32 (-17.81) 63.05 (-01.08) BERTLARGE 67.74 (-04.65) 61.16 (-11.23) 70.19 (-02.20) 46.06 (-26.33) 69.07 (-03.32) RoBERTa BASE 65.29 (-04.26) 61.03 (-08.52) 68.52 (-01.03) 50.32 (-19.23) 67.31 (-02.24) RoBERTa LARGE 76.90 (-02.07) 69.68 (-09.29) 77.55 (-01.42) 54.97(-24.00) 77.72 (-01.25) Quizzes BERTBASE 67.02 (-07.34) 62.35 (-12.01) 73.43 (-00.93) 52.80 (-21.56) 71.91 (-02.45) BERTLARGE 77.86 (-05.36) 64.57 (-18.65) 82.63 (-00.59) 55.24 (-27.98) 79.91 (-03.31) RoBERTa BASE 76.11 (-06.76) 69.00 (-13.87) 81.47 (-01.40) 55.24 (-27.63) 77.58 (-05.29) RoBERTa LARGE 83.80 (-03.61) 74.24 (-13.17) 86.60 (-00.81) 60.84 (-26.57) 85.12 (-02.29) Table 4: Accuracy of PLMs in the zero-shot setting before and after hiding the information of each component on two datasets.",
"where E t , E p , E v are the representations of topic, property and vehicle encoded by PLMs.",
"We also try more advanced methods such as TransH (Wang et al., 2014) and TransD (Ji et al., 2015) for the knowledge embedding objective, and their results are presented in Table 7 in the Appendix A.2.",
"Finally, our training procedure is to optimize MLM loss and KE loss jointly: L Ours = LKE + LMLM (2) where is a hyperparameter used to balance two objective functions.",
"Table 5 presents the performance of the models fine-tuned with the MLM objective and our knowledge-enhanced objective on the two datasets, where the last column shows the performance gains brought by our improvement to the training objective.",
"Overall, each model trained with our knowledge-enhanced objective outperforms the one trained with the MLM objective on both datasets, demonstrating the effectiveness of our objective in the probing task.",
"For the Quizzes dataset, BERT achieves more performance gains than RoBERTa does, which is probably because RoBERTa has better modeled the relationship among topic , property and vehicle in the similes with simple syntactic structure during Models Original LMLML Ours BERTBASE 84.96 85.45 85.63 BERTLARGE 86.02 86.65 86.95 RoBERTa BASE 88.51 88.61 89.51 RoBERTa LARGE 88.84 89.08 90.21 Table 6: Accuracy of PLMs with three settings in the downstream task of sentiment classification.",
"fine-tuning with the MLM objective.",
"For the General Corpus dataset, the BASE version of models tends to yield higher performance improvements, probably because the models with larger parameter sizes can better capture the relationship among simile components in the similes with more diverse contexts when fine-tuning with the MLM objective.",
"Similes generally transmit a positive or negative view due to the shared properties (Fishelov, 2007; Li et al., 2012; Qadir et al., 2015).",
"Taking the simile the lawyer is like a shark as an example, the implicit shared property aggressive between lawyer and shark indicates the negative polarity.",
"Therefore, we design a sentiment polarity downstream task to validate the improvement of our method to infer shared properties.",
"Our experiments are based on the Amazon reviews dataset 8 which provides reviews and their corresponding sentiment ratings.",
"Following (Mu-dinas et al., 2012; Haque et al., 2018), we first process the dataset into a binary sentiment classification task by defining the 1-star and 2-star ratings as negative, the 4-star, and 5-star ratings as positive, while excluding the 3-star neutral ratings.",
"To further address the label imbalance problem, we then sample the positive and negative reviews at 1:1.",
"The final dataset consists of 5,023 reviews and is split into a ratio of 6:2:2 for the train/dev/test set.",
"multi-8 https://www.kaggle.com/bittlingmayer/amazonreviews",
"layer perceptron (MLP) classifiers on top of PLM's contextualized representation.",
"The parameters of PLM are fixed and from three settings: (1) zero-shot; (2) fine-tuned with the MLM objective in the probing task; (3) fine-tuned with the knowledge-enhanced objective in the probing task.",
"The results are shown in the Table 6. First of all, fine-tuning with the MLM objective improves the performance of all models in the sentiment classification task, demonstrating that improving models' ability to infer the properties of similes can enhance models' understanding of the sentiment polarity.",
"Moreover, the performance is further improved by our knowledge-enhanced objective, especially for RoBERTa whose main gains are mostly contributed by our additional knowledge embedding objective.",
"This indicates the effectiveness of our knowledge-enhanced objective in the downstream task of sentiment analysis.",
"Furthermore, we investigate the mechanism of how knowledge-enhanced objective brings improvement.",
"We first calculate the L2 distance between the representations in the last hidden states of each pair of components.",
"The results are shown in Figure 5. In all pairs, the distance given by our objective is generally shorter than MLM-BERT, which indicates that modeling the relationships among the three important components is efficient to enhance the model performance.",
"Specifically, we visualize the final layer representation of a simile into two-dimensional spaces via Principal Component Analysis (PCA) (Pearson, 1901) in Figure 6. In both MLM and our objective, the models are required to fill in the masked token in the same simile sentence.",
"The model fine-tuned with the MLM objective predicts wrongly, while our fine-tuned model predicts correctly.",
"We find that our representations of the three components 6 4 2 0 2 4 6 8 PC1 4 2 0 2 4 6 PC 2 myfriendsandi sold our old toys and donatedthemoney to an old man whowasas [MASK] asa church mouse",
"Simile Processing.",
"Simile processing mainly involves 3 fields: simile detection, simile generation, and simile interpretation.",
"The bulk of work in similes mainly focuses on identifying similes and their components (Niculae, 2013; Niculae and Danescu-Niculescu-Mizil, 2014; Liu et al., 2018; Zeng et al., 2020).",
"Recent years have witnessed a growth of work to transfer literal sentences to similes (Zhang et al., 2020; Chakrabarty et al., 2020b).",
"(Chakrabarty et al., 2021b) study the ability of PLMs to recognize textual entailment related to similes.",
"With regard to simile interpretation, (Qadir et al., 2016; Xiao et al., 2016; Bar et al., 2020; Zheng et al., 2019) rank the properties by the statistical co-occurrence and embedding similarities with other simile components.",
"(Chakrabarty et al., 2021a) interpret similes by choosing or generating continuation for narratives via PLMs.",
"Different from these works, we investigate the ability of PLMs to infer shared properties of similes.",
"Probing Tasks for PLMs.",
"Many studies investigate whether PLMs encode knowledge in their contextual representations by designing probing tasks.",
"Early studies mainly focus on the linguistic knowledge captured by PLMs (Liu et al., 2019a; Tenney et al., 2019).",
"(Petroni et al., 2019) first propose a word prediction task to probe factual 7882 knowledge stored in PLMs.",
"Similar methods are utilized to evaluate various commonsense knowledge, such as symbolic reasoning ability (Talmor et al., 2020; Zhou et al., 2020), numerical commonsense knowledge (Lin et al., 2020), properties associated with concepts (Weir et al., 2020).",
"To our best knowledge, we are the first work to investigate the ability of PLMs to interpret similes by proposing a simile property probing task.",
"Enhance PLMs via Knowledge Regularization.",
"Recently, many researchers integrate external knowledge with PLMs by complementing the MLM objective with an auxiliary knowledge-based objective.",
"For example, there are works that introduce span-boundary objective for span-level prediction (Joshi et al., 2020), copy-based training objective for mention reference prediction (Ye et al., 2020), knowledge embedding objective for factual knowledge (Wang et al., 2021) and arithmetic relationships of linguistic units for universal language representation (Li and Zhao, 2021).",
"Different from these works, we incorporate simile knowledge with the training objective by modeling the relationship between the salient components of similes.",
"In this work, we are the first to investigate the ability of PLMs in simile interpretation via a proposed novel simile property probing task.",
"We construct two multi-choice probing datasets covering two data sources.",
"By conducting a series of empirical experiments, we prove that PLMs exhibit the ability to infer simile properties in the pre-training stage and further induce more related knowledge during the fine-tuning stage, but there is still a gap between PLMs and humans in this task.",
"Furthermore, we propose a knowledge-enhanced training objective to bridge the gap, which shows effectiveness in the probing task and the downstream task of sentiment classification.",
"In future work, we are interested in exploring the interpretation of more sophisticated figurative language, such as metaphor or analogy.",
"We would like to thank anonymous reviews for their helpful comments and suggestions.",
"Also, thanks to Jingping Liu, Leyang Cui for their insightful feedback that helped improve the paper.",
"We also thank Botian Jiang, Shuang Li for supporting our data collection.",
"This research was supported by the National Key Research and Development Project (No. 2020AAA0109302), National Natural Science Foundation of China (No. 62072323), Shanghai Science and Technology Innovation Action Plan (No. 19511120400), Shanghai Municipal Science and Technology Major Project (No. 2021SHZDZX0103).",
"We provide details of our work to address potential ethical concerns.",
"In our work, we propose a simile property probing task and construct probing datasets from both general textual corpora and human-designed questions.",
"First of all, all the data sources used in the data collection process are publicly available.",
"Specifically, we follow the robots.txt 9 to respect the copyright when we collect similes from the learning platform Quizizz (Sec. 3.2.1).",
"Moreover, there are three steps involving human annotation to ensure the quality of the datasets: simile and simile components recognition (Sec. 3.2.1), human confirmation for distractors (Sec. 3.2.2), and human performance (Sec. 4.1).",
"To ensure the quality of annotation, all the annotators do not participate in our probing data collection, and they always label a small set of 50 examples to reach an agreement on the labeling criteria before the formal labeling.",
"We protect the privacy rights of annotators and pay them above the local minimum wage."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"objective",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"objective",
"objective",
"method",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"objective",
"objective",
"method",
"method",
"objective",
"objective",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain"
] |
[
"Memory augmented encoder-decoder framework has achieved promising progress for natural language generation tasks.",
"Such frameworks enable a decoder to retrieve from a memory during generation.",
"However, less research has been done to take care of the memory contents from different sources, which are often of heterogeneous formats.",
"In this work, we propose a novel attention mechanism to encourage the decoder to actively interact with the memory by taking its heterogeneity into account.",
"Our solution attends across the generated history and memory to explicitly avoid repetition, and introduce related knowledge to enrich our generated sentences.",
"Experiments on the answer sentence generation task show that our method can effectively explore heterogeneous memory to produce readable and meaningful answer sentences while maintaining high coverage for given answer information.",
"Most previous question answering systems focus on finding candidate words, phrases or sentence snippets from many resources, and ranking them for their users (Chu-Carroll et al., 2004; Xu et al., 2016).",
"Typically, candidate answers are collected from different resources, such as knowledge base (KB) or textual documents, which are often with heterogeneous formats, e.g., KB triples or semi-structured results from Information Extraction (IE).",
"For factoid questions, a single answer word or phrase is chosen as the response for users, as shown in Table 1 (A1).",
"However, in many real-world scenarios, users may prefer more natural responses rather than a single word.",
"For example, as A2 in Table 1, James Cameron directed the Titanic.",
"is more favorable than the single name James Cameron.",
"A straightforward solution to compose an answer sentence is to build a template based model, where the answer Q Who is the director of the Titanic?",
"But such systems intrinsically lack variety, hence hard to generalize to new domains.",
"To produce more natural answer sentences, Yin et al. (2015) proposed GenQA, an encoder-decoder based model to select candidate answers from a KB styled memory during decoding to generate an answer sentence.",
"CoreQA (He et al., 2017b) further extended GenQA with a copy mechanism to learn to copy words from the question.",
"The application of attention mechanism enables those attempts to successfully learn sentence varieties from the memory and training data, such as usage of pronouns (A3 in Table 1).",
"However, since they are within the encoder-decoder framework, they also encounter the well noticed repetition issue: due to loss of temporary decoder state, an RNN based decoder may repeat what has already been said during generation (Tu et al., 2016a,b).",
"Both GenQA and CoreQA are designed to work with a structured KB as the memory, while in most real-world scenarios, we require knowledge from different resources, hence of different formats.",
"This knowledge may come from structured KBs, documents, or even tables.",
"It is admittedly challenging to leverage a heterogeneous memory in a neural generation framework, and it is not well studied in previous works (Miller et al., 2016).",
"Here in our case, the memory should contain two main formats: KB triples and semi-structured en-185 tities from IE, forming a heterogeneous memory (HM).",
"The former is usually organized in is a subject-predicate-object form, while, the latter is usually extracted from textual documents, in the form of keywords, sometimes associated with certain categories or tags oriented to specific tasks (Bordes and Weston, 2016).",
"Miller et al. (2016) discuss different knowledge representations for a simple factoid QA task and show that classic structured KBs organized in a Key-Value Memory style work the best.",
"However, dealing with heterogeneous memory is not trivial.",
"Figure 1 shows an example of generating answer sentences from HM in a Key-Value style, which is indeed more challenging than only using a classic KB memory.",
"Keys and values play different roles during decoding.",
"A director key indicates this slot contains the answer.",
"Same James Cameron values with different keys indicate duplication.",
"The decoder needs this information to proactively perform memory addressing.",
"Because keys from documents are not canonicalized, e.g., doc directed and doc director , they may lead to redundancy with the structured KB, e.g., kb directed_by and doc director .",
"A decoder could repetitively output a director twice simply because there are two different memory slots hit by the query, both indicating the same director.",
"This will make the the repetition issue even worse.",
"Although many neural generation systems can produce coherent answer sentences, they often focus on how to guarantee the chosen answer words to appear in the output, while ignoring many related or meaningful background information in the memory that can further improve user experiences.",
"In real-world applications like chatbots or personal assistants, users may want to know not only the exact answer word, but also information related to the answers or the questions.",
"This information is potentially helpful to attract users' attention, and make the output sentences more natural.",
"For example in Table 1 (A4), the extra 1999 not only enriches the answer with the movie's release year, but also can act as a clue to help distinguish ambiguous candidate answers, e.g., Titanic (1999) and Titanic (HD, 2016) .",
"In this paper, we propose a sequence to sequence model tailing for heterogeneous memory.",
"In order to bridge the gap between decoder states and memory heterogeneity, we split decoder states into separate vectors, which can be used to address Figure 1: An example qa-pair with heterogeneous memory different memory components explicitly.",
"To avoid redundancy, we propose the Cumulative Attention mechanism, which uses the context of the decoder history to address the memory, thus reduces repetition at memory addressing time.",
"We conduct experiments on two WikiMovies datasets, and experimental results show that our model is able to generate natural answer sentences composed of extra related facts about the question.",
"Natural Answer Generation with Sequence to Sequence Learning: Sequence to sequence models (with attention) have achieved successful results in many NLP tasks (Cho et al., 2014; Bah-danau et al., 2014; Vinyals et al., 2015; See et al., 2017).",
"Memory is an effective way to equip seq2seq systems with external information (We-ston et al., 2014; Sukhbaatar et al., 2015; Miller et al., 2016; Kumar et al., 2015).",
"GenQA (Yin et al., 2015) applies a seq2seq model to generate natural answer sentences from a knowledge base, and CoreQA (He et al., 2017b) extends it with copying mechanism (Gu et al., 2016).",
"But they do not consider the heterogeneity of the memory, only tackle questions with one single answer word, and do not study information enrichment.",
"Memory and Attention: There are also increasing works focusing on different memory representations and the interaction between the decoder and memory, i.e., attention.",
"Miller et al. (2016) propose the Key-Value style memory to explore textual knowledge (both structured and unstructured) from different sources, but they still utilize them separately, without a uniform addressing and attention mechanism.",
"Daniluk et al. (2017) split the decoder states into key and value representation, and increase language modeling 186 performance.",
"Multiple variants of attention mechanism have also been studied.",
"Sukhbaatar et al. (2015) introduce multi-hop attention, and extend it to convolutional sequence to sequence learning (Gehring et al., 2017).",
"Kumar et al. (2015) further extend it by using a Gated Recurrent Unit (Chung et al., 2014) between hops.",
"These models show that multiple hops may increase the model's ability to reason.",
"These multi-hop attention is performed within a single homogeneous memory.",
"Our Cumulative Attention is inspired by them, but we utilize it cross different memory, hence can explicitly reason over different memory components.",
"Conditional Sentence Generation: Controllable sentence generation with external information is wildly studied from different views.",
"From the task perspective, Fan et al. (2017) utilize label information for generation, and tackle information coverage in a summarization task.",
"He et al. (2017a) use recursive Network to represent knowledge base, and Bordes and Weston (2016) track generation states and provide information enrichment, both are in a dialog setting.",
"In terms of network architecture, Wen et al. (2015) equip LSTM with a semantic control cell to improve informativeness of generated sentence.",
"Kiddon et al. (2016) propose the neural checklist model to explicitly track what has been mentioned and what left to say by splitting these two into different lists.",
"Our model is related to these models with respect to information representation and challenges from coverage and redundancy.",
"The most closely related one is the checklist model.",
"But it does not explicitly study information redundancy.",
"Also, the information we track is heterogeneous, and we track it in a different way, i.e. using Cumulative attention.",
"Due to loss of states across time steps, the decoder may generate duplicate outputs.",
"Attempts have been made to address this problem.",
"Some architectures try to utilize History attention records.",
"See et al. (2017) introduce a coverage mechanism, and Paulus et al. (2017) use history attention weights to normalize new attention.",
"Others are featured in network modules.",
"Suzuki and Na-gata (2017) estimate the frequency of target words and record the occurrence.",
"Our model shows that simply attending to history decoder states can reduce redundancy.",
"Then we use the context vector of attention to history decoder states to perform attention to the memory.",
"Doing this enables the decoder to correctly decide what to say at memory addressing time, rather than decoding time, thus increasing answer coverage and information enrichment.",
"Given a question q and a memory M storing related information, our task is to retrieve all the answer words from the memory, generate an answer sentence x , and use the rest information as enrichment.",
"Answer Coverage is the primary objective of our task.",
"Since many answers contain multiple words, the system needs to cover all the target words.",
"Information Redundancy is one challenge for this task.",
"It is well noticed that the decoder language model may lose track of its state, thus repeating itself.",
"Also, the decoder needs to reason over the semantic gap between heterogeneous memory slots, figuring out different keys may refer to the same value.",
"These two kinds of redundancy should both be addressed.",
"Information Enrichment is another challenge.",
"It requires the decoder to interact with the memory effectively and use the right word to enrich the answer.",
"The tradeoff between redundancy and cov-erage/enrichment is one of our main considerations.",
"This is because when the decoder generates a word, it either generates a new word or a mentioned word.",
"The more answer words and information enrichment are considered, the more likely the model repeats what it has already generated.",
"Our model consists of the question encoder, the heterogeneous memory, and the decoder.",
"The encoder embeds the question into a vector representation.",
"The decoder reads questions, retrieves the memory, and generates answer sentences.",
"We use a Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) for question encoding and encode the question into an embedding.",
"It takes every word embedding ( q 1 , q 2 ...q n ) of question words as inputs, and generates hidden states s t = LST M enc ( q t , s t 1 ) .",
"These s are later used for decoder's attention.",
"The last hidden state s n is used as the vector representation of the question, and is later put into the initial hidden state of the decoder.",
"We use a key-value memory M to represent the information heterogeneity.",
"In our experiments, we study information from KB, topic words, and words extracted from documents.",
"The memory is formatted as ( ( m K 0 , m V 0 ) , ( m K 1 , m V 1 ) ... ( m K n , m V n ) ), where m K i and m V i are respectively the key embedding and word embedding for the i -th memory slot.",
"The vocabulary for keys V key consists of all predicates in the KB, and all tags we use to classify the value words (e.g: director, actor, or release_year ).",
"The vocabulary for values V val consists all related words from web documents, subjects and objects from the KB.",
"This memory is later used in two ways:",
"1. the decoder uses its previous hidden state to perform attention and generate context vectors.",
"2. the decoder uses the updated hidden states as pointers (Vinyals et al., 2015) to retrieve the memory and copy the memory contents into the decoder's output.",
"As in the standard encoder-decoder architecture with attention, the word embedding of the decoder's previous time step x t and context vector c t is fed as the input of the next time step, and the hidden state h t is updated then.",
"The initial hidden state is the question embedding concatenated with average memory key and value: h t = LST M dec ( x t , c t , h t 1 ) h 0 = [ s n , avg ( m K ) , avg ( m V )] where [ , ] denotes concatenation.",
"As shown in figure 2, to match the key-value memory representation, we use three linear transformations to convert the decoder's current h t into h N t , h K t , and h V t : h N t = W n h t h K t = W k h t h V t = W v h t where the W s are initialized as identity matrix I = diag (1 , 1 ... 1) .",
"h N t will be projected to normal word vocabulary V norm to form a distribution p N t .",
"h K t and h V t will be used as pointers to perform attention to memory keys m K and values m V , respectively, and forms two distributions: p MK t and p MV t .",
"We use the average of the two as distribution over the memory: p M t = ( p MK t + p MV t ) / 2 .",
"By doing this, we bridge the decoder's semantic space with the memory's semantic space, and explicitly maintains heterogeneity.",
"The decoder then uses a gating mechanism g = sigmoid ( W g h t + b g ) to decide whether the output x t comes from the normal vocabulary or the memory.",
"By mixing p N t and p M t with g , we get the distribution for the next decoder output: P ( x t | q, M, x 0 , x 1 , ...x t 1 ) = (1) g P ( X t = w k | q, M, x 0 , x 1 ...x t 1 ) + (1 g ) P ( X t = m k | q, M, x 0 , x 1 ...x t 1 ) where P ( X t = w k | q, M, x 0 , x 1 ...x t 1 ) = p N t P ( X t = m k | q, M, x 0 , x 1 ...x t 1 ) = p M t The three h s are then recorded as history states for later decoding time steps to perform the self-attention.",
"We will explain this in the next section.",
"As shown in Figure 3, our Cumulative Attention mechanism is exploited similarly to a multi-hop attention (Sukhbaatar et al., 2015).",
"The difference is that the multi-hop attention uses context vector over one single memory at different hops, while our Cumulative Attention utilizes the context vector to query different memories.",
"As shown in the left part of Figure 3, the decoder first performs self-attention to its history h N t , h K t , and h V t , and generates corresponding context vectors c as: c HN t = attn ( h t 1 , hist ( h N t )) c HK t = attn ( h t 1 , hist ( h K t ) c HV t = attn ( h t 1 , hist ( h V t )) 188 Figure 3: The Cumulative Attention Mechanism where c = attn ( query, memory ) denotes the attention function (Bahdanau et al., 2014), and the decoder's history states are defined as: hist ( h N t ) = ( h N 0 , h N 1 , ...h N t 1 ) hist ( h K t ) = ( h K 0 , h K 1 , ...h K t 1 ) hist ( h V t ) = ( h V 0 , h V 1 , ...h V t 1 ) The overall context vector is obtained through concatenation : c H t = [ c HN t , c HK t , c HV t ] , which is then used together with h K and h V to perform attention to m K and m V , respectively: c MK t = attn ([ h K t 1 , c H t ] , m K ) c MV t = attn ([ h V t 1 , c H t ] , m V ) where m K = ( m K 0 , m K 1 ...m K n ) and m V = ( m V 0 , m V 1 ...m V n ) , as shown in the right part of Figure",
"3. The decoder also performs attention to the question to get context vector c Q t , as in the standard seq2seq attention model.",
"At time step t , all context vectors are concatenated: c t = [ c Q t , c H t , c MK t , c MV t ] to form the current input to the decoder.",
"The decoder takes the context vector, the previous output, and the previous state to update its state, then generates a distribution for the next token, as shown in Section 4.1.",
"We use the greedy decoding approach and choose the word with the highest probability as the current output.",
"For optimization, we jointly optimize the negative log-probability of the output sentence and the cross entropy H for gate g .",
"Since g is the probability about whether the current output comes from the memory or the vocabulary, we can extract the label for g by matching sentence words with the memory.",
"The overall loss function L can be written as: L = N t =1 log ( P ( x t | q, M, x 0 ...x t 1 )) + H ( g, g ) We optimize L with gradient descent based optimizers.",
"Our experiments are designed to answer the following questions: (1) whether our model can properly utilize heterogeneous memories to generate readable answer sentences, (2) whether our model can cover all target answers during generation, (3) whether our model can introduce related knowledge in the output while avoiding repetition.",
"Our task requires a question, and a memory storing all the answer words and related knowledge as input, and produces a natural, readable sentence as the output.",
"Unfortunately, there is no existing dataset that naturally fits to our task.",
"We thus tailor the WikiMovies 1 dataset according to our requirements.",
"This WikiMovies dataset was originally constructed for answering simple factoid questions, using memory networks with different knowledge representations, i.e., structured KB ( KB entries in Table 2), raw textual documents ( Doc ), or processed documents obtained through information extraction ( IE ), respectively.",
"The first is in the classic subject-predicate-object format.",
"The second contains sentences from Wikipedia and also sentences automatically generated from predefined templates.",
"The third is in the subject-verb-object format, collected by applying off-the-shell information extractor to all sentences.",
"1 http://fb.ai/babi 189 The original data format Question Who directed the film Blade Runner?",
"As shown in Table 2, we treat each question in WikiMovies with its original answer (usually one or more words) as a QA pair, and one of the question's supportive sentences (ei-ther from Wikipedia or templates) as its gold-standard answer sentence.",
"For each question, the memory will contain all knowledge triples about the question's topic movie from the KB entries , and also include entities and keywords extracted from its IE portion.",
"For each entry in KB entries , we use the predicate as the key and the object as value to construct a new entry in our memory.",
"For those from IE , we keep the extracted tags as the key and entities or other expressions as the value.",
"Given a question, if an en-tity/expression in the memory is not the answer, it will be treated as information enrichment.",
"According to whether the supportive sentences are generated by predefined templates or not, we split the dataset into WikiMovies-Synthetic and WikiMovies-Wikipedia .",
"The resulting WikiMovies-Synthetic includes 115 question patterns and 194 answer patterns, covering 10 topics, e.g., director, genre, actor, release year, etc.",
"We follow its original data split, i.e., 47,226 QA-pairs for training, 8,895 for validation and 8,910 for testing.",
"In WikiMovies-Wikipedia , answer sentences are extracted from Wikipedia, admittedly noisy in nature.",
"Note that there are more than 10K Wikipedia sentences that cannot be paired with any questions.",
"We thus left their questions as blank and treat it as a pure generation task from a given memory, which can be viewed as a form of data augmentation to improve sentence variety.",
"We split WikiMovies-Wikipedia the dataset randomly into 47,309 cases for training, 4,093 for testing and 3,954 for validation.",
"We treat normal words occurring less than 10 times as UNK , and, eventually, have 24,850 normal words and 37,898 entity words.",
"We cut the maximum length of answer sentences to 20, and the maximum memory size to 10, which covers most cases in both synthetic and Wikipedia datasets.",
"We evaluate our answer sentences in terms of answer coverage , information enrichment , and redundancy .",
"For cases with only one answer word, we design C single to indicate the percentage of cases being correctly answered.",
"Cases with more than one answer word are evaluated by C part , i.e., the percentage of answer words covered correctly, and C perfect is the percentage of cases whose answers are perfectly covered.",
"Here, the definition of coverage is similar in spirit with the conventional recall as both measure how many gold words are included in the output.",
"Specifically, C part is essentially the same as recall with respect to its own cases.",
"Note that perfect coverage is the most diffi-cult, while single coverage is the easiest one.",
"For Enrich , we measure the number of none-answer memory items included in the output.",
"Regarding Redundancy , we calculate the times of repetition for memory values in the answer sentence.",
"We also compute BLEU scores (Papineni et al., 2002) on the WikiMovies-Wikipedia , as an indicator of naturalness, to some extent.",
"We compare our full model (HS-CumuAttn) with state-of-the-art answer generation models and constrained sentence generation models.",
"Our first baseline is GenQA (Yin et al., 2015), a standard encoder-decoder model with attention mechanism.",
"We equip it with our Key-Value style heterogeneous memory.",
"We also compare with its two variants.",
"HS-GenQA: we split its decoder state into heterogeneous representations.",
"The other one, 190 Model Redundancy C single C part C perfect Enrich GenQA 0.1109 91.25% 69.19% 38.92% 0.1535 HS-GenQA 0.1218 94.10% 76.47% 50.10% 0.1951 GenQA-AttnHist 0.1280 95.99% 73.44% 44.94% 0.1903 CheckList 0.1176 93.80% 76.32% 50.04% 0.1963 HS-AttnHist 0.1295 97.17% 77.90% 51.55% 0.1996 HS-CumuAttn 0.0983 98.15% 77.28% 50.79% 0.1665 Table 3: Results on the WikiMovies-Synthetic dataset Model BLEU Redundancy C part C perfect Enrich GenQA 42.50 0.2603 62.80% 18.24% 0.5903 CheckList 43.69 0.2744 63.42% 18.23% 0.6094 HS-CumuAttn 44.97 0.2385 64.06% 19.09% 0.6218 Table 4: Results on the WikiMovies-Wikipedia dataset GenQA-AttnHist, is enhanced with a history attention during decoding.",
"CheckList (Kiddon et al., 2016) is the state-of-the-art model for generating long sentences with large agenda to mention.",
"It keeps words that have been mentioned and words to mention using two separate records, and updates the records dynamically during decoding.",
"To adapt to our task, we modify CheckList with a question encoder and a KV memory.",
"Our model is implemented with the Tensorflow framework 2 , version 1.2.",
"We use the Adam optimizer (Kingma and Ba, 2014) with its default setting.",
"The embedding dimension is set to be 256, as is the LSTM state size.",
"We set the batch size to 128 and train the model up to 80 epochs.",
"As mentioned, there is a tradeoff between Cov-erage/Enrichment and Redundancy.",
"To set up a more fair comparison for different models, we ask the control group to reach a comparable level of Redundancy , i.e., approximately 0.11-0.12 on WikiMovies-Synthetic and 0.26-0.27 on WikiMovies-Wikipedia .",
"Keeping the Redundancy in around the same bucket, we compare their Coverage and Enrichment.",
"Let us first look at the performance on the Synthetic set in Table",
"3. GenQA is originally proposed to read only one single fact during decoding, so it is not surprising that it has the lowest answer coverage (38.92% C perfect ) 2 www.tensorflow.org Question the movie Torn Curtain starred who?",
"and information enrichment (0.1535).",
"After splitting the decoder state, HS-GenQA obtains sig-nificant improvement in both coverage (50.10% C perfect ) and enrichment (0.1952).",
"When considering history for attention, GenQA-AttnHist achieves even better coverage ( +3.% in C part and +5% in C perfect ).",
"By combining these two mechanisms, HS-AttnHist achieves the best perfect coverage, 51.55%.",
"Although CheckList is not originally designed for our task, it still gives a strong performance (50.04% C perfect and 0.1963 enrich-ment), at a slightly lower redundancy (0.1176).",
"Finally, our full model, HS-CumuAttn, achieves the best single coverage 98.15%, and comparable par-tial/perfect coverage, with the lowest redundancy (0.0983).",
"Due to the lower level of redundancy, HS-CumuAttn does not include as much enrichment as other strong models, but still outperforms GenQA.",
"Compared to vanilla GenQA, HS-GenQA splits the decoder states, thus improves the decoder's memory addressing process by performing attention separately, leading to improvements in both coverage and enrichment.",
"Improvements of GenQA-AttnHist are of a different rationale.",
"Looking at the history enables the decoder to avoid what are already said.",
"Compared with HS-GenQA, GenQA-AttnHist improves Enrichment by avoiding repetition when introducing related information, while, HS-GenQA improves Enrichment by better memory addressing to select proper slots.",
"Combining the two mechanisms together gives HS-AttnHist the best performance in Enrichment.",
"However, HS-AttnHist still suffers from the repetition issue, to certain extent.",
"Because when choosing memory content, there is no explicit mechanism to help the decoder to avoid repetitions according to the history (left of Figure 4).",
"Therefore, a generated word may still be chosen again at the memory addressing step, leaving all the burden of avoiding repetition to the generation step.",
"Our Cumulative Attention mechanism is designed to utilize the context vector of the history to address the memory, thus helps avoid choosing those already mentioned slots at memory addressing time (right of Figure 4), leading to almost the best coverage with the lowest redundancy.",
"Now we compare the three main models, GenQA, CheckList and our HS-CumuAttn on WikiMovies-Wikipedia (Table 4), which is admittedly more challenging than WikiMovies-Synthetic .",
"We skip the C single metrics here since most questions in WikiMovies-Wikipedia contain more than one answer word.",
"It is not surprising that 192 CheckList, with a lower redundancy, still outperforms GenQA in almost all metrics, except C perfect , since CheckList is originally designed to perform well with larger agenda/memory and longer sentences.",
"On the other hand, our model, HS-CumuAttn, achieves the best performance in all metrics.",
"Although the BLEU score is not designed to fully reflect the naturalness, it still indicates that our model can output sentences that share more n-gram snippets with reference sentences and are more similar to those composed by humans.",
"Case Study and Error Analysis Table 5 provides the system outputs from different models for an example question.",
"We can see that GenQA may lose track of the decoder history, and repeat itself ( and and ), because there is no explicit mechanism to help avoid repetition.",
"Also, it lacks informativeness and may not utilize other information stored in the memory.",
"CheckList keeps records of what have been said and what are left to mention, thus reaches a good answer coverage.",
"But its decoder is unable to explicitly address separate components within one memory slot, so it may not realize that the two Julie Andrews s are essentially the same person.",
"HS-CumuAttn is able to find all the answer words correctly and also include the director into the sentence.",
"After generating Paul Newman , the Cumulative Attention mechanism enables the model to realize that Paul Newman in slot 2 has been said, and Paul Newman in slot 6 is the same as slot 2, so it should not choose the 6th slot again.",
"Rather it should move to Julie Andrews .",
"Although the decoder may figure out the two Paul Newman are the same during decoding, the Cumulative Attention can explicitly help make the clarification during memory addressing.",
"Intuitively, the attention across memory and history induces a stronger signal for the decoder to gather the right information.",
"Table 6 lists more typical imperfect output from our model.",
"In question 1, there is considerable redundancy in the memory, but our decoder is still able to avoid repeatedly choosing the same entities from difference sources, though it produces a \"_UNK\" showing a slight incoherence.",
"We think it comes from the gate g as it fails to decide that the current word should come from the memory.",
"In question 2, the model correctly chooses the memory slot, but outputs the word \" directed \" while the correct word should be \" written \".",
"This also shows an word choice inconsistency between the language model and the memory retrieval.",
"Question 3 makes the same mistake, where it indeed chooses the right answer, but adds an incorrect word \" written \".",
"We also observe a pair of additional parentheses, which are often used to accomodate movie tags , but we do not see any tags in this memory, so it has to be left blank.",
"Question 4 shows an incorrect memory retrieval, where the decoder should have chosen slot 5 as the movie name.",
"Question 5 is generally good enough, except the same parenthesis error as in question",
"4. It is also interesting to see additional descriptions like \" Australian \", \" supernatural \" and \" drama \" in question 2, 3, and 5, introduced by the language model, rather than the memory.",
"Although our model prevents repetition and obtains general naturalness, it cannot guarantee that the decoder can precisely use the right language to describe the memory information.",
"We see the general readability of these sentences, yet they are still not as good as human composed ones.",
"It is fairly subtle for the decoder to collaborate with the memory in different levels of semantics.",
"The semantic coherency and word choice consistency is still a challenge in natural language generation.",
"In this paper, we propose a novel mechanism within an encoder-decoder framework to enable the decoder to actively interact with a memory by taking its heterogeneity into account.",
"Our solution can read multiple memory slots from different sources, attend across the generated history and the memory to explicitly avoid repetition, and enrich the answer sentences with related information from the memory.",
"In the future, we plan to extend our work through 1) investigating more sophisticated structures in the memory such as knowledge graph, 2) solving more complex questions, such as those involving deep reasoning over multiple facts.",
"This work is supported by National High Technology R&D Program of China (Grant No.2015AA015403), Natural Science Foundation of China (Grant No. 61672057, 61672058).",
"For any correspondence, please contact Yansong Feng."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"result",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"objective",
"method",
"objective",
"other",
"other"
] |
[
"Emotion-cause pair extraction aims to extract all potential pairs of emotions and corresponding causes from unannotated emotion text.",
"Most existing methods are pipelined framework, which identifies emotions and extracts causes separately, leading to a drawback of error propagation.",
"Towards this issue, we propose a transition-based model to transform the task into a procedure of parsing-like directed graph construction.",
"The proposed model incrementally generates the directed graph with labeled edges based on a sequence of actions, from which we can recognize emotions with the corresponding causes simultaneously, thereby optimizing separate subtasks jointly and maximizing mutual benefits of tasks interdependently.",
"Experimental results show that our approach achieves the best performance, outperforming the state-of-the-art methods by 6.71% ( p < 0 . 01 ) in F 1 measure.",
"Emotion-cause pair extraction (ECPE) is a new task to identify emotions and the corresponding causes from unannotated emotion text (Xia and Ding, 2019).",
"This involves several subtasks, including 1) Extracting pair components from input text, e.g., emotion detection and cause detection; 2) Combining all the elements of the two sets into emotion-cause pairs and eliminating the pairs that do not exist a causal relationship.",
"For the former subtask, a clause can be categorized into emotion, which usually contains an emotion keyword to express specific sentiment polarity, or cause, which contains the reason or stimuli of an observed emotion.",
"Then, the set of all possible emotion-cause pairs will be fed into the second subtask to determine the relationship.",
"In general, it is an essential issue in emotion analysis since it provides Co-Corresponding Authors Figure 1: An example of emotion-cause pair extraction.",
"a new perspective to investigate how emotions are provoked, expressed, and perceived.",
"Figure 1 shows an example of ECPE, and the text is segmented into three clauses.",
"In this instance, only the second clause and the third clause hold an emotion causality, where I lost my phone while shopping is the cause of emotion I feel sad now .",
"Thus, the extracted results of this sample should be { I lost my phone while shopping , I feel sad now } .",
"The goal of ECPE is to identify all the pairs that have emotion causality in an emotion text.",
"However, from both theoretical and computational perspectives, due to the inherent ambiguity and subtlety of emotions, it is hard for machines to build a mechanism for understanding emotion causality like human beings.",
"Previous approaches mostly focused on detecting the causes towards the given annotation of emotions, which was followed by most of the recent studies in this field (Lee et al., 2010; Gui et al., 2014; Gao et al., 2015; Gui et al., 2016, 2017; Li et al., 2018; Xu et al., 2019; Fan et al., 2019).",
"Nevertheless, it suffers that emotions must be annotated before extracting the causes, which limits the applications in real-world scenarios.",
"Towards this issue, Xia and Ding (2019) presented a new task to extract emotion-cause pairs from the unannotated text.",
"However, they followed a pipelined framework, which models emotions and causes separately, rather than joint decoding.",
"Hence, to overcome the drawback of error propagation may occur in existing methods.",
"Ideally, the emotion-cause structure should be considered as an integral framework, including representation learning, emotion-cause extraction, and reasoning.",
"To this end, we transform the ECPE problem into a procedure of directed graph construction, from which emotions and the corresponding causes can be extracted simultaneously based on the labeled edges.",
"The directed graph is constructed by designing a novel transition-based parsing model, which incrementally creates the labeled edges according to the causal relationship between the connected nodes, through a sequence of defined actions.",
"In this process, the emotion detection, cause detection, and their causality association can be jointly learned through joint decoding, without differentiating subtask structures, so that the maximum potential of information interaction between emotions and causes can be exploited.",
"Besides, the proposed model processes the input sequence in a psycholinguistically motivated left to right order, consequently, reducing the number of potential pairs needed to be parsed and leading to speed up (if all clauses are connected by Cartesian products, the time complexity will be O ( n 2 ) ).",
"Regarding feature representation, BERT (Devlin et al., 2019) is used to produce the deep and contex-tualized representation for each clause, and LSTMs (Hochreiter and Schmidhuber, 1997) are performed to capture long-term dependencies among input sequences.",
"In addition, action history and relative distance information between the emotion-cause pairs are also encoded to benefit the task.",
"To summarize, our contribution includes: Learning with a transition-based framework, so that end-to-end emotion-cause pair extraction can be easily transformed into a parsing-like directed graph construction task.",
"With the proposed joint learning framework, our model can extract emotions with the corresponding causes simultaneously, often with linear time complexity.",
"Performance evaluation shows that our model statistically significant improvements over the state-of-the-art methods on all the tasks 1 .",
"given a piece of emotion text d n 1 = ( c 1 , c 2 , . . . , c n ) , which consists of several manually segmented clauses.",
"The goal of ECPE is to output all potential pairs where exist emotion causality: P = { , ( c e , c c ) , } (1) where c e is an emotion clause, and c c is the corresponding cause clause.",
"Note that, the previous emotion cause extraction (ECE) task aims to extract c c given the annotation of c e : { c c c e } .",
"In comparison, the ECPE is a new and challenging task since there is no annotation provided in the emotion text.",
"Similar as the traditional ECE task, the ECPE is also defined at the clause level, because it is difficult to describe emotion causes at the word or phrase level.",
"That is, in this paper, the emotion and cause are refer to emotion clause and cause clause, respectively.",
"We present a new framework aimed at integrating the emotion-cause pair extraction into a procedure of parsing-like directed graph construction.",
"The proposed framework incrementally constructs and labels the graph from input sequences, scoring partially segmented results using rich non-local features.",
"Figure 2 shows the overall architecture of the proposed framework.",
"In the following, we first introduce how to construct the directed graph based on a novel transition-based system, then the details of feature representation will be described.",
"Let G = ( V, R ) be an edge-labeled directed graph where: V = { 1 , 2 , . . . , n } is the set of nodes that correspond to clauses in the input text and",
"R = VR V is the set of labeled edges.",
"We will denote a connection between a head node i V and a modifier node j V as i l j , where l { l t , l n } is the causality label connecting them.",
"l t indicates the node i is the cause of the emotion node j while l n indicates node j is an emotion but node i is not the corresponding cause.",
"Besides, other nodes irrelevant to the final result have no edges.",
"Note that, in this task, a node can be emotion and the corresponding cause simultaneously.",
"Furthermore, an emotion node can also be associated with multiple causes.",
"Thus, the acyclicity and single-head constraints are not necessary for our model, as arbitrary graphs are allowed.",
"We build the directed graph by designing a novel transition-based parser.",
"Formally, each state of our parser is represented by a tuple: S = ( , , E, C, R ) , where and are disjoint lists called stack and buffer , which store the indices of nodes that have been processed and to be processed, respectively.",
"E is the set of emotions, and C is the set of causes.",
"R is used to store the edges generated so far.",
"Besides, action history is stored to a list A .",
"The definition of action set plays a crucial role in the transition-based system, and it relies on the type of task.",
"As shown in Table 1, we define 6 types of actions based on our empirical observation, and their logics are summarized as follows: SHIFT (SH).",
"RIGHT-ARC l t (RA l t ).",
"It assigns an edge from 1 to 0 with label l t : 1 l t 0 , then copies 0 to E and pops 1 from to C .",
"LEFT-ARC l t (LA l t ).",
"It assigns an edge from 0 to 1 with label l t : 1 l t 0 .",
"Then copies 1 to E and pops 0 from to C RIGHT-ARC l n (RA l n ).",
"Adds a relation from 1 to 0 with label l n : 1 l n 0 .",
"Then pops 1 out of and only copies 0 to E .",
"LEFT-ARC l n (LA l n ).",
"It denotes a relation from 0 to 1 : 1 l n 0 and copies 1 to E .",
"Note that, we move 0 to the top of to improve coverage rather than pops 0 , because 0 may be the cause of incoming nodes in the .",
"CYCLE-ARC (CA).",
"It assigns a loop edge on the node 0 with label l t and then copies 0 to both E and C .",
"Action Constraints.",
"To ensure that each parser state is valid, we need to specify some constraints on the action.",
"For example, RIGHT and LEFT can only be conducted when there are at least two elements in the .",
"We also empirically set a constraint that RIGHT-ARC l n will be performed when | 1 | 0 are both emotions but has no emotion causality.",
"Additionally, in practical, CYCLEARC may conflict with other actions, e.g., 0 is the cause of itself but is also the cause of 1 , which conflicts with the LEFT-ARC l t .",
"For simplicity and efficiency, we separate it from other actions and distinguish it by training a binary classifier only depends on the representation of 0 .",
"Table 2 illustrates the gold-standard sequence of transitions for the text in Figure 1. The parser state is initialized to ([ ] , [1 , 2 , 3] , , , ) and the terminal state is ([ . . . , $] , [ ] , E, C, R ) , where $ indicates the termination of transitions.",
"Search Algorithm.",
"For the ECPE task, we transform it into a procedure of directed graph construction by a sequence of actions.",
"The input is an emotion text d n 1 = ( c 1 , c 2 , . . . , c n ) and the output is the corresponding sequence of actions A m 1 = ( a 1 , a 2 , . . . , a m ) .",
"Hence, the task can be regarded as searching for an optimal action sequence A given the stream of clauses d n 1 : A = argmax A p ( A m 1 | d n 1 ) (2) Formally, at step t , our model predicts the next action based on the current system state S t and the action history A t 1 1 .",
"Let r t to denote the representation for computing the probability of the action a t at step t , this yields: p ( a t | r t ) = exp ( w (cid:62) a t r t + b a t ) (cid:80) a (cid:48) A ( S ) exp ( w (cid:62) a (cid:48) r t + b a (cid:48) ) (4) where w a denotes a learnable parameter vector and b a is a bias term.",
"Finally, the overall optimization function is: ( A , S ) = argmax A,S (cid:89) t p ( a t , S t +1 | A t 1 1 , S t ) = argmax A,S (cid:89) t p ( a t | r t ) (5) where the ECPE is merged into a transition-based action prediction task.",
"Thus, the task is modeled as: ( A , S ) = argmax A,S (cid:89) t p ( a t , S t +1 | A t 1 1 , S t ) (3) where a t is the generated action at step t , and S t +1 is the updated system state according to a t .",
"The set A ( S ) represents the legal actions that can be taken given the current parser state.",
"For efficient decoding, the maximum probability action is chosen greedily until the parsing procedure is termination.",
"We apply BERT to produce the representation for each clause and use LSTMs to capture long-term dependencies of each parser state.",
"Representation of Clause.",
"Given an emotion text d n 1 = ( c 1 , c 2 , . . . , c n ) consisting of n clauses and each clause c i = ( w i 1 , w i 2 , . . . , w il ) contains l words.",
"We formulate each clause as a sequence x i = ([CLS] , w i 1 , . . . , w il , [SEP]) , where [CLS] is a special classification token that the final hidden state is used as the aggregate sequence features and [SEP] is a dummy token not used in our model.",
"Thus, we obtain the hidden representation as h c i = BERT( x i ) R d b | x i | where d b is the size of hidden dimension and | x i | is the length of sequence x i .",
"Then, the text d n 1 can be represented as h d = [ h c 1 , h c 2 , . . . , h c n ] .",
"Representation of Parser State.",
"When the parsing starts, the parser state will be initialized to ([ ] , [1 , 2 , . . . , n ] , , , ) and a series of actions will consume the clauses in the buffer to incrementally build an output until reaches the terminal state ([ . . . , $] , [ ] , E, C, R ) , as shown in Table 2. Specifically, at step t , considering the triple ( t , t , A t ), where t = ( . . . , 1 , 0 ) , t = ( 0 , 1 , . . . ) and A t = ( . . . , a t 2 , a t 1 ) .",
"For the stack , to summarize the information from both directions, we use bidirectional LSTM to exploit two parallel passes, thus, the feature representation of t is denoted as: s t = LSTM s ([ . . . , 1 , 0 ] , [ . . . , 1 , 0 ]) (6) where s t = [ s t , s t ] that both s t and s t R d l | t | , d l is the size of hidden dimension of LSTM and | t | is the size of t .",
"Similarly, we can get the representation for t by: b t = LSTM b ([ 0 , 1 , . . . ] , [ 0 , 1 , . . . ]) (7) where b t = [ b t , b t ] that b t and b t R d l | t | where t is the size of t .",
"For action sequence, we map each action a to a distributed representation e a through a looking-up table E a , and apply an unidirectional LSTM to obtain the complete history of actions from left-to-right: t = LSTM a ( . . . , a t 2 , a t 1 ) (8) Once a new action a t is generated, the embedding e a t will be added into the rightmost position of the LSTM a .",
"To enhance the position relation between the pair ( 1 , 0 ) , we also represent their relative distance d as an embedding e d from a looking-up table E d .",
"The final representation of parser state at step t is the combination of these features.",
"Action Reversal.",
"Let us visit the example in Figure 1 again.",
"Reading it from left-to-right, as shown in the top of Figure 3, we see the clause I lost my phone while shopping trigger the emotion I feel sad now , so the predicted action would be RIGHTARC l t .",
"However, from a different perspective, we read it from right-to-left, as shown in the bottom Figure 3: Illustration of action reversal.",
"of Figure 3, the cause I lost my phone while shopping behind the emotion I feel sad now , so the predicted action should be reversed to LEFT-ARC l t .",
"That is, s t and s t should be regarded as different features to produce different action.",
"Based on this observation, we apply r t and r t to predict the original action and reversed action, respectively, which can be used to mine the deep directional information for this task: r t = ReLU([ s t 1 ; s t 0 ; b 0 t ; 1 t ; e d ]) (9) r t = ReLU([ s t 1 ; s t 0 ; b 0 t ; 1 t ; e d ]) (10) where ReLU is an activation function for nonlinearity.",
"Index 0 and 1 indicate the first and second representation of and , 1 indicates the last representation of action history.",
"Training.",
"By learning with the transition-based framework, we convert the gold output structure in a set of training data into a gold sequence of defined actions.",
"For each parser state at step t , we maximize the log-likelihood of the classifier in formula (5), which can be revised as: J ( ) = (cid:88) t log p ( a t | r t ) + log p ( a t | r t ) + log p ( c t | s 0 t ) + 2 || || 2 (11) where a t is the reversed action, and p ( c t | s 0 t ) is the predictive distribution of CYCLE-ARC which is separated from the other actions due to the action constraints.",
"is the coefficient of L 2 -norm regularization, and denotes all the parameters in this model.",
"Note that, during the test decoding, only r t and s 0 t are used to predict the next action.",
"To be consistent with previous approaches, we adopt the only benchmark (Gui et al., 2016) to evaluate our model by following (Xia and Ding,",
"In this paper, we stochastically divide the corpus into a training/development/test set in a ratio of 8:1:1.",
"In order to obtain statistically credible results, we evaluate our method 20 times with different data splits by following (Xia and Ding, 2019) and then perform one sample t -test on the experimental results.",
"The average results of Precision ( P ), Recall ( R ) and F-measure ( F 1 ) are employed to measure the performance.",
"Note that when we extract the emotion-cause pairs, we obtain the emotions and causes for each text simultaneously.",
"Thus, we also evaluate the performance of emotion extraction and cause extraction in our model.",
"We adopt BERT Chinese as the basis in this work 3 .",
"Adam optimizer is used for online learning (Kingma and Ba, 2015), and initial learning rates for the BERT layer and top MLP layer are set to 1e-5 and 1e-3, respectively.",
"The hidden size of MLP layer is set to 256, and the hidden size of all LSTMs is set to 128 with 1 layer.",
"The embeddings of position and action are initialized randomly with dimension 128 and keep unchanged during the training stage.",
"The dropout rate is 0.5, the batch size is 3, and the coefficient of L 2 term is 1e-5.",
"We train the model 10 epochs in total and adopt early stopping strategy based on the performance of development set.",
"Then, the highest F-measure model on the development set is used to evaluate the test set.",
"We first compare our transition-based model with the method proposed by (Xia and Ding, 2019),",
"2 http://news.sina.com.cn/society/ 3 Our BERT model is adapted from this implementation: https://github.com/huggingface/ pytorch-pretrained-BERT",
"which contains three models: 1) Indep: Emotion extraction and cause extraction are independently trained, then filtering the pairs that have no emotion causality; 2) Inter-CE: The difference is that the predictions of cause extraction are used to improve emotion extraction; 3) Inter-EC: Contrary to the Inter-CE, the predictions of emotion extraction are used to improve cause extraction.",
"It is the current state-of-the-art model for this task.",
"To compare with other joint models, we implement SL-BERT (Zheng et al., 2017) and MT-BERT (Caruana, 1993) for this task.",
"The former aims to joint extract entities and relations based on a novel tagging scheme with multiple labels and the other is a multi-task learning framework by sharing the hidden layers among all tasks.",
"We implement them both based on BERT to be consistent with our experimental setting.",
"We also evaluate our model by only removing the transition procedure to reveal the effect of the transition-based algorithm, denoted as -transition.",
"Besides, for a fair comparison, we use LSTM as the basic encoder of clauses instead of BERT and keep the same experimental setting by following (Xia and Ding, 2019), namely LSTM based .",
"Table 4 shows the experimental results.",
"With the transition-based algorithm, our proposed model achieves the best performance over all the three tasks, outperforming a number of competitive baselines by at least 1.74%, 3.30% and 3.33% in F 1 score, respectively.",
"The improvements are significant with p < 0 .",
"01 in one sample t -test.",
"Regarding pipelined approaches, Indep considers framework individually and ignores the fact that emotions and causes are usually mutually indicative, leading to the lowest performance.",
"On the contrary, Inter-CE and Inter-EC yield better results by exploiting the relevance between emotions and causes.",
"By comparing Inter-CE and Inter-EC, we find that the improvement of Inter-EC on cause extraction is much more than the improvement of Inter-CE on emotion extraction, thus Inter-EC shows better results.",
"Differently, our model jointly extracts emotion-cause pairs and shows consistent performance improvement over the Indep-CE and Indep-EC, demonstrating the superiority of one-stage model by reducing error propagation.",
"In comparison with other joint models, our proposed model significantly outperforms SL-BERT by 12.56%, 4.44 % and 5.52% in F 1 measure, respectively.",
"We guess that SL-BERT jointly identifies emotion-cause pairs but still follows an emotion cause pipelined decoding order.",
"In contrast, we achieve fully joint decoding with interleaving actions for all the three tasks, thereby achieving better information interaction.",
"Besides, our model also yields better results than MT-BERT, one possible reason is that the interdependence between the emotions and causes cannot be mined effectively only through parameter sharing.",
"We also show the results where BERT embeddings are replaced by LSTM from the input.",
"It can be seen that the results still outperform the existing methods by at least 3.06% in F 1 score.",
"Furthermore, when we remove the transition procedure, the performance drops heavily over all the three tasks, especially with a 7.87% decrease in F 1 measure on the ECPE task.",
"These results show that the improvements provided by the proposed transition system are more noticeable than other components.",
"To further evaluate the contribution of neural components, we conduct feature ablation experiments to study the effects of different parts.",
"As shown in Table 5, the F 1 score decreases most heavily without LSTM (-4.40%), which indicates that it is necessary to capture non-local dependencies among input clauses, and our model can benefit from it effectively.",
"Distance is also particularly relevant to the model by capturing the position information between the emotions and causes, which is consistent with our intuition that the closer a clause is to the emotion, the higher probability it should be the cause.",
"Seen from the results, the history of actions stored in action has a crucial influence on predicting the next action.",
"The results also show that reversal , which can be regarded as a data augmentation strategy, is useful by exploring the deep directional information.",
"Without buffer , the F 1 score drops 1.8% over the ECPE task.",
"It may be due to the reason that buffer can provide more valuable information about the succeeding sequence.",
"To gain more insights into the parsing procedure, we analyze the situations that emotion-cause pairs in an emotion text cannot be extracted entirely by our defined actions, as illustrated in Figure 4. For the pseudo sample in Figure",
"4(a), it can be parsed by the transition system using computation: SH(1); SH(2); SH(3);RA l n (2 l n 3); RA l t (1 l t 3); SH(4); RA l n (3 l n 4); SH($) Similarity, for the pseudo sample in Figure",
"4(b), we get the transition sequence by: SH(1); SH(2);RA l t (1 l t 2); SH(3); RA l n (2 l n 3); SH(4); LA l n (3 l n 4); SH($)",
"In both situations, our model can only extract one emotion-cause pair (i.e., RA l t (1 l t 3) and RA l t (1 l t 2) ,",
"respectively.), because the cause which belongs to another emotion has been popped during the parsing procedure.",
"Based on this observation, one crucial problem about the proposed model is how many situations involving the emotion-cause transformation can be covered by the action set defined here.",
"Although a formal theoretical proof is beyond the scope of this paper, we can empirically verify that the action set works well from Table 4. Going one step further, to further validate the actions, we input the texts into our transition system to obtain the pseudo-gold emotion-cause pairs P (cid:48) based on the annotation, which can give us the correct action to take for a given parse state.",
"Then we compare P (cid:48) with the gold-standard emotion-cause pairs P to see how similar they are.",
"On the whole dataset, we obtain an overall 98.5% F 1 score for (cid:104) P, P (cid:48) (cid:105) , which indicates the upper bound of our transition system can achieve 98.5% in F 1 score.",
"Thus, the defined action set here is capable of extracting emotion-cause pairs through a sequence of actions.",
"We also perform an experiment to understand the impact of action reversal on the performance.",
"Fig-(a) Without action reversal.",
"(b) With action reversal.",
"ure 5 shows the confusion matrices that present a comparison between the predicted actions and corrective actions.",
"The results shows that SHIFT , LEFTARC l n and RIGHT-ARC l n yield higher accuracy on both Figure",
"5(a) and Figure",
"5(b) since they are account for a large proportion of the total actions.",
"As expected, our model makes more mistakes involving the RIGHT-ARC l t and LEFT-ARC l t , which play decisive roles in identifying the emotion-cause pairs.",
"Especially for the LEFT-ARC l t action, there is only about 0.43% in the total actions, turning out to be the most difficult action to learn given the relatively small training samples.",
"Thus, as shown in Figure",
"5(a), the accuracy for LEFT-ARC l t is 0, which drops the overall performance heavily.",
"However, when we apply the action reversal into our model, boosting the accuracy of LEFT-ARC l t by 58.8% and further improving the overall performance.",
"We guess that based on action reversal, the original RIGHT action can be reversed to LEFT and vice versa, so that double the training actions.",
"The results in Figure 5 show that our proposed model can capture this subtlety of emotions effectively by exploiting the deep directional information through action reversal strategy.",
"Different from the traditional emotion analysis, which aims to identify emotion categories in text.",
"Emotion cause extraction (ECE) reveals the essential information about what causes a certain emotion and why there is an emotional change.",
"It is a more challenging task due to the inherent ambiguity and subtlety of emotion expressions.",
"Lee et al. (2010) first defined the emotion cause extraction as a word-level extraction task.",
"manually constructed a dataset from the Academia Sinica Balanced Chinese Corpus and generalized a series of linguistics rules based on the dataset.",
"Based on this setting, there are some studies have been exploited for this task such as rule-based methods (Li and Xu, 2014; Gao et al., 2015; Yada et al., 2017) and machine learning methods (Ghazi et al., 2015; Song and Meng, 2015).",
"Chen et al. (2010) converted the task from word-level to clause-level due to a clause may be the most appropriate unit to detect causes, and extracted causes using six groups of manually constructed linguistic cues.",
"By following this task setting, Gui et al. (2014) extended the rule-based features to 25 linguistics cues, then trained classifiers on SVM and CRFs to detect causes.",
"Gui et al. (2016) released a new Chinese emotion cause dataset collected from SINA city news 4 and proposed a multi-kernel based method to identify emotion causes.",
"Following this corpus, Xu et al. (2019) proposed a learning to re-rank method based on a series of emotion-dependent and emotion-independent features.",
"Recently, inspired by the success of deep learning architecture, some studies focused on identifying emotion causes with well designed neural network and attention mechanism (Gui et al., 2017; Li et al., 2018, 2019; Fan et al., 2019; Xia et al., 2019; Ding et al., 2019).",
"All of the above studies extracted emotion causes rely on the given emotion annotations, which limits the application in real-world scenarios due to the expensive annotations.",
"Targeting this problem, Xia and Ding (2019) proposed a novel task based on ECE, namely emotion-cause pair extraction (ECPE), which aims at extracting emotions and the corresponding causes from unannotated emotion text.",
"However, they followed a pipelined framework which first detects emotions and causes with individual learning frameworks, then performed emotion-cause pairing to eliminate the unmatched pairs, leading to a drawback of error propagation.",
"In this work, we design a novel transition-based model to extract emotions and causes simultaneously to maximize the mutual benefits of subtasks, thus alleviating the drawback of error propagation.",
"Transition-based system is usually designed to model the chunk-level relation in a sentence for dependency parsing (Zhang and Nivre, 2011; Wang et al., 2015; Fernandez-Gonzalez and Gomez-Rodrguez, 2018).",
"Apart from its application in dependency parsing, transition-based method has 4 http://news.sina.com.cn/society/ also achieved great success in other natural language processing tasks, such as word segmentation (Zhang et al., 2016), information extraction (Wang et al., 2018b; Zhang et al., 2019), disfluency detection (Wang et al., 2017) and nested mention recognition (Wang et al., 2018a).",
"To the best of our knowledge, this is the first work which extracts the emotion-cause pairs in an end-to-end manner.",
"In this paper, we present a novel transition-based framework to extract emotion-cause pairs as a procedure of directed graph construction.",
"Instead of previous pipelined approaches, the proposed framework incrementally outputs the emotion-cause pairs as a single task, thereby the interdependence between emotions and causes can be exploited more effectively.",
"Experimental results on a standard benchmark demonstrate the superiority and robustness of the proposed model compared to a number of competitive methods.",
"In the future, one possible direction is creating complete graphs with their nodes being input clauses to achieve full coverage.",
"Besides, graph neural network-based (Kipf and Welling, 2016) methods are also worth investigating to model the relations among nodes for this task.",
"This work was partially supported by National Natural Science Foundation of China 61632011, 61876053, 61906185, Shenzhen Foundational Research Funding JCYJ20180507183527919, JCYJ20180507183608379, Key Technologies Research and Development Program of Shenzhen JSGG20170817140856618, EU-H2020 (grant no. 794196) and the project AWS13C008."
] | [
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"result",
"objective",
"objective",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"objective",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"Abstract",
"This paper addresses the problem of dialogue reasoning with contextualized commonsense inference.",
"We curate CICERO , a dataset of dyadic conversations with five types of utterance-level reasoning-based inferences: cause, subsequent event, prerequisite, motivation, and emotional reaction.",
"The dataset contains 53,105 of such inferences from 5,672 dialogues.",
"We use this dataset to solve relevant generative and discriminative tasks: generation of cause and subsequent event; generation of prerequisite, motivation, and lis-tener's emotional reaction; and selection of plausible alternatives.",
"Our results ascertain the value of such dialogue-centric commonsense knowledge datasets.",
"It is our hope that CICERO will open new research avenues into commonsense-based dialogue reasoning.",
"Conversational content on the internet is quickly growing, and such content holds valuable knowledge about how information exchange takes place among speakers.",
"A key step towards understanding such dialogues is gaining the ability to reason with the information shared in the dialogue.",
"To this end, we curate a dataset of dyadic conversations named CICERO ( C ontextual I zed C ommons E nse Infe R ence in dial O gues) , which contains inferences around the utterances in the dialogues.",
"The dataset focuses on five types of reasoning-based inferences for a given utterance in a dialogue: cause, subsequent event, prerequisite, motivation, and emotional reaction.",
"Arguably, making such reasoning-based inferences often demands commonsense knowledge, especially when the inference is implicit.",
"Fig. 1a shows such a case where the cause behind the target utterance is not explicit in the context.",
"allowed the annotator to infer a probable cause of the utterance.",
"On the other hand, commonsense can be crucial in sifting relevant information from the context.",
"Fig. 1b depicts an instance where the cause behind the target utterance is inferred from the context.",
"This inference can be explained by commonsense knowledge (see Fig. 3) such as repetitive consumption of the same food causes boredom dispelled by changing food achieved by eating at McDonald's .",
"Thus, it is reasonable to posit that such knowledge could aid to bridge the gap between the input and the target inference.",
"ATOMIC (Sap et al., 2019; Hwang et al., 2020) is one such dataset for commonsense reasoning-based inference, allowing for a large set of inference types.",
"However, ATOMIC is context-free, as it only provides inferences on short phrases, ignoring the broader context around them.",
"Making an inference on an entire utterance, on the other hand, requires understanding the context around it.",
"As per Grice's maxim (Grice, 1975), in conversations, the interlocutors provide any piece of 5010 information as is needed, and no more.",
"Thus, much of the information required to understand an utterance is likely interspersed along the dialogue, and not necessarily localized in the given utterance.",
"For instance, in the example in Figure 1b, understanding the cause for one of the speakers' desire to go to McDonald's requires the context of the previous utterances.",
"ATOMIC is thus not ideal for commonsense reasoning-based inferences on dialogues, where context is critical for understanding an utterance's implications.",
"We confirm this with our experiments in the subsequent sections (4).",
"GLUCOSE (Mostafazadeh et al., 2020) exclusively curates causal inferences cause, enable, and result in from monologues.",
"Thus, it is not ideal for making context-consonant inferences on the dialogues.",
"Also, dialogue-specific dimensions like motivation and reaction are beyond its scope.",
"On the other hand, CIDER (Ghosal et al., 2021a) does provide a dataset for commonsense-based inference on dialogues, but it is limited to inferences explicitly observable in the dialogues.",
"As such, sys-tems based on CIDER cannot e ectively speculate around the dialogue for implicit inference.",
"CICERO strives to bring the best of these three datasets by creating a dataset that can enable models to e ectively operate on a dialogue by considering the context and speculating when the answer is not apparent.",
"We create CICERO a large dataset of English dyadic conversations annotated with five types of inferences with the help of human annotators, who are instructed with a carefully crafted set of guidelines.",
"The annotators are given a dialogue and a target utterance, as exemplified in Fig. 2. The annotators are then asked to make an inference, posed as a question, about the target utterance.",
"They write a one-sentence answer that is grammatically correct, concise, and consistent with the dialogue.",
"The answer may contain both overt and speculative scenarios.",
"An overt scenario is explicitly or implicitly present in the dialogue context.",
"If such contextual scenarios answer the question, the annotators write them as a well-formed sentence.",
"However, in many cases, the dialogue may not hold the answer, neither explicitly nor implicitly.",
"In such cases, the annotators are asked to speculate plausible scenar-Linda would you care for some candies or cookies?",
"ios around the dialogue, using commonsense and world knowledge, to devise answers that do not contradict the given dialogue context.",
"Given the dialogue-target pair in Fig. 2, at least one of the following five inferences about the target is made by the annotators: Dad, what will we have for dinner?",
"Q1.",
"What is the event that directly causes (overt) or could cause (speculative) Target ?",
"The annotators consider if any of the events that are or likely to be antecedent to the target can cause the target .",
"Answer: Linda didn't exercise regularly during the winter.",
"Remark: The annotators provided possible, speculative answers as the dialogue itself does not provide any reason for Linda's weight gain.",
"Q2.",
"What subsequent event happens (overt) or could happen (speculative) following the Target ?",
"The annotators write about the event that happens or could happen following the target .",
"Additionally, annotators were told that sometimes, such subsequent events of the target are triggered or likely to be triggered by the target .",
"Answer: Linda starts a diet and tries to lose weight.",
"Q3.",
"What is (overt) or could be (speculative) the prerequisite of Target ?",
"Does the target have any direct prerequisite or dependency that has to happen or be fulfilled first?",
"(In most cases, prerequisite is the state / event which has to be satisfied before another event causes target .)",
"The answer is a state / event which enables the happening of the target .",
"In other words, prerequisites are the prior assumptions or background information that the interlocutors agree on about the context.",
"Answer: Linda was slimmer before the winter.",
"Remark: Annotators were required to understand the di erence between cause and prerequisite clearly before proceeding with the final annotation.",
"Cause of an event X is the event that directly causes X. Prerequisite of an event X is the condition which has to be satisfied in order for X to happen.",
"Q4.",
"What is an emotion or basic human drive that motivates or could motivate Target ?",
"Consider the basic human drives, needs (and / or likely emotions) of the speaker of the target .",
"Basic human drives and needs are food, water, clothing, warmth, rest, security, safety, intimate relationships, friends, prestige, feeling of accomplishment, self-fulfillment, creative activities, enjoyment, etc.",
"Do any of these human drives / states of mind / emotional feelings motivate the target ?",
"Answer: Not Applicable for this target.",
"the listener: A (or B)?",
"What could be the possible emotional reaction or responses of the listener with respect to the target ?",
"The annotators capture the appropriate emotion of the listener using the emotion terms listed in Table 1 verbatim or related words (e.g., anxious, confused, interested, etc).",
"Answer: The listener encourages Linda to maintain her diet.",
"annotators to adhere to the following guidelines:",
"Be creative in speculation.",
"Refrain from rephrasing the target and writing low-e ort trivial answers.",
"It is recommended to skip a question if rephrasing the target is the only possible answer.",
"Avoid repeating the same answer for distinct questions on the same target .",
"The answer must be consistent with the given dialogue.",
"It is recommended to base the answer on the most important phrase of the target should it contain multiple phrases.",
"DailyDialog (Li et al., 2017) covers dialogues from wide range of topics life, work, relationships, tourism, finance, etc.",
"The constituent utterances are labelled with emotion and dialogue-act.",
"MuTual (Cui et al., 2020) is a multi-turn dialogue reasoning dataset.",
"Given a dialogue history, the objective is to predict the next utterance by considering aspects such as intent, attitude, algebraic, multi-fact, and situation reasoning.",
"DREAM (Sun et al., 2019) is a multiple-choice reading-comprehension dataset collected from exams of English as a foreign language.",
"The dataset presents significant challenges as many answers are non-extractive and require commonsense knowledge and multi-sentence reasoning.",
"1. We remove dialogues that are too short or long on either utterance or word level.",
"Dialogues with fewer than five utterances or fewer than six words per utterance on average are removed.",
"Dialogues having more than 15 utterances or more than 275 words in total are also removed.",
"2. All three source datasets contain dialogues having near identical utterances.",
"We remove these near duplicate dialogues to ensure topical diversity of CICERO .",
"We use a sentence embedding model based on fine-tuned RoBERTa (Gao et al., 2021) to extract dense feature vectors of the dialogues.",
"We remove the duplicates assuming that a pair of duplicate dialogues have at least 0 .",
"87 cosine similarity.",
"We first determine the number of target utterances in D : if D has 16 utterances, then we select 2 or 3 targets; if it has 712 utterances then we select 35 targets; otherwise, we select 47 targets if it has more than 12 utterances.",
"We divide D into 23 segments having roughly equal number of consecutive utterances.",
"We choose roughly an equal number of the top-ranking utterances from each segment.",
"We call this set of utterances x 1 .",
"The ranking is performed using a sentence ranking algorithm (Erkan and Radev, 2004; Mihalcea and Tarau, 2004) with sentence-BERT embeddings (Reimers and Gurevych, 2019a).",
"We also select the longest utterances in D and the utterances that contain phrases such as I'm, I'd, I've, I'll or their expansions.",
"We call this set of utterances x 2 .",
"The sets x 1 and x 2 may not be disjoint.",
"Set x 3 consisting of the final utterance of D .",
"From x 1 x 2 :",
"Subsequent Event: 80% of the targets.",
"Both Cause and Prerequisite: 60% of the targets.",
"Exclusively Cause: 28% of the targets.",
"Exclusively Prerequisite: 12% of the targets.",
"From x 2 : Motivation for all targets.",
"Initially, we sample 50 random dialogues and manually annotate all the questions (as in 2.1) in those.",
"Each annotator is then evaluated on those dialogues, and is selected for the annotation task if 95% of his / her annotations are approved by us.",
"We constantly review and provide feedback to the annotators during the annotation process.",
"Annotators are also instructed to amend their answers.",
"Upon completion of the annotation, we employ three additional annotators who manually check the annotated samples and score their acceptability.",
"These annotators reached a consensus for approving 86% of these samples.",
"The samples not bearing majority agreement were removed from the dataset.",
"A ( u 1 ) ( u 1 ) ( u 1 ) : Hi, Jenny.",
"Is it true you're moving to London?",
"B ( u 2 ) ( u 2 ) ( u 2 ) : Yes, it is.",
"A ( u 3 ) ( u 3 ) ( u 3 ) : What made you decide to do that?",
"B ( u 4 ) ( u 4 ) ( u 4 ) : Work, mainly.",
"I'm sure I'll be able to find a job there.",
"A ( u 5 ) ( u 5 ) ( u 5 ) : You're probably right.",
"But where are you going to live?",
"B ( u 6 ) ( u 6 ) ( u 6 ) : I hope I'll find a flat to share with somebody.",
"That way it will be cheaper.",
"A ( u 7 ) ( u 7 ) ( u 7 ) : Yes, that's a good idea.",
"Are you taking your dog with you?",
"B ( u 8 ) ( u 8 ) ( u 8 ) : No, I don't think so.",
"My parents have o ered to take care of him, and I don't think he'd be happy in the city.",
"A ( u 9 ) ( u 9 ) ( u 9 ) : You're probably right.",
"But aren't you afraid of moving to such a big place, especially after living in a small village?",
"B ( u 10 ) ( u 10 ) ( u 10 ) : Not really.",
"I think I'll enjoy myself.",
"There's so much to do there; I expect I won't miss the countryside much and I can always come back and visit.",
"A ( u 11 ) ( u 11 ) ( u 11 ) : Well, I just hope you'll invite me to stay when you get settled.",
"B ( u 12 ) ( u 12 ) ( u 12 ) : Of course I will.",
"Target u 6 u 6 u 6 ; Inference: Cause ; Annotation: Being an expensive city, it is quite di cult to find an a ordable place to live in London.",
"Target u 10 u 10 u 10 ; Inference: Cause ; Annotation: Jinny realizes that a city like London will provide a great quality of life for her.",
"Target u 6 u 6 u 6 ; Inference: Subsequent Event ; Annotation: The listener gives an idea to Jenny to find the flat on some online portal for searching flatmates as well plenty of cheaper options.",
"Target u 10 u 10 u 10 ; Inference: Subsequent Event ; Annotation: Jenny inquired a social club in London and ask for their membership to utilize her free time.",
"Target u 4 u 4 u 4 ; Inference: Prerequisite ; Annotation: Jenny has completed her studies.",
"Following Table 3, a majority ( 59%) of the inferences in CICERO are causal in nature.",
"Again, roughly 80% of the inferences are speculative and context consonant.",
"CICERO is thus much more versatile in terms of its applications as compared to CIDER (Ghosal et al., 2021a) that only contains explicit contextual inferences.",
"CICERO also contains varied commonsense knowledge from general to physical and social commonsense (see Appendix B for more details).",
"We design generative and multi-choice question answering tasks on CICERO to evaluate dialogue-level commonsense-based reasoning capabilities of",
"The objective is to generate the answer to question q , representing one of the five inference types, for a target utterance u t in a dialogue D .",
"Each inference type has its respective q (illustrated in 4).",
"Task 1.1: Dialogue Causal Inference.",
"Causality pertains to causes and e ects of events and situations.",
"We formulate the dialogue causal inference task as generating the cause or subsequent event of an utterance as an answer to a causal question: 1. Cause: Given D , u t , generate the cause c t of u t .",
"2. Subsequent Event: Given D , u t , generate the subsequent event e t of u t .",
"3. Subsequent Event Clipped (Subsequent EC): Given u t , the dialogue up to u t : D : u t , generate the subsequent event e t of u t .",
"We consider two di erent scenarios for subsequent event , as the event often appear after the target utterance in the dialogue.",
"Hence, subtask 3 is more challenging to evaluate a models' ability to reason about unobserved e ects.",
"We extend subtasks 1, 2 to incorporate longer chains and formulate the chained generation task.",
"We consider utterances u t in our dataset that has both cause and subsequent event annotated i.e. c t u t e t .",
"The causal chain is considered as a triplet, and we formulate tasks where a missing segment has to be generated from the rest of the components: 4. Chained Cause : Generate c t from u t and e t .",
"5. Chained Subsequent Event (Chained SE) : Generate e t from u t and c t .",
"prerequisite / motivation / reaction of listener from a given D and u t .",
"The target u t is the final utterance of D for reaction generation.",
"Generating the prerequisite (task 1.2.1) requires an understanding of the dependency of events.",
"Generating the motivation (task 1.2.2) and reaction (task 1.2.3) is about learning basic human drives and emotions.",
"Note that, reaction generation is a di erent problem from dialogue response generation.",
"Responses follow utterance level distributions which are substantially di erent from emotional reactions.",
"Given dialogue D , target u t , one of the five questions (inference type) q , true answer a t , alternate choices F t = { f t 1 , f t 2 , f t 3 , f t 4 } , the CICEROMCQ task aims to select the correct answer a t (see Fig. 4) and additionally any answer among F t which might be correct.",
"The alternate choices F t are created through a combination of automated generation and human supervision as follows: We train a T5 large model on SNLI contradictory pairs (Bowman et al., 2015) and Time-Travel counterfactual pairs (Qin et al., 2019) to generate contradictions / counterfactuals from input sentences.",
"We use this model to generate a pool of alternate answers from the true annotated answers.",
"Alternate answers which have an embedding cosine similarity less than 0.9 with the true answer (from all-mpnet-base-v2 in Reimers and Gurevych (2019b)) and are contradictory w.r.t the true answer (from roberta-large-mnli ) are kept, and the rest are discarded.",
"The filtered set is termed N .",
"We use the adversarial filtering (AF) algorithm (Zellers et al., 2018) to select the four alternate answers F t from N .",
"For multi-choice QA tasks, AF is an e ective method to detect easily identifi-able alternate answers and replace them with more di cult candidates by detecting and reducing stylistic artifacts.",
"The algorithm is as follows:",
"(i) We start with annotated true answer a t and any four choices F t from N for all instances in our dataset to create D .",
"We randomly split D into D train (80%) and D test (20%) according to dialogue IDs.",
"(ii) A multi-choice QA model (discriminator) is trained on D train that scores all five choices for all instances in D test .",
"The highest scoring choice is considered as the predicted answer.",
"For a particular test instance, choices in F t that have lower scores than a t are replaced with other high scoring choices in N F t .",
"Answers in F t which are being replaced 5014 A: Can I help you?",
"(iii) F t now consists of relatively more di cult choices.",
"A new random split D train and D test is created, and we go back to step",
"(ii) .",
"The algorithm is terminated when the accuracy in successive D test reaches a convergence.",
"The final alternate choice set is termed as F t .",
"The AF algorithm ensures a robust final dataset D irrespective of the final train, validation, and test split.",
"We use a new roberta-large model to initialize the discriminator and train for 3 epochs before scoring and replacement in step",
"(ii) .",
"14 iterations were required for convergence in D test .",
"Annotators perform manual checking on the final AF selected choices F t .",
"They mark each of the alternate choices in F t in D to be speculatively correct or incorrect given the context.",
"Hence, instances might have correct answers in F t in addition to the originally annotated correct answer a t .",
"The final dataset statistics after this step are given in Table 3. Task 2.1: Single Answer Selection.",
"Consider instances where F t doesn't contain any correct answer.",
"The task is to select the correct answer a t among the five choices given D , u t , and q .",
"Task 2.2: All Answers Selection.",
"This task is performed on the entire dataset (including the subset of data which is used in Task 2.1.",
"There might be one or more correct answers for a particular instance resulting from the AF algorithm.",
"The task is to select all the correct answer(s) (including a t ) among the five choices given D , u t , and q .",
"We split our dataset in dialogue level where the training, validation and test instances are obtained from a total of 3477, 1097, 1098 distinct dialogues respectively.",
"This results in a 60:20:20 proportion of total annotation instances.",
"The three sets have 17365, 5370, and 5331 unique target utterances respectively.",
"We tune on the validation dataset and report results on the test dataset (average of 5 runs).",
"For the sake of brevity, the detailed hyperparame-ters are given in the supplementary material.",
"We use the following questions ( q ) for the five inference types for all the tasks: Cause : What is or could be the cause of target?",
"Subsequent Event : What subsequent event happens or could happen following the target?",
"Prerequisite : What is or could be the prerequisite of target?",
"Motivation : What is or could be the motivation of target?",
"Reaction : What is the possible emotional reaction of the listener in response to target?",
"CICERONLG (1.11.2).",
"We use large versions of T5 (Ra el et al., 2020) and GLUCOSE-T5 (Mostafazadeh et al., 2020) as our models.",
"GLUCOSE-T5 is a T5 large model that is pre-trained on the GLUCOSE dataset.",
"We concatenate q , u t , and the context c with separators to form the input to the model: q <sep> u t <sep> c .",
"The context c is formed by concatenating utterances of D : u t (subsequent event clipped) or D (all other tasks).",
"For the chained generation task, we additionally provide the cause / subsequent event as input.",
"The inputs are q <sep> u t <sep> subsequent event: e t <sep> c and q <sep> u t <sep> cause: c t <sep> c for cause and subsequent event generation, respectively.",
"The objective is to generate the answer as output in the sequence-to-sequence setup.",
"We use teacher forcing during training and beam search during inference.",
"CICEROMCQ Single Answer Selection (2.1).",
"We use RoBERTa-large , ELECTRA-large , T5-large , and Unified QA Large for this task.",
"The input to the models for RoBERTa-large , ELECTRA-large is the concatenation of question q , target u t , dialogue D , and candidate answers x j , j { 1 , ..., 5 } : <cls> q <sep> u t <sep> D <sep> x j .",
"Each score is predicted from the corresponding <cls> vector and the highest scoring one is selected as the answer.",
"For seq2seq 5015 models T5-large , and Unified QA Large , we use the following input q <sep> 1) x 1 2) x 2 3) x 3 4) x 4 5) x 5 <sep> u t <sep> D .",
"The output to be generated is the correct answer such as x 1 or x 2 .",
"CICEROMCQ All Answers Selection (2.2).",
"We use seq2seq models T5-large , and Unified QA Large as they can generate both single and multiple-answers (with separator tokens) as output.",
"The input is q <sep> 1) x 1 2) x 2 3) x 3 4) x 4 5) x 5 <sep> u t <sep> D .",
"The output to be generated are the correct answer(s), such as x 2 (single answer) or x 1 <sep> x 3 <sep> x 4 (multi-ple answers).",
"Here, x 1 x 5 denotes the five possible choices shu ed randomly.",
"Automatic Evaluation Metrics.",
"For generative tasks, we report the following metrics: BLEU (Pap-ineni et al., 2002), METEOR (Banerjee and Lavie, 2005), ROUGE (Lin, 2004), CIDEr (Vedantam et al., 2015), and Sem-Sim which computes the semantic cosine similarity of two sentences using the supervised RoBERTa-large sentence embedding model (Gao et al., 2021).",
"All scores are reported in the range of 0-1.",
"Human Evaluation Metrics.",
"Due to significant dissonance with human evaluation, automatic evaluation metrics are often considered not reliable for generation quality evaluation in literature.",
"Hence, we resort to human evaluation metrics.",
"The human annotators rate on an integer scale from 1 (worst) to 5 (best) on three coarse attributes: Creativity : As the majority of the inferences require speculation, this metric measures how creative the models and the annotators are.",
"Contextuality : Whether the generated or annotated inferences fit the context.",
"Fluency : Whether the generated or annotated inferences are grammatically correct.",
"Results of Automatic Evaluation.",
"The results for the generative tasks are reported in Table 4 and Table 5. We observe that the fine-tuned models perform quite similarly across various metrics in Table 4. The T5 model achieves the best performance in most of the experimental settings.",
"The results indicate that the causal types are more challenging to infer than the Motivation , and Reaction .",
"However, the models are posed to the most challenging instances in the case of Prerequisite type as inferring this type requires rich commonsense and back-Model BLEU2 METEOR ROUGE CIDEr Sem-Sim ( 1 . 1 . 1 ) C a u s e T5 0.1493 0.1630 0.2626 0.4560 0.6278 GLUCOSE-T5 0.1563 0.1634 0.2707 0.4915 0.6305 T5 0.0042 0.0200 0.0266 0.0237 0.3735 GLUCOSE-T5 0.0287 0.0560 0.0827 0.1332 0.4442 ( 1 . 1 . 2 ) SE T5 0.1619 0.1662 0.2760 0.4119 0.6276 GLUCOSE-T5 0.1611 0.1628 0.2778 0.4430 0.6297 T5 0.0045 0.0191 0.0264 0.0241 0.3865 GLUCOSE-T5 0.0001 0.0070 0.0024 0.0032 0.3073 ( 1 . 1 . 3 ) SEC li pp e d T5 0.1448 0.1549 0.2618 0.3099 0.6123 GLUCOSE-T5 0.1461 0.1523 0.2645 0.3238 0.6094 T5 0.0199 0.0439 0.0564 0.0762 0.4549 GLUCOSE-T5 0.0001 0.0066 0.0025 0.0034 0.3063 ( 1 . 2 . 1 ) P rere qu i s i t e T5 0.1002 0.1282 0.2176 0.3357 0.5902 GLUCOSE-T5 0.1001 0.1299 0.2197 0.3144 0.5896 T5 0.0043 0.0222 0.0279 0.0225 0.3541 GLUCOSE-T5 0.0108 0.0394 0.0625 0.0889 0.4392 ( 1 . 2 . 2 ) M o t i va t i o n T5 0.2503 0.1998 0.3781 0.7109 0.6973 GLUCOSE-T5 0.2582 0.2037 0.3840 0.7499 0.7048 T5 0.0033 0.0183 0.0257 0.0181 0.4038 GLUCOSE-T5 0.0174 0.0434 0.0632 0.0696 0.4053 ( 1 . 2 . 3 ) R e a c t i o n T5 0.2397 0.1939 0.3720 0.5177 0.6665 GLUCOSE-T5 0.2318 0.1903 0.3716 0.5364 0.6653 T5 0.0037 0.0201 0.0239 0.0167 0.3899 GLUCOSE-T5 0.0213 0.0459 0.0759 0.0719 0.4125 Table 4: Results of the CICERONLG task.",
"ground knowledge.",
"Hence, for this category, the models achieve a low score compared to rest of the inference categories.",
"We also notice that exposing the future utterances to the models help in attaining better inference performance for the relation type Subsequent Event .",
"The trained models perform worse when the future utterances are not available in the input as seen in the Subsequent Event Clipped task.",
"A significant drop of performance is noticed in the CIDEr metric.",
"For the chained generation tasks (1.1.4 and 1.1.5), we notice (refer to Table 5) a very similar trend in models' performance i.e., the models tend to perform better for these two experimental settings compared to only Cause (1.1.1) and Subsequent Event (1.1.2) predictions.",
"We can surmise that the additional cues from the available annotations of Subsequent Event type in the Chained Cause setting, and the Cause type in the Chained Subsequent Event setting are the key 5016 Metric Gold T5 GLUCOSE T5 GLUCOSE Creativity 4.7 3.8 3.9 2.4 1.9 Contextuality 4.8 4.1 4.3 2.1 2.1 Fluency 5.0 4.8 4.9 1.9 2.9 Table 6: Results of the human evaluation for CICERONLG .",
"Target u 1 u 1 u 1 ; Inference: Cause ; Answers: Gold The speaker did not eat breakfast.",
"T5 The speaker and the listener are going to a restaurant for lunch.",
"GLUCOSE-T5 The speaker and the listener are at a restaurant.",
"Target u 2 u 2 u 2 ; Inference: Cause ; Answers: Gold The restaurant down the street is famous for its delicious food.",
"T5 The listener is hungry and wants to eat something.",
"GLUCOSE-T5 The speaker is hungry and wants to order something to eat.",
"Target u 3 u 3 u 3 ; Inference: Motivation ; Answers: Gold The speaker is craving some savory fast food.",
"T5 The speaker is craving for a hamburger, fries and a chocolate sundae.",
"GLUCOSE-T5 The speaker is craving for a burger, fries and sundae.",
"Target u 6 u 6 u 6 ; Inference: Prerequisite ; Answers: Gold The speaker is a fitness freak and keeps track of his daily diet.",
"T5 The speaker is a healthy person.",
"GLUCOSE-T5 The speaker is a health conscious person.",
"Target u 7 u 7 u 7 ; Inference: Subsequent Event ; Answers: Gold The listener refused to eat anything that is unhealthy.",
"T5 The speaker and the listener decided to order some hot dogs.",
"GLUCOSE-T5 The speaker and the listener decided to order some hot dogs.",
"Target u 8 u 8 u 8 ; Inference: Reaction ; Answers: Gold The listener felt embarrassed by the statement of the speaker.",
"T5 The listener is shocked to hear the speaker's comment.",
"GLUCOSE-T5 The listener is disappointed with the speaker's decision.",
"A ( u 1 ) ( u 1 ) ( u 1 ) : I'm hungry, let's order up something to eat.",
"B ( u 2 ) ( u 2 ) ( u 2 ) : Ok, maybe we can order a soup and a salad from the restaurant down the street.",
"A ( u 3 ) ( u 3 ) ( u 3 ) : I was thinking of getting a hamburger, fries and a chocolate sundae.",
"B ( u 4 ) ( u 4 ) ( u 4 ) : You eat too much junk food.",
"That sort of stu clogs up your arteries and is very high in cholesterol.",
"A ( u 5 ) ( u 5 ) ( u 5 ) : Well I never seem to gain weight so I don't mind.",
"B ( u 6 ) ( u 6 ) ( u 6 ) : It's not only about getting fat or not, it's about being healthy.",
"You could really have some health problems later on.",
"A ( u 7 ) ( u 7 ) ( u 7 ) : How about pizza or maybe some fried chicken!",
"Better yet, let's order some hot dogs!",
"B ( u 8 ) ( u 8 ) ( u 8 ) : You are a lost cause.",
"to such performance improvement.",
"As depicted in Table 4 (and also Table 6), the non fine-tuned versions of T5 and GLUCOSE-T5 perform poorly as they produce gibberish outputs across all the five inference categories indicating the importance of fine-tuning on CICERO .",
"Results of Human Evaluation.",
"For each of the five inference types, we randomly sample 40 inferences generated by each model and their corresponding gold inferences.",
"These inferences are then manually rated by three independent annotators based on the human-evaluated metrics.",
"As suggested by Table 6, we observe that most of the fine-tuned models on CICERO perform similarly but fail to reach gold annotation performance.",
"Moreover, as expected, the fine-tuned models significantly outperform their non fine-tuned counterparts.",
"We provide some examples of the generated inferences in Table 7.",
"Inspection of the model generated inferences reveal that usage of keywords from the dialogue without generalizing the events is more frequent.",
"Generated inferences are significantly less diverse and creative than gold annotations.",
"Performance of GLUCOSE.",
"GLUCOSE contains contextual commonsense inferences on events in monologues.",
"Comparing the results (Table 4, Table 6) of fine-tuned and non fine-tuned checkpoints suggests that pre-training on a monologue-based contextual commonsense inference dataset does not ensure good performance on the same task for dialogues.",
"Akin to the non fine-tuned T5, non fine-tuned GLUCOSE-T5 produces gibberish outputs for all the commonsense inference types but the causal and motivation types.",
"We surmise this happens as these two commonsense types exist in the GLUCOSE dataset.",
"Although the generated text for these two commonsense inference types are grammatically correct and sometimes contain contextual words, they are far from the desired quality, semantically very much dissimilar from the annotated gold instances, and rated low in the qualitative evaluation, as shown in Table 6.",
"We also confirm the e cacy of fine-tuning the models on CICERO through human evaluation, as explained in 4.",
"Evaluation Metrics.",
"1) RoBERTa and ELECTRA : The accuracy of selecting the correct answer is used to evaluate the performance of these models.",
"2) T5 and Unified QA : The output is considered as a single answer if it doesn't contain any separator token.",
"Otherwise, the output is segmented at separator tokens to obtain multiple answers.",
"We then follow the method in Khashabi et al. (2020), where match is computed by comparing each of the generated answer(s) with the candidate choices based on their token-level overlap.",
"For each generated answer, the most similar candidate choice is considered as the corresponding output.",
"The prediction is considered as correct if the final output(s) is an exact match (EM) with the gold annotated answer(s).",
"Single Answer Selection (2.1).",
"We report the results of this setting in Table 8. The reported metric is accuracy of selecting the correct answer.",
"The overall score is 83.28% for RoBERTa and 86.82% for ELECTRA .",
"ELECTRA has an edge over RoBERTa on all the five inference types.",
"This could be a side e ect of using RoBERTa as the backbone model for the AF algorithm and subsequently as a solver for 5017 Model Cause SE Prereq.",
"the final CICEROMCQ task.",
"We think, this results expose the model dependency of the AF process.",
"In other words, the negative samples chosen by the backbone model X for the AF algorithm will be dif-ficult to distinguish from the human-annotated true samples using the same model X .",
"These negative samples, however, could be relatively easier to identify using another model Y .",
"The seq2seq models T5 and Unified QA perform significantly better than RoBERTa and ELECTRA as can be seen in Table 8. While models like RoBERTa, ELECTRA encode each candidate answer separately, T5 and Unified QA encode them together.",
"Thanks to this joint encoding of candidate answers, T5 and Unified QA can take advantage of more task-related information that RoBERTa and ELECTRA might miss due to the separate encoding scheme.",
"We surmise it could be one of the reasons why the seq2seq models have an edge over RoBERTa and ELECTRA for this particular task.",
"T5 and Unified QA attain almost the same score for single answer selection.",
"This is surprising as Unified QA is initialized from the T5-large checkpoint and then further trained on other QA datasets.",
"As such, we think, the di erent fine-tuned domains of Unified QA does not help in the CICEROMCQ task.",
"All Answers Selection (2.2).",
"We train and evaluate T5 and Unified QA on the entire dataset of both single and multiple correct answers and report the results in Table 9. Overall, T5 and Unified QA perform similarly.",
"The general performance, across the models, on instances with multiple correct answers is much worse than instances with a single correct answer.",
"We confirm this by reporting the results only on instances with multiple answers in Table 9, where T5 and Unified QA achieve only 3.38% and 3.60% exact match, respectively.",
"This could probably be attributed to the stark data imbalance of 86 / 14% between singleand multi-answer instances, respectively (see Table 3).",
"Commonsense knowledge has received more attention compared with factual knowledge, as it is usually not mentioned explicitly in the context.",
"It is demonstrated to be essential in open-ended generation tasks, such as story explanation generation (Mostafazadeh et al., 2020), story end generation (Guan et al., 2019) and abductive reasoning (Bhagavatula et al., 2019).",
"To infuse commonsense knowledge in NLP models, several approaches to tasks like sentence ordering (Ghosal et al., 2021b), emotion recognition (Ghosal et al., 2020), story generation (Guan et al., 2020; Xu et al., 2020) and dialogue generation (Zhou et al., 2018) use prevalent commonsense knowledge bases (CSKB) like ConceptNet (Speer et al., 2017) or ATOMIC (Sap et al., 2019).",
"However, ConceptNet is context-free, meaning that they only capture relationships around a selected set of entities, without paying attention to the context where the entity occurs.",
"Moreover, inference is often needed in discourse level, which do not always align with the entities in knowledge bases.",
"Knowledge models such as COMET (Bosselut et al., 2019) is a way to circumvent this issue and make inferences on an utterance (sentence) level.",
"But the generated knowledge still lacks the detail from the dialogue, as it is trained on the aforementioned knowledge base.",
"Our approach, instead, centers on the dialogue dataset and provides more detailed commonsense inference at an utterance level.",
"We introduced CICERO , a new dataset for dialogue reasoning with contextualized commonsense inference.",
"It contains 53K inferences for five commonsense dimensions cause, subsequent event, prerequisite, motivation, and emotional reaction collected from 5.6K dialogues.",
"To show the usefulness of CICERO for dialogue reasoning, we design several challenging generative and multi-choice answer selection tasks for state-of-the-art NLP models to solve.",
"This work is supported by the A*STAR under its RIE 2020 AME programmatic grant RGAST2003 and project T2MOE2008 awarded by Singapore's MoE under its Tier-2 grant scheme.",
"The annotators for CICERO were hired through a data annotation service.",
"The compensation was derived based on the country of residence of the annotators, as deemed by the company.",
"The study has been categorized as exempt by the IRB.",
"Annotators were strictly asked not to write any toxic content (hateful or o ensive toward any gender, race, sex, religion).",
"They were asked to consider gender-neutral settings in dialogues whenever pos-sible.The source dialogue datasets DailyDialog, MuTual, and DREAM are high quality multi-turn dialogue datasets manually annotated by experts in dialogue, communication theory and linguistics.",
"All three datasets have been extensively used and studied in the natural language processing literature.",
"The three source datasets and our annotations in CICERO do not contain any personal data or any information that can uniquely identify individual people or groups."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method"
] |
[
"ERASER : A Benchmark to Evaluate Rationalized NLP Models Jay DeYoung , Sarthak Jain , Nazneen Fatema Rajani , Eric Lehman , Caiming Xiong , Richard Socher , and Byron C. Wallace Equal contribution.",
"Abstract State-of-the-art models in NLP are now predominantly based on deep neural networks that are opaque in terms of how they come to make predictions.",
"This limitation has increased interest in designing more interpretable deep models for NLP that reveal the reasoning' behind model outputs.",
"But work in this direction has been conducted on different datasets and tasks with correspondingly unique aims and metrics; this makes it difficult to track progress.",
"We propose the E valuating R ationales A nd S imple E nglish R easoning ( ERASER ) benchmark to advance research on interpretable models in NLP.",
"This benchmark comprises multiple datasets and tasks for which human annotations of rationales (sup-porting evidence) have been collected.",
"We propose several metrics that aim to capture how well the rationales provided by models align with human rationales, and also how faithful these rationales are (i.e., the degree to which provided rationales influenced the corresponding predictions).",
"Our hope is that releasing this benchmark facilitates progress on designing more interpretable NLP systems.",
"The benchmark, code, and documentation are available at https://www.eraserbenchmark.com/ 1 Introduction Interest has recently grown in designing NLP systems that can reveal why models make specific predictions.",
"But work in this direction has been conducted on different datasets and using different metrics to quantify performance; this has made it difficult to compare methods and track progress.",
"We aim to address this issue by releasing a standardized benchmark of datasets repurposed and augmented from pre-existing corpora, spanning a range of NLP tasks and associated metrics for measuring different properties of rationales.",
"We refer to this as the E valuating R ationales A nd S imple E nglish R easoning ( ERASER ) benchmark.",
"In curating and releasing ERASER we take inspiration from the stickiness of the GLUE (Wang et al., 2019b) and SuperGLUE (Wang et al., 2019a) benchmarks for evaluating progress in natural language understanding tasks, which have driven rapid progress on models for general language representation learning.",
"We believe the still somewhat nascent subfield of interpretable NLP stands to ben-efit similarly from an analogous collection of standardized datasets and tasks; we hope these will aid the design of standardized metrics to measure different properties of interpretability', and we propose a set of such metrics as a starting point.",
"Interpretability is a broad topic with many possible realizations (Doshi-Velez and Kim, 2017; Lipton, 2016).",
"In ERASER we focus specifically on rationales , i.e., snippets that support outputs.",
"All datasets in ERASER include such rationales, explicitly marked by human annotators.",
"By definition, rationales should be sufficient to make predictions, but they may not be comprehensive .",
"Therefore, for some datasets, we have also collected comprehensive rationales (in which all evidence supporting an output has been marked) on test instances.",
"The quality' of extracted rationales will depend on their intended use.",
"Therefore, we propose an initial set of metrics to evaluate rationales that are meant to measure different varieties of inter-pretability'.",
"Broadly, this includes measures of agreement with human-provided rationales, and assessments of faithfulness .",
"The latter aim to capture the extent to which rationales provided by a model in fact informed its predictions.",
"We believe these provide a reasonable start, but view the problem of designing metrics for evaluating rationales especially for measuring faithfulness as a topic for further research that ERASER can facilitate.",
"And while we will provide a leaderboard', this is better viewed as a results board'; we do not privilege any one metric.",
"Instead, ERASER permits comparison between models that provide rationales with respect to different criteria of interest.",
"We implement baseline models and report their performance across the corpora in ERASER.",
"We find that no single off-the-shelf' architecture is readily adaptable to datasets with very different instance lengths and associated rationale snippets (Section 3).",
"This highlights a need for new models that can consume potentially lengthy inputs and adaptively provide rationales at a task-appropriate level of granularity.",
"ERASER provides a resource to develop such models.",
"In sum, we introduce the ERASER benchmark ( www.eraserbenchmark.com ), a unified set of diverse NLP datasets (these are repurposed and augmented from existing corpora, 1 including sentiment analysis, Natural Language Inference, and QA tasks, among others) in a standardized format featuring human rationales for decisions, along with starter code and tools, baseline models, and standardized (initial) metrics for rationales.",
"Interpretability in NLP is a large, fast-growing area; we do not attempt to provide a comprehensive overview here.",
"Instead we focus on directions particularly relevant to ERASER, i.e., prior work on models that provide rationales for their predictions.",
"1 We ask users of the benchmark to cite all original papers, and provide a BibTeX entry for doing so on the website.",
"rationales (marked by humans) are provided during training.",
"However, such direct supervision will not always be available, motivating work on methods that can explain (or rationalize) model predictions using only instance-level supervision.",
"In the context of modern neural models for text classification, one might use variants of attention (Bahdanau et al., 2015) to extract rationales.",
"Attention mechanisms learn to assign soft weights to (usually contextualized) token representations, and so one can extract highly weighted tokens as rationales.",
"However, attention weights do not in general provide faithful explanations for predictions (Jain and Wallace, 2019; Serrano and Smith, 2019; Wiegreffe and Pinter, 2019; Zhong et al., 2019; Pruthi et al., 2020; Brunner et al., 2020; Moradi et al., 2019; Vashishth et al., 2019).",
"This likely owes to encoders entangling inputs, complicating the interpretation of attention weights on inputs over contextualized representations of the same.",
"2 By contrast, hard attention mechanisms discretely extract snippets from the input to pass to the classifier, by construction providing faithful explanations.",
"Recent work has proposed hard attention mechanisms as a means of providing explanations.",
"Lei et al. (2016) proposed instantiating two models with their own parameters; one to extract rationales, and one that consumes these to make a prediction.",
"They trained these models jointly via REINFORCE (Williams, 1992) style optimization.",
"Recently, Jain et al. (2020) proposed a variant of this two-model setup that uses heuristic feature scores to derive pseudo-labels on tokens comprising rationales; one model can then be used to perform hard extraction in this way, while a second (independent) model can make predictions on the basis of these.",
"Elsewhere, Chang et al. (2019) introduced the notion of classwise rationales that explains support for different output classes using a game theoretic framework.",
"Finally, other recent work has proposed using a differentiable binary mask over inputs, which also avoids recourse to REINFORCE (Bastings et al., 2019).",
"Post-hoc explanation .",
"Another strand of interpretability work considers post-hoc explanation methods, which seek to explain why a model made a specific prediction for a given input.",
"Commonly 2 Interestingly, Zhong et al. (2019) find that attention sometimes provides plausible but not faithful rationales.",
"Elsewhere, Pruthi et al. (2020) show that one can easily learn to deceive via attention weights.",
"These findings highlight that one should be mindful of the criteria one wants rationales to fulfill.",
"these take the form of token-level importance scores.",
"Gradient-based explanations are a standard example (Sundararajan et al., 2017; Smilkov et al., 2017).",
"These enjoy a clear semantics (describing how perturbing inputs locally affects outputs), but may nonetheless exhibit counterintuitive behaviors (Feng et al., 2018).",
"Gradients of course assume model differentiability.",
"Other methods do not require any model properties.",
"Examples include LIME (Ribeiro et al., 2016) and Alvarez-Melis and Jaakkola (2017); these methods approximate model behavior locally by having it repeatedly make predictions over perturbed inputs and fitting a simple, explainable model over the outputs.",
"Acquiring rationales .",
"Aside from interpretability considerations, collecting rationales from annotators may afford greater efficiency in terms of model performance realized given a fixed amount of annotator effort (Zaidan and Eisner, 2008).",
"In particular, recent work by McDonnell et al. (2017, 2016) has observed that at least for some tasks, asking annotators to provide rationales justifying their categorizations does not impose much additional effort.",
"Combining rationale annotation with active learning (Settles, 2012) is another promising direction (Wallace et al., 2010; Sharma et al., 2015).",
"Learning from rationales .",
"Work on learning from rationales marked by annotators for text classification dates back over a decade (Zaidan et al., 2007).",
"Earlier efforts proposed extending standard discriminative models like Support Vector Machines (SVMs) with regularization terms that penalized parameter estimates which disagreed with provided rationales (Zaidan et al., 2007; Small et al., 2011).",
"Other efforts have attempted to specify generative models of rationales (Zaidan and Eisner, 2008).",
"More recent work has aimed to exploit rationales in training neural text classifiers.",
"Zhang et al. (2016) proposed a rationale-augmented Convolutional Neural Network (CNN) for text classification, explicitly trained to identify sentences supporting categorizations.",
"Strout et al. (2019) showed that providing this model with rationales during training yields predicted rationales that are preferred by humans (compared to rationales produced without explicit supervision).",
"Other work has proposed pipeline' approaches in which independent models are trained to perform rationale extraction and classification on the basis of these, respectively (Lehman et al., 2019; Chen et al., 2019), assuming Name Size (train/dev/test) Tokens Comp?",
"Rajani et al. (2019) fine-tuned a Transformer-based language model (Radford et al., 2018) on free-text rationales provided by humans, with an objective of generating open-ended explanations to improve performance on downstream tasks.",
"Evaluating rationales .",
"Work on evaluating rationales has often compared these to human judgments (Strout et al., 2019; Doshi-Velez and Kim, 2017), or elicited other human evaluations of explanations (Ribeiro et al., 2016; Lundberg and Lee, 2017; Nguyen, 2018).",
"There has also been work on visual evaluations of saliency maps (Li et al., 2016; Ding et al., 2017; Sundararajan et al., 2017).",
"Measuring agreement between extracted and human rationales (or collecting subjective assessments of them) assesses the plausibility of rationales, but such approaches do not establish whether the model actually relied on these particular rationales to make a prediction.",
"We refer to rationales that correspond to the inputs most relied upon to come to a disposition as faithful .",
"Most automatic evaluations of faithfulness measure the impact of perturbing or erasing words or tokens identified as important on model output (Ar-ras et al., 2017; Montavon et al., 2017; Serrano and Smith, 2019; Samek et al., 2016; Jain and Wallace, 2019).",
"We build upon these methods in Section 4.",
"Finally, we note that a recent article urges the community to evaluate faithfulness on a continuous scale of acceptability, rather than viewing this as a binary proposition (Jacovi and Goldberg, 2020).",
"For all datasets in ERASER we distribute both reference labels and rationales marked by humans as supporting these in a standardized format.",
"We delineate train, validation, and test splits for all corpora (see Appendix A for processing details).",
"We ensure that these splits comprise disjoint sets of source documents to avoid contamination.",
"3 We have made the decision to distribute the test sets publicly, 4 in part because we do not view the cor-rect' metrics to use as settled.",
"We plan to acquire additional human annotations on held-out portions of some of the included corpora so as to offer hidden test set evaluation opportunities in the future.",
"Evidence inference (Lehman et al., 2019).",
"A dataset of full-text articles describing randomized controlled trials (RCTs).",
"The task is to infer whether a given intervention is reported to either significantly increase , significantly decrease , or have no significant effect on a specified outcome , as compared to a comparator of interest.",
"Rationales have been marked as supporting these inferences.",
"As the original annotations are not necessarily exhaustive, we collected exhaustive rationale annotations on a subset of the validation and test data.",
"5 BoolQ (Clark et al., 2019).",
"This corpus consists of passages selected from Wikipedia, and yes/no questions generated from these passages.",
"As the original Wikipedia article versions used were not maintained, we have made a best-effort attempt to recover these, and then find within them the passages answering the corresponding questions.",
"For public release, we acquired comprehensive annotations on a subset of documents in our test set.",
"5 Movie Reviews (Zaidan and Eisner, 2008).",
"Includes positive/negative sentiment labels on movie reviews.",
"Original rationale annotations were not necessarily comprehensive; we thus collected comprehensive rationales on the final two folds of the original dataset (Pang and Lee, 2004).",
"5 In contrast to most other datasets, the rationale annotations here are span level as opposed to sentence level.",
"FEVER (Thorne et al., 2018).",
"Short for Fact Extraction and VERification; entails verifying claims from textual sources.",
"Specifically, each claim is to be classified as supported , refuted or not enough information with reference to a collection of source 3 Except for BoolQ, wherein source documents in the original train and validation set were not disjoint and we preserve this structure in our dataset.",
"Questions , of course, are disjoint.",
"4 Consequently, for datasets that have been part of previous benchmarks with other aims (namely, GLUE/superGLUE) but which we have re-purposed for work on rationales in ERASER, e.g., BoolQ (Clark et al., 2019), we have carved out for release test sets from the original validation sets.",
"5 Annotation details are in Appendix B. texts.",
"We take a subset of this dataset, including only supported and refuted claims.",
"MultiRC (Khashabi et al., 2018).",
"A reading comprehension dataset composed of questions with multiple correct answers that by construction depend on information from multiple sentences.",
"Here each rationale is associated with a question, while answers are independent of one another.",
"We convert each rationale/question/answer triplet into an instance within our dataset.",
"Each answer candidate then has a label of True or False .",
"Commonsense Explanations (CoS-E) (Rajani et al., 2019).",
"This corpus comprises multiple-choice questions and answers from (Talmor et al., 2019) along with supporting rationales.",
"The rationales in this case come in the form both of highlighted (extracted) supporting snippets and free-text, open-ended descriptions of reasoning.",
"Given our focus on extractive rationales, ERASER includes only the former for now.",
"Following Talmor et al. (2019), we repartition the training and validation sets to provide a canonical test split.",
"e-SNLI (Camburu et al., 2018).",
"This dataset augments the SNLI corpus (Bowman et al., 2015) with rationales marked in the premise and/or hypothesis (and natural language explanations, which we do not use).",
"For entailment pairs, annotators were required to highlight at least one word in the premise.",
"For contradiction pairs, annotators had to highlight at least one word in both the premise and the hypothesis; for neutral pairs, they were only allowed to highlight words in the hypothesis.",
"Human Agreement We report human agreement over extracted rationales for multiple annotators and documents in Table",
"2. All datasets have a high Cohen (Cohen, 1960); with substantial or better agreement.",
"In ERASER models are evaluated both for their predictive performance and with respect to the rationales that they extract.",
"For the former, we rely on the established metrics for the respective tasks.",
"Here we describe the metrics we propose to evaluate the quality of extracted rationales.",
"We do not claim that these are necessarily the best metrics for evaluating rationales, however.",
"Indeed, we hope the release of ERASER will spur additional research into how best to measure the quality of model explanations in the context of NLP.",
"The simplest means of evaluating extracted rationales is to measure how well they agree with those marked by humans.",
"We consider two classes of metrics, appropriate for models that perform discrete and soft' selection, respectively.",
"For the discrete case, measuring exact matches between predicted and reference rationales is likely too harsh.",
"6 We thus consider more relaxed measures.",
"These include Intersection-Over-Union (IOU), borrowed from computer vision (Evering-ham et al., 2010), which permits credit assignment for partial matches.",
"We define IOU on a token level: for two spans, it is the size of the overlap of the tokens they cover divided by the size of their union.",
"We count a prediction as a match if it overlaps with any of the ground truth rationales by more than some threshold (here, 0.5).",
"We use these partial matches to calculate an F1 score.",
"We also measure token -level precision and recall, and use these to derive token-level F1 scores.",
"Metrics for continuous or soft token scoring models consider token rankings, rewarding models for assigning higher scores to marked tokens.",
"In particular, we take the Area Under the Precision-Recall curve (AUPRC) constructed by sweeping a threshold over token scores.",
"We define additional metrics for soft scoring models below.",
"In general, the rationales we have for tasks are sufficient to make judgments, but not necessarily comprehensive .",
"However, for some datasets we have explicitly collected comprehensive rationales for at least a subset of the test set.",
"Therefore, on these datasets recall evaluates comprehensiveness directly (it does so only noisily on other datasets).",
"As discussed above, a model may provide rationales that are plausible (agreeable to humans) but that it did not rely on for its output.",
"In many settings one may want rationales that actually explain model predictions, i.e., rationales extracted for an instance in this case ought to have meaningfully influenced its prediction for the same.",
"We call these faithful rationales.",
"How best to measure rationale faithfulness is an open question.",
"In this first version of ERASER we propose simple metrics motivated by prior work (Zaidan et al., 2007; Yu et al., 2019).",
"In particular, following Yu et al. (2019) we define metrics intended to measure the comprehensiveness (were all features needed to make a prediction se-lected?) and sufficiency (do the extracted rationales contain enough signal to come to a disposition?) of rationales, respectively.",
"Comprehensiveness .",
"To calculate rationale comprehensiveness we create contrast examples (Zaidan et al., 2007): We construct a contrast example for x i , x i , which is x i with the predicted rationales r i removed.",
"Assuming a classification setting, let m ( x i ) j be the original prediction provided by a model m for the predicted class j .",
"Then we consider the predicted probability from the model for the same class once the supporting rationales are stripped.",
"Intuitively, the model ought to be less confident in its prediction once rationales are removed from x i .",
"We can measure this as: comprehensiveness = m ( x i ) j m ( x i / r i ) j (1) A high score here implies that the rationales were indeed influential in the prediction, while a low score suggests that they were not.",
"A negative value Where do you find the most amount of leafs?",
"here means that the model became more confident in its prediction after the rationales were removed; this would seem counter-intuitive if the rationales were indeed the reason for its prediction.",
"Sufficiency .",
"This captures the degree to which the snippets within the extracted rationales are adequate for a model to make a prediction.",
"These metrics are illustrated in Figure",
"2. As defined, the above measures have assumed discrete rationales r i .",
"We would also like to evaluate the faithfulness of continuous importance scores assigned to tokens by models.",
"Here we adopt a simple approach for this.",
"We convert soft scores over features s i provided by a model into discrete rationales r i by taking the top k d values, where k d is a threshold for dataset d .",
"We set k d to the average rationale length provided by humans for dataset d (see Table 4).",
"Intuitively, this says: How much does the model prediction change if we remove a number of tokens equal to what humans use (on average for this dataset) in order of the importance scores assigned to these by the model.",
"Once we have discretized the soft scores into rationales in this way, we compute the faithfulness scores as per Equations 1 and",
"2. This approach is conceptually simple.",
"It is also computationally cheap to evaluate, in contrast to measures that require per-token measurements, e.g., importance score correlations with leave-one-out' scores (Jain and Wallace, 2019), or counting how many important' tokens need to be erased before a prediction flips (Serrano and Smith, 2019).",
"However, the necessity of discretizing continuous scores forces us to pick a particular threshold k .",
"We can also consider the behavior of these measures as a function of k , inspired by the measurements proposed in Samek et al. (2016) in the context of evaluating saliency maps for image classification.",
"They suggested ranking pixel regions by importance and then measuring the change in output as they are removed in rank order.",
"Our datasets comprise documents and rationales with quite different lengths; to make this measure comparable across datasets, we construct bins designating the number of tokens to be deleted.",
"Denoting the tokens up to and including bin k for instance i by r ik , we define an aggregate comprehensiveness measure: 1 B + 1 ( B k = 0 m ( x i ) j m ( x i / r ik ) j ) (3) This is defined for sufficiency analogously.",
"Here we group tokens into k = 5 bins by grouping them into the top 1%, 5%, 10%, 20% and 50% of tokens, with respect to the corresponding importance score.",
"We refer to these metrics as Area Over the Perturbation Curve (AOPC).",
"7 These AOPC sufficiency and comprehensiveness measures score a particular token ordering under a model.",
"As a point of reference, we also report these when random scores are assigned to tokens.",
"7 Our AOPC metrics are similar in concept to ROAR (Hooker et al., 2019) except that we re-use an existing model as opposed to retraining for each fraction.",
"Our focus in this work is primarily on the ERASER benchmark itself, rather than on any particular model(s).",
"But to establish a starting point for future work, we evaluate several baseline models across the corpora in ERASER.",
"8 We broadly classify these into models that assign soft' (continuous) scores to tokens, and those that perform a hard' (discrete) selection over inputs.",
"We additionally consider models specifically designed to select individual tokens (and very short sequences) as rationales, as compared to longer snippets.",
"All of our implementations are in PyTorch (Paszke et al., 2019) and are available in the ERASER repository.",
"9 All datasets in ERASER comprise inputs, rationales, and labels.",
"But they differ considerably in document and rationale lengths (Table A).",
"This motivated use of different models for datasets, appropriate to their sizes and rationale granularities.",
"We hope that this benchmark motivates design of models that provide rationales that can flexibly adapt to varying input lengths and expected rationale granularities.",
"Indeed, only with such models can we perform comparisons across all datasets.",
"Models that perform hard selection may be viewed as comprising two independent modules: an encoder which is responsible for extracting snippets of inputs, and a decoder that makes a prediction based only on the text provided by the encoder.",
"We consider two variants of such models.",
"Lei et al. (2016) .",
"In this model, an encoder induces a binary mask over inputs x , z .",
"The decoder accepts the tokens in x unmasked by z to make a prediction y .",
"These modules are trained jointly via REINFORCE (Williams, 1992) style estimation, minimizing the loss over expected binary vectors z yielded from the encoder.",
"One of the advantages of this approach is that it need not have access to marked rationales; it can learn to rationalize on the basis of instance labels alone.",
"However, given that we do have rationales in the training data, we experiment with a variant in which we train the encoder explicitly using rationale-level annotations.",
"In our implementation of Lei et al. (2016), we drop in two independent BERT (Devlin et al., 2019) or GloVe (Pennington et al., 2014) base modules 8 This is not intended to be comprehensive.",
"with bidirectional LSTMs (Hochreiter and Schmid-huber, 1997) on top to induce contextualized representations of tokens for the encoder and decoder, respectively.",
"The encoder generates a scalar (de-noting the probability of selecting that token) for each LSTM hidden state using a feedfoward layer and sigmoid.",
"In the variant using human rationales during training, we minimize cross entropy loss over rationale predictions.",
"The final loss is then a composite of classification loss, regularizers on rationales (Lei et al., 2016), and loss over rationale predictions, when available.",
"Pipeline models .",
"These are simple models in which we first train the encoder to extract rationales, and then train the decoder to perform prediction using only rationales.",
"No parameters are shared between the two models.",
"Here we first consider a simple pipeline that first segments inputs into sentences.",
"It passes these, one at a time, through a Gated Recurrent Unit (GRU) (Cho et al., 2014), to yield hidden representations that we compose via an attentive decoding layer (Bahdanau et al., 2015).",
"This aggregate representation is then passed to a classification module which predicts whether the corresponding sentence is a rationale (or not).",
"A second model, using effectively the same architecture but parameterized independently, consumes the outputs (rationales) from the first to make predictions.",
"This simple model is described at length in prior work (Lehman et al., 2019).",
"We further consider a BERT-to-BERT' pipeline, where we replace each stage with a BERT module for prediction (Devlin et al., 2019).",
"In pipeline models, we train each stage independently.",
"The rationale identification stage is trained using approximate sentence boundaries from our source annotations, with randomly sampled negative examples at each epoch.",
"The classification stage uses the same positive rationales as the identification stage, a type of teacher forcing (Williams and Zipser, 1989) (details in Appendix C).",
"We consider a model that passes tokens through BERT (Devlin et al., 2019) to induce contextualized representations that are then passed to a bidirectional LSTM (Hochreiter and Schmidhuber, 1997).",
"The hidden representations from the LSTM are collapsed into a single vector using additive attention (Bahdanau et al., 2015).",
"The LSTM layer allows us to bypass the 512 word limit imposed by Perf.",
"BERT; when we exceed this, we effectively start encoding a new' sequence (setting the positional index to 0) via BERT.",
"The hope is that the LSTM learns to compensate for this.",
"Evidence Inference and BoolQ comprise very long ( > 1000 token) inputs; we were unable to run BERT over these.",
"We instead resorted to swapping GloVe 300d embeddings (Pennington et al., 2014) in place of BERT representations for tokens.",
"spans.",
"To soft score features we consider: Simple gradients, attention induced over contextualized representations, and LIME (Ribeiro et al., 2016).",
"Here we present initial results for the baseline models discussed in Section 5, with respect to the metrics proposed in Section 4.",
"We present results in two parts, reflecting the two classes of rationales discussed above: Hard' approaches that perform discrete selection of snippets, and soft' methods that assign continuous importance scores to tokens.",
"In Table 3 we evaluate models that perform discrete selection of rationales.",
"We view these as inherently faithful, because by construction we know which snippets the decoder used to make a prediction.",
"10 Therefore, for these methods we report only metrics that measure agreement with human annotations.",
"10 This assumes independent encoders and decoders.",
"Due to computational constraints, we were unable to run our BERT-based implementation of Lei et al. (2016) over larger corpora.",
"Conversely, the simple pipeline of Lehman et al. (2019) assumes a setting in which rationale are sentences, and so is not appropriate for datasets in which rationales tend to comprise only very short spans.",
"Again, in our view this highlights the need for models that can rationalize at varying levels of granularity, depending on what is appropriate.",
"We observe that for the rationalizing model of Lei et al. (2016), exploiting rationale-level supervision often (though not always) improves agreement with human-provided rationales, as in prior work (Zhang et al., 2016; Strout et al., 2019).",
"Interestingly, this does not seem strongly correlated with predictive performance.",
"Lei et al. (2016) outperforms the simple pipeline model when using a BERT encoder.",
"Further, Lei et al. (2016) outperforms the BERT-to-BERT' pipeline on the comparable datasets for the final prediction tasks.",
"This may be an artifact of the amount of text each model can select: BERT-to-BERT' is limited to sentences, while Lei et al. (2016) can select any subset of the text.",
"Designing extraction models that learn to adaptively select contiguous rationales of appropriate length for a given task seems a potentially promising direction.",
"In Table 4 we report metrics for models that assign continuous importance scores to individual tokens.",
"For these models we again measure downstream (task) performance (macro F1 or ac-curacy).",
"Here the models are actually the same, and so downstream performance is equivalent.",
"To assess the quality of token scores with respect to human annotations, we report the Area Under the Precision Recall Curve (AUPRC).",
"These scoring functions assign only soft scores to inputs (and may still use all inputs to come to a particular prediction), so we report the metrics intended to measure faithfulness defined above: comprehensiveness and sufficiency, averaged over bins' of tokens ordered by importance scores.",
"To provide a point of reference for these metrics which depend on the underlying model we report results when rationales are randomly selected (averaged over 10 runs).",
"Both simple gradient and LIME-based scoring yield more comprehensive rationales than attention weights, consistent with prior work (Jain and Wallace, 2019; Serrano and Smith, 2019).",
"Attention fares better in terms of AUPRC suggesting better agreement with human rationales which is also in line with prior findings that it may provide plausible, but not faithful, explanation (Zhong et al., 2019).",
"Interestingly, LIME does particularly well across these tasks in terms of faithfulness.",
"From the Random' results that we conclude models with overall poor performance on their final tasks tend to have an overall poor ordering, with marginal differences in comprehensiveness and sufficiency between them.",
"For models that with high sufficiency scores: Movies, FEVER, CoS-E, and e-SNLI, we find that random removal is particularly damaging to performance, indicating poor absolute ranking; whereas those with high comprehensiveness are sensitive to rationale length.",
"We have introduced a new publicly available resource: the Evaluating Rationales And Simple English Reasoning (ERASER) benchmark.",
"This comprises seven datasets, all of which include both instance level labels and corresponding supporting snippets (rationales') marked by human annotators.",
"We have augmented many of these datasets with additional annotations, and converted them into a standard format comprising inputs, rationales, and outputs.",
"ERASER is intended to facilitate progress on explainable models for NLP.",
"We proposed several metrics intended to measure the quality of rationales extracted by models, both in terms of agreement with human annotations, and in terms of faithfulness'.",
"We believe these metrics provide reasonable means of comparison of specific aspects of interpretability, but we view the problem of measuring faithfulness, in particular, a topic ripe for additional research (which ERASER can facilitate).",
"Our hope is that ERASER enables future work on designing more interpretable NLP models, and comparing their relative strengths across a variety of tasks, datasets, and desired criteria.",
"It also serves as an ideal starting point for several future directions such as better evaluation metrics for interpretability, causal analysis of NLP models and datasets of rationales in other languages.",
"This work was supported in part by the NSF (CA-REER award 1750978), and by the Army Research Office (W911NF1810328)."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"other",
"abstain",
"objective",
"abstain",
"method",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"result",
"abstain",
"method",
"result",
"abstain",
"abstain",
"objective",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"objective",
"method",
"method",
"abstain",
"other"
] |
[
"We release FOOLMETWICE ( FM 2 for short), a large dataset of challenging entailment pairs collected through a fun multi-player game.",
"Gamification encourages adversarial examples, drastically lowering the number of examples that can be solved using shortcuts compared to other entailment datasets.",
"Players are presented with two tasks.",
"The first task asks the player to write a plausible claim based on the evidence from a Wikipedia page.",
"The second one shows two plausible claims written by other players, one of which is false, and the goal is to identify it before the time runs out.",
"Players pay to see clues retrieved from the evidence pool: the more evidence the player needs, the harder the claim.",
"Game-play between motivated players leads to diverse strategies for crafting claims, such as temporal inference and diverting to unrelated evidence, and results in higher quality data for the entailment and evidence retrieval tasks.",
"We open source the dataset and game code.",
"1 1 Introducing a Game of Challenging Claims Given a statementand a large collection of textual knowledgehow do you find evidence that shows a reader that the statement is true or false?",
"This problem takes on multiple forms in the natural language processing ( NLP ) community.",
"Given only a single statement and a single sentence, this decision process is called recognizing textual entailment (Dagan et al., 2010, RTE ) or natural language inference (Bowman et al., 2015; Williams et al., 2018, NLI ).",
"Given a single statement and a vast pool of possible evidence (e.g., all of Wikipedia), this problem is called verification (Thorne et al., 2018; Jiang et al., 2020).",
"Stage 1: Players write claims to fool others \"Venus's craters are difficult to measure due to erosion.\"",
"Stage 2: Players mark evidence that entails/refutes the claim On Earth it is caused by wind and rain erosion.",
"On Venus, about 85% of the craters are in pristine condition.",
"Venusian craters range from 3 to 280 km.",
"Stage 3: Players spot the refuted claim \"Snoop Dogg portrayed Moses in a rap battle\" \"Venus's craters are difficult to measure due to erosion.\"",
"Stage 4: Players get points Players that correctly spotted the refuted claim and authors of challenging claims are rewarded 100 Stage 5: Automatically build dataset Keep highest quality claims with their selected evidence FM2 R epea t Figure 1: Overview of the data generation pipeline.",
"We review existing resources for the latter task in Section 2 and how they have spawned a vibrant subcommunity around related tasks.",
"However, these datasets fail to challenge modern NLP models such as BERT (Devlin et al., 2019) or T5 (Raffel et al., 2020) that achieve super-human performance despite also exhibiting annotation artifacts that hurt their generalization potential (Gururangan et al., 2018; Tsuchiya, 2018).",
"Our goal is twofold: (1) to build a new, challenging dataset (statistics for FOOLMETWICE in Table 1) that tests models' abil-Claims Entailed Pages Avg.",
"# Tokens Proportion Claim Evidence Train 10,419 49.2% 1,811 15 30 Dev 1,169 51.0% 209 15 31 Test 1,380 49.4% 234 15 31 Total 12,968 49.4% 2,254 15 30 Table 1: Statistics of the FOOLMETWICE dataset.",
"ity to retrieve evidence and verify claims and (2) to show that engineering the incentive structure of data collection experiments can produce more accurate and realistic outcomes.",
"This dataset lends itself to automatic training and it characterizes what factual errors humans can most easily detect and which are most likely to fool them (Section 3.2).",
"This is analogous to the creation of unsupported or refuted claims in the wild, which are not random, but evolve as part of an information arms race (Rid, 2020).",
"Unlike previous datasets that rely on crowd-sourcing, we develop an online game to create a platform where motivated authors can create plausible sounding facts that other users must debunk.",
"Not only does this create more realistic claims the best must withstand human scrutinyit also creates a way to better evaluate the evidence that support or refute claims.",
"As we surface the evidence, humans use that evidence to decide which claims are true or false; these signals can further improve our systems (Figure 1).",
"We apply baseline models for retrieval and classification to our dataset (Section 4) and examine how their ability to detect wrong statements differs from humans' (Section 5).",
"Entailment is a key task in natural language understanding.",
"Dagan et al. (2010) describe it as an AI -complete task: solve it, and you can solve all of artificial intelligence.",
"Typically, entailment is presented as: given a premise (Brooklyn is the most populous of New York City's boroughs), decide whether a hypothesis (Manhattan has more residents than Brooklyn) is entailedsupported by the premise.",
"Even simple examples show the promise (and complexity) of this task.",
"To recognize that this hypothesis is contradicted, a model must: know that Manhattan is a borough of New York, recognize that X is the most populous bor-ough entails X has more residents than any other borough, and correctly combine this knowledge to recognize the contradiction.",
"Despite the promise of entailment, it has not been a silver bullet for the NLP community to solve artificial intelligence.",
"One possible explanation is highlighted by a line of work that shows existing entailment datasets have artifacts.",
"Poliak et al. (2018) show entailment can often be solved by looking only at the hypothesis, while Feng et al. (2019) show that artifacts can infect the premise as well.",
"This is especially common in the biggest datasets for NLI such as SNLI and MNLI (Gururangan et al., 2018).",
"While there are algorithmic solutions to addressing these issues (Utama et al., 2020), many have turned to building better datasets.",
"Both Bowman et al. (2020) and Vania et al. (2020) propose alternative methods for collecting entailment pairs from crowdworkers and measure success via improvements in other general tasks via transfer learning.",
"While the proposed methods prove to be ineffective for that goal, we view NLI is as an important end task in itself (e.g., for misinformation, QA , dialogue, generation evalu-ation).",
"Hence, we argue that constructing challenging entailment datasets is useful beyond just transfer learning.",
"Like this paper, Nie et al. (2020) focus on adversarial entailment, but their authors only see a single piece of evidence.",
"We expand this human-in-the-loop adversarial setting to include the essential retrieval component of fact verification.",
"Thus, authors have more strategies on hand; in addition to creating challenging examples through paraphrasing, they can make it difficult to find relevant information in the first place or distract with related but distinctinformation.",
"This is exactly the setting of a recent shared task, FEVER (Thorne et al., 2018, Fact Extraction and VERification), which creates a more general entailment setting: given a claim, find relevant evidence from Wikipedia, and determine whether the evidence has enough information to either support or refute the claim.",
"This generalizes the entailment problem to a large, broadly accepted set of premises (all sentences in Wikipedia) and adds an additional retrieval step to find relevant evidence.",
"FEVER has obvious connections to problems in education, journalism, and information science.",
"Thus, it has caught the attention of a subcommunity focused on building systems for FEVER shared tasks.",
"Despite this excitement, Schuster et al. (2019) show that FEVER has many of the same issues as entailment datasets.",
"FEVER has broad or nonsensical claims (Table 2) and many of the claims are generated from the very first line of source Wikipedia documents.",
"This is not just an artifact of crowd-sourcing; a more fundamental problem is that there is no clear definition of what makes a good FEVER example.",
"To date, adversarial FEVER example generation uses automatic rules to increase their difficulty (Thorne et al., 2019).",
"To address these identified weaknesses, Sections 3.1 and 3.2 define a game where the claim writers have a clear objective of fooling other human players.",
"Creating datasets through a fun interactive design is often called gamification .",
"Ipeirotis and Gabrilovich (2014) focus on multiple choice question answering in technical domains such as medicine and rely on redundancy and calibration questions to generate new knowledge.",
"The ESP game (von Ahn and Dabbish, 2004) asks users to write labels for an image that agree as much as possible with other players' labels.",
"tasks players to twist and bend protein structures, often besting computer algorithms and driving biological innovations (Khatib et al., 2011).",
"Crucially, these games are either individual or cooperative; in contrast, FOOLMETWICE exploits the adversarial nature of players fooling each other.",
"FOOLMETWICE most closely resembles Balderdash , a board game where players guess which definition of a word is legitimate that is used in information literacy courses (Hays and Hayse, 2017).",
"In all cases, the intrinsic motivation driven by these games can lead to better outcomes and fewer attempts to game the system (Kuznetsov, 2006; Yang and Lai, 2010).",
"Thus our approach constitutes a viable alternative to traditional isolated labelling tasks in crowd-sourcing platforms, where tying payment to completing tasks sometimes hurts final results (Gneezy and Rustichini, 2000).",
"This section outlines the two phases of the game: authoring claims (Section 3.1) and voting on those claims (Section 3.2).",
"While these sections present the game in its final form, this is the reflection of an iterative process.",
"We first began with a paper version (Nielsen, 1989) of the game, which showed that a time constraint made the game feel more fun and encouraged people to not read individual pieces of evidence too intently.",
"Without the timer, people tried to look for tiny clues in text that probably were not there (Wilkinson et al., 2012).",
"We then moved onto a version of the game presented via slides where we experimented with design choices such as the number of claims players distinguish between, and the number of evidence sentences they see while doing that.",
"Examples of the final web interface are shown in Appendix B. 3.1 Crafting Challenging Claims Our goal is to create a computer game that produces human-authored, interesting, challenging claims paired with evidence that either supports or refutes each claim.",
"One prerequisite for this is that claims avoid high lexical overlap with the knowledge corpus.",
"We thus need to encourage authors to craft claims that cannot be trivially matched to evidence.",
"While this approach has been used for question answering (Wallace et al., 2019; Bartolo et al., 2020), which has a similar retrieval step, to our knowledge it has not been applied to entailment or FEVER .",
"We recruit users employed at Google, all pro-ficient in English, to play-test the game.",
"At the beginning of each round, we ask each user to generate a true or false statement.",
"We randomly choose a Wikipedia page as a knowledge source and ask them to highlight one or two evidence spans that support (or refute) their claim.",
"They are instructed to write statements that would likely fool other players trying to determine the claim's veracity quickly and/or without looking at the evidence that support the claim.",
"The reward system defined in the next section is built to be aligned with this objective.",
"To help authors write hard claims, not entirely similar to the evidence, we show the user what evidence a TF-IDF retrieval system would select from the source and highlight the words that help IR systems select evidence.",
"This implicitly encourages them to craft the claims in a manner such that overlap with the evidence is low (Section 3.2).",
"We include screenshots of the user interface and more details about our design choices in the appendix.",
"Because the players see evidence selected by our retrieval systems, difficult claims for players are also challenging for computers.",
"See Table 3 for a comparison on highly predictive bigrams between FEVER and FOOLMETWICE (details about how these are computed are in the appendix).",
"In the game's second phase, players select the incorrect statement from claims written by other players (Table 4).",
"To separate these two phases of the game, we refer to players in this phase of the game as voters .",
"If a voter can correctly answer quickly (e.g., through their own world knowledge or artifacts), they get up to 120 points, the maximum possible.",
"3 The author and voter split the points: any points the 3 Each voting task should take at maximum two minutes, and each point corresponds to a second.",
"voter leaves on the table go to the author.",
"Challenging claims reward the author with more points but easy ones let the voter increase their total.",
"We do not want to keep claims that are easy to identify as true or false.",
"If the average player can tell through artifacts or common sense that a claim will not be supported, it is uninteresting as an entailment example.",
"For example, if someone sees the claim Tipper Gore was born in 1048 and remembers that Al Gore was the vice president of the United States in the twentieth century, they can identify that this claim is false.",
"We also want claims that require the voters to carefully read evidence from Wikipedia (Table 4).",
"Voters can ask for hints provided by our evidence selection system (Sec-tion 4.1).",
"For each piece of evidence shown, the number of points available to the voters decreases, and points decrease as time progresses as well.",
"All possible outcomes provide useful information: correct and incorrect choices, with and without evidence.",
"As mentioned before, if voters spot the wrong statement unaided, the claim has underlying issues.",
"When a voter can spot the wrong claim with the help of a particular piece of evidence, then this is a clue that the evidence (and the mechanism that selected it) is useful.",
"This allows us to specifically optimize for evidence that helps players better answer questions.",
"When voters go from confused to confident about the correct answer, that is a signal that the evidence was effective.",
"When voters select an incorrect answer, that is a signal that the evidence was not effective (or, indeed, misleading).",
"When voters need more time and evidence and are almost fooled (i.e., nearly think a true statement is incorrect), this is a sign that the statement is challenging for the humancomputer team seeking to verify entailment.",
"The statement must be convincingly written, consistent with voter's world knowledge, and also consistent with the evidence players see.",
"Our game setting helps create conditions where these tricky examples can be crafted.",
"We use two heuristics to ensure quality claims.",
"First, we search for easy examples that were consistently solved without inspecting the evidence however, we were not able to find any.",
"Next, we search for examples which are too difficult by computing a maximum a posteriori estimate of the Bernoulli distribution of correct and incorrect votes for each claim.",
"The prior distribution matches the overall accuracy of the dataset (80% of votes are correct) and is equivalent to adding five pseudo-counts (one wrong, four correct) for each question.",
"We use this smoothed estimate rather than the maximum likelihood estimate to account for claims lacking votes.",
"The expected value of that posterior given a Beta (4 , 1) prior is (Liu et al., 2012): Beta (4 , 1) | C i Beta (cid:32) 4 + (cid:88) i C i , 1 + (cid:88) i (1 C i ) (cid:33) , where i sums over the votes, and C i is one if the vote was correct and zero otherwise.",
"We analyze all twenty-five claims below a 0 .",
"5 threshold and identified three incorrect examples which we subsequently removed.",
"Players earn points in two ways: either spotting incorrect claims by voting as early as possible or authoring challenging claims.",
"They alternate between the two roles in every game session.",
"These two rewards are in opposition to each other.",
"Because the goal of the voters is to find the claim that is incorrect, claim authors (of either entailed or refuted claims) only get points when voters are not fooled and when the voters need evidence.",
"The total points are split between the voter and authors when the voter correctly guesses, making this a zero-sum game.",
"As a voter requests evidence or takes more time, a larger fraction of the total points will go to authors.",
"Thus, authors are encouraged to write difficult claims; voters are encouraged to select claims correctly.",
"When a voter guesses incorrectly, they get no points, to ensure the examples are valid.",
"While incorrect guesses can happen for impossible claims, writing claims that are merely difficult is a better strategy since easy claims that may be spotted quickly are awarded no points.",
"4 In addition to humans voting on claims, we also ask users which of the two claims they like more, independent of voters' accuracy.",
"People like true claims (0.39) more than false claims (0.35, t = 2 . 53 , p = 0 . 01 ), except for claims about science and technology, where people prefer false claims (0.46) more than true claims (0.32, t = 2 . 50 , p = 0 . 02 ).",
"Authors get points when voters like their claims; this additional incentive encourages authors to create interesting and surprising examples.",
"4 We also allow players to flag obscene, incorrect, or otherwise problematic claims.",
"Each of the instances in FOOLMETWICE is a tuple ( c, e, l ) : a natural language claim c , evidence e from a knowledge corpus K (in our case Wikipedia), and a binary label l (entailment / con-tradiction).",
"5 From this we define two sub-tasks, following Thorne et al. (2018).",
"The first sub-task, retrieval, requires systems to select candidate evidence from K (including, perhaps, the gold evidence e ).",
"The second sub-task is entailment, where systems given claim c and the gold evidence e need to make a final prediction for the label l .",
"We also consider an end-to-end setting.",
"Instead of the gold evidence, systems only have access to the retrieved evidence e at test time.",
"In the rest of this section we define baseline models for each of the sub-tasks.",
"Our setting resembles the retrieval setting in the KILT benchmark (Petroni et al., 2021), but the results are evaluated at the evidence level as opposed to the page level, to represent a more realistic use case.",
"The evidence corpus can be found online 6 and consists of twenty-two million text passages, each having a length of a hundred words, from five million pages of the English Wikipedia image from August 2019.",
"We align gold FOOLMETWICE evidence to this knowledge source by selecting the passage with highest overlap with each evidence sentence, according to the modified n -gram precision component of the BLEU (Papineni et al., 2002).",
"We remove 1598 examples 7 where the precision was less than 0 .",
"5 .",
"We evaluate two baselines.",
"The first one follows Chen et al. (2017) and uses a TF-IDF retrieval model with unigrams and bigrams and 2 20 hash buckets.",
"The title of page is added to the passage content for additional context.",
"The second baseline uses Dense Passage Retrieval (Karpukhin et al., 2020, DPR ), using the same fixed pre-trained passage embeddings and query encoder as the ones used in Petroni et al. (2021).",
"For the second component of the task, we follow state-of-the-art entailment models (Zhou et al., 2019; Liu et al., 2020; Eisenschlos et al., 2020):",
"5 Unlike FEVER , we do not allow authors to write claims that lack enough information.",
"6 http://github.com/facebookresearch/KILT/ 7 This happens because FOOLMETWICE was constructed from a more recent version of Wikipedia than KILT .",
"given the concatenated gold evidence and claim, a BERT -base model (Devlin et al., 2019) outputs a binary entailment / contradiction label.",
"For end-to-end label accuracy, we use the same models but test only retrieved (rather than gold) passages.",
"During training we include both the gold and the top two retrieved passages.",
"This section studies the performance of existing automatic methods on FM 2 for both the retrieval of evidence (Section 5.1) and for entailment once the results are retrieved (Section 5.2).",
"Retrieving evidence for FOOLMETWICE is considerably harder (Table 5); we also include comparable results on FEVER .",
"The documents retrieved by DPR are consistently better than the ones by a TF-IDF system for both of the datasets we tested, which is consistent with other work on dense text retrieval (Guu et al., 2020).",
"This section presents the results of training a BERT (Devlin et al., 2019) model for the entailment task of FOOLMETWICE .",
"Given a claim and the gold evidence, does the evidence support or refute the claim?",
"To compare with FEVER , we discard all not enough evidence examples, because the lack of evidence for this class makes it trivial to classify correctly.",
"Following Gururangan et al. (2018), we first train a claim-only classifier, which ignores the evidence text.",
"FOOLMETWICE examples are harder to classify without looking at the evidence (Ta-ble 6), indicating that the claims contain fewer Dataset Claim-Only EASY HARD ALLFOOLMETWICE 61.9 86.1 66.4 78.1 FEVER 79.1 97.1 79.3 93.3 Table 6: Comparison of dev accuracy between FEVER and FOOLMETWICE for different partitions of the data and when using only claims.",
"give away artifacts compared to FEVER as already suggested by Table",
"3. We provide additional discussion in Appendix C. Like the techniques proposed by Clark et al. (2019), the claim-only classifier can also be used on both FOOLMETWICE and FEVER to split the dev sets into easy and hard partitions: The EASY partition contains all examples correctly clas-sified by a claim-only classifier, and the HARD partition has everything else.",
"The similar accuracy of the FOOLMETWICE dev and HARD FEVER dev partitions further suggests that FOOLMETWICE is comparable to the harder and higher-quality subset of FEVER (Table 6).",
"We also train an end-to-end verification model that, rather than taking evidence as given, must use noisy passages from a retrieval system (Sec-tion 4.1).",
"At train time, we generate multiple training instances for each claim using either the gold evidence or the top two retrieved examples.",
"At prediction time, we average the logit scores of each of the topk retrieved passages (Table 7).",
"We include a so-called oracle setting for a fair comparison of the improvement margin.",
"This number differs from Table 6 in that it uses a single gold 100 word passage as evidence instead of short sentences, which might introduce noise.",
"While the previous section focuses on how well automatic methods can detect false claims, this section focuses on human ability.",
"Voters are usually right and were fooled 20 .",
"40% of the time.",
"This section addresses how players are fooled and how this compares to computers.",
"To provide a better picture of the strategies players use to craft challenging claims, we manually sample fifty instances from the development set that both models and humans answer incorrectly.",
"We focus on these examples because they are the most difficult and are the emphasis of our adversarial technique.",
"Two claims were mislabeled and two more lacked a necessary evidence span.",
"Table 8 shows examples of each of the strategies, which we discuss in more detail in this section.",
"Temporal Many of the most challenging claims require an inference about time: whether one event happened before another, how long an event happened, or whether an event happened during a period.",
"While many of these are based on years, centuries, or other explicit markers of time, some authors use narrative time.",
"For example, the page for the novel As I Lay Dying describes the plot in order, so it's difficult for either a system or a human given sentences (without knowing where they appear in the original page) to know when Addie Bundren dies.",
"This shows some of the limitations of the setup: not only must voters reason across multiple pieces of evidence, this reasoning is only possible if they know the order in the underlying evidence.",
"Other markers of time include the pilot for the first episode of The Office; readers must realize that if Kelly Kapoor was introduced in the episode Diversity Day , that implies Mindy Kaling's character did not appear in the pilot.",
"Reasoning A related, but more general, strategy requires the reader to reason: mathematically, applying definitions, or understanding hyponomy.",
"For example, knowing that the child of your cousin is your second cousin or recognizing that This mirrors the Disney Parks East regional division consisting of Shanghai Disney Resort, Hong Kong Disneyland and Walt Disney Attractions Japan. . . implies that there are more than two Walt Disney resorts outside of the United States.",
"Paraphrase A well-known strategy to confuse entailment systems is to change words so that there are fewer exact matches.",
"Some of these are straightforward: Titration is used when doctors test how much sugar is in a patient's liquid waste is almost a direct paraphrase of glucose in urine may indicate diabetes in a patient.",
"Other paraphrases are more poetic: Charles Evans Hughes shuffled off this mortal coil in Massachusetts, and then was taken to New York to be submerged in soil paraphrasing Hughes died in what is now the Tiffany Cottage of the Wianno Club in Osterville, Massachusetts. He is interred at Woodlawn Cemetery in the Bronx, New York City.",
"These paraphrases are realistic, similar to how humans might restate facts to make them more accessible or more interesting to a reader.",
"Diversion An interesting strategy to fool the retrieval phase of FEVER systems is to create claims that point to specific text but not the text that refutes or supports the claim .",
"For example, Following his retirement from the MLB , Prince Hal became a top executive of a company retrieves information about how Hal Newhouser earned the nickname Prince Hal and his later business investments but not his post-baseball career in banking.",
"Controversy A more fundamental issue with entailment systems is that even trusted sources such as Wikipedia contain contradictory evidence.",
"This is most prominent with interpretations of works of fiction, where there are multiple theories about the same work.",
"A skillfully written claim can retrieve one viewpoint while using an opposing viewpoint as the gold evidence.",
"For example, one claim strongly took the position that the end of the film Inception was a dream.",
"Voters saw evidence to the contrary and thought the claim was refuted.",
"Because systems focus on the highest scoring retrieved passages (as do the human voters), this lead both humans and computers to overlook the disputed interpretations.",
"The amount of evidence a human needs is a unique metric of how difficult a claim is for humans (al-though incremental evidence is recommended for question answering systems in Boyd-Graber and Brschinger (2020), to the best of our knowledge it has not been applied to entailment or validation).",
"The claims that most challenge humans typically use diversion (e.g., The Quiet Man was a song by Bing Crosby about a soldier who lost his voice from a bomb in World War 2), which is particularly challenging for retrieval systems.",
"Other common strategies for the claims most challenging for humans were paraphrase , which can hide the relevant evidence and prevent retrieval, and reasoning , which often requires multiple pieces of evidence to reach a conclusion.",
"While this paper seeks to advance the ability of humans and computers to support or refute statements entailed from a static, reliable source, the goal of examining arbitrary statements remains elusive.",
"By construction, we have focused on statements that are incorrect because of factual errors.",
"Other datasets that use human-sourced obfuscations or deception are more nuanced and use framing or shading (Pan and Kosicki, 1993), which models trained on this dataset cannot detect.",
"Our goal is to focus on clear facts that can be recognized by computers, which is already challenging enough.",
"Further improving verification likely requires creating targeted datasets that focus on specific strategies for creating statements that are refuted by evidence, perhaps selecting different explanations for particular users (Feng and Boyd-Graber, 2019).",
"Likewise, a more complicated task likely requires more nuanced incentives and instructions for authors.",
"However, this dataset provides a foundation to build these richer, more challenging datasets for entailment.",
"As our work involves human participants, all players provided informed consent and no personally identifiable information ( PII ) was collected or will be released.",
"The collected data have been vetted for presence of PII as well as offensive language through heuristics and random sampling.",
"Some participants received fair compensation in the United States in exchange for playing the game, but that compensation was not tied to speed or accuracy to prevent distorting the motivation of players.",
"Intrinsic motivation, such as curiosity, competitiveness, creative drive and fun, rather than extrinsic motivation has been shown to produce higher quality results (Gneezy and Rustichini, 2000).",
"The released data and the experiments we conducted are in English, therefore we do not claim generalization of our findings across languages.",
"However, we believe that the proposed methods could be applied in other languages using other available corpora as a source of evidence.",
"First and foremost, we would like to specially thank Connie Tao for her guidance and assitance in managing the project.",
"The project would also have been impossible without the FM 2 players.",
"We also would want to thank Thomas Mller, William Cohen, Dipanjan Das, Slav Petrov, Pedro Rodriguez, Massimiliano Ciaramita, and Christian Buck for comments on the drafts and testing the game.",
"We also thank the anonymous reviewers for their time, constructive feedback, useful comments and suggestions about this work.",
"Boyd-Graber is supported by NSF Grant IIS-1822494."
] | [
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other"
] |
[
"When evaluating an article and the claims it makes, a critical reader must be able to assess where the information presented comes from, and whether the various claims are mutually consistent and support the conclusion.",
"This motivates the study of claim provenance , which seeks to trace and explain the origins of claims.",
"In this paper, we introduce new techniques to model and reason about the provenance of multiple interacting claims, including how to capture fine-grained information about the context.",
"Our solution hinges on first identifying the sentences that potentially contain important external information.",
"We then develop a query generator with our novel rank-aware cross attention mechanism, which aims at generating metadata for the source article, based on the context and signals collected from a search engine.",
"This establishes relevant search queries, and it allows us to obtain source article candidates for each identified sentence and propose an ILP based algorithm to infer the best sources.",
"We experiment with a newly created evaluation dataset 1 , Politi-Prov, based on fact-checking articles from www.politifa ct.com ; our experimental results show that our solution leads to a significant improvement over baselines.",
"Misinformation is on the rise, and people are fight-ing it with fact checking.",
"However, most of the work in the current literature (Thorne et al., 2018; Zhang et al., 2019; Barron-Cedeno et al., 2020; Hidey et al., 2020) focuses on automating fact-checking for a single claim.",
"In reality, a claim can be complex, and proposed as a conclusion of an article.",
"Therefore, understanding what information supports the article , especially information 1 The data and the code will be available at http://co gcomp.org/page/publication view/944 Figure 1: An example of a claim (in the red box) with its article.",
"that was not originated within the same article, and where it originates from , are very important for readers who want to determine whether they can believe the claim.",
"Figure 1 shows an example of such a claim, Marco Rubio says Anthony Fauci lies about masks. Fauci didn't. 2 with its article from politifact.com .",
"A critical reader of the content will find that several major sources support the author's claim: Source article 1 in the figure is CBS News,60 Minutes interview with Anthony Fauci, on March 8, 2020 , which reveals that Dr. Fauci's main point was to preserve masks for those who were already ill and people providing care.",
"If readers can validate all sources used in the article, they will be able to determine whether the article is trustworthy.",
"In this paper, our goal is to automatically find these sources for a given article.",
"This is a different problem from fact-checking: Fact-checking seeks evidence for a claim, while here we only care about the information sources the authors 2 https://www.politifact.com/factcheck s/2020/dec/28/marco-rubio/marco-rubio-says-anthony-fauci-lied-about-masks-fa/ used when they were writing.",
"Furthermore, the problem we address is critical also to authors who want to give credit to those who have contributed to their article, and it enables a recursive analysis that can trace back to the starting points of an article.",
"This motivates the study of provenance for natural language claims, which describes where a specific claim may have come from and how it has spread.",
"Early work (Zhang et al., 2020) proposed a formulation to model, and a solution to infer, the provenance graph for the given claim.",
"However, that model is insufficient to capture the provenance of an article , because (1) an article consists of multiple claims, and it leverages information from other sources, therefore the provenance of all claims should be included in the article's provenance; (2) the inference solution they proposed can only extract domain-level provenance information, e.g., cbsnews.com , while it can not directly link the claim to its source article , e.g., https://www.cbsnews.com/news/preventing-coronavirus-facemask-60-minutes-2020-03-08/ .",
"Such fine-grained provenance information is important because it can help people understand the original context that influenced the information they read.",
"Therefore, in this work, we argue that the notion of a provenance graph should be extended to incorporate provenance for articles, and that we need a more comprehensive solution that can identify important external information used in the article and infer its corresponding source article: namely, its fine-grained provenance information.",
"Technically, capturing fine-grained provenance for an article is challenging because (1) there may be large numbers of sentences in an article, and not all are from external sources nor important (thus, their provenance may not be worth considering); (2) a sentence in an article is usually just a textual fragment of its source article, and simply looking for other articles with related content may result in low precision with regards to finding the correct original article.",
"In our running example, sentence2 in Figure 1 is On March 29, President Donald Trump and the coronavirus task force briefed the press on steps underway to increase ... , whose source is White House's coronavirus task force press briefing on March 29, 2020 .",
"If we directly search for the sentence on the web, it is hard to find this among popular articles from the news.",
"Instead, we need a model that can generate better keywords for a more focused search.",
"The key contributions of this paper are (1) we introduce and formalize the problem of inferring fine-grained provenance for an article; (2) we propose a general framework to infer the source articles that have provided important information for the given article, including",
"(a) a ranking module that can identify sentences that contain important external information based on the main topic and the main entities in the article;",
"(b) a query generator that can generate possible metadata for the source article, e.g., the title, the published date, the source website, based on the context of the selected sentences;",
"(c) an integer linear program (ILP) based algorithm to jointly identify the source articles from all of the candidates.",
"(3) to evaluate our solutions, we collect a new dataset Politi-Prov from politifact.com , and our experimental results show that the solution we proposed can lead to a significant improvement compared with baselines.",
"Given an article d , we are to capture its fine-grained provenance, by inferring k source articles SA k ( d ) that provide the most important information for d .",
"We adopt the notion of provenance from (Zhang et al., 2020), while in this paper, we focus on inferring provenance for a claim based on the information from the given article.",
"To find SA k ( d ) , there are three subproblems we need to solve.",
"First, we need to locate the important external information in d , which means we need a sentence ranking module that can estimate a score i for each sentence in d = { s i } ni =1 , based on how likely s i contains external information.",
"Then we will choose topk sentences based on their score, and try to find source articles for those sentences.",
"Second, for each selected sentence, we need to generate a list of candidate links, which can be its source articles.",
"To achieve this goal, we take advantage of a search engine, based on which we can access all of the articles on the web.",
"As we have discussed in Section 1, directly searching the identified sentence on a search engine may result in a low precision of finding the correct source article.",
"Therefore, we propose to develop a query generator to generate the possible metadata of the target source article as new search keywords, so that the search engine is more likely to recall source articles.",
"We then collect all of the search results as the candidates for a selected sentence.",
"Finally, we need to infer the correct source article from the candidates, for each identified sentence.",
"Figure 2 depicts the three steps we need to conduct to infer the fine-grained provenance, which correspond to the three subproblems listed above.",
"We will elaborate the details of each step in Section 4.",
"To the best of our knowledge, there is no existing dataset that can support inferring fine-grained provenance for an article, therefore we create a new dataset based on the fact-checks from politifact",
".com to support the training and the evaluation of this problem.",
"Specifically, we crawled all of the fact-check questions from politifact.com on 4 different issues: Coronavirus, Health Care, Immigration, Taxes in September, 2020.",
"For each question, we further crawled its webpage to obtain (1) the title, which is actually the fact-check question itself, (2) the sections of the main text and (3) the Our Sources section listing all of the articles (including urls) that provide important information mentioned in the fact-check article.",
"Figure 3 shows an example of such a section.",
"as the given article, and the source articles listed in the section of Our Sources as the ground truth our system wants to return.",
"We want to note it is possible that there may be some sources missing in the ground truth we can obtain, therefore, we focus more on the recall in the evaluation.",
"Overall, we collected data from 1765 articles, where we use 883 of them for training, and 441 and 441 for validation and testing respectively.",
"On average, each article has 9.8 source articles.",
"In this section, we will elaborate how we solve the problems proposed in Section",
"2. 4.1 Sentence Ranking Given an article, the first step is to identify the sentences that are most likely to contain important external information.",
"To develop a general data-driven solution, rather than design a ranking function by domain-specific feature engineering, we take advantage of the hyperlinks inserted in the article, so that we can find where the source articles are mentioned.",
"The hyperlink is helpful here because it is standard for the author to provide external information on related topics to the reader.",
"If the hyperlink refers one at the listed source articles, it means the sentence is the one that we are looking for.",
"Then our problem is to learn a model that can distinguish those sentences from the regular ones in the article.",
"Specifically, we first extract all of the hyperlinks with their corresponding sentences in the given article d , and denote the output as Hp ( d ) = { ( l, s ) | s d } , where l represents the link of the article and s represents the sentence.",
"Then, we create a list of positive sentences for d denoted as P ( d ) by finding the intersection between the articles in Hp ( d ) and those in SA k ( d ) , i.e., P ( d ) = { s | s d, ( l, s ) Hp ( d ) , s.t., l SA k ( d ) } .",
"Meanwhile, we create a list of negative sentences for d by randomly sampling from the rest of its sentences, denoted as N ( d ) .",
"When a new article is given, the job of the model turns out to estimate a score i of how likely each sentence s i in d refers to important external information.",
"Since the sentences referring to important external information are always either directly related to the main topic or about the main entities mentioned in the article, we will leverage them to build our model.",
"Denote the title of d as t d , and the most important entities mentioned in the article as E d .",
"Here, we simply use tf-idf to determine the importance of an entity to an article.",
"We build our model by leveraging Roberta (Liu et al., 2019).",
"Using the same notation in the paper, we concatenate t d and each e E d , feeding it to the model as sentence A, and s P ( d ) or N ( d ) as sentence B, as the input of Roberta.",
"We then use Roberta as a binary classification model, that is, we use its [CLS] vector as input to a two layer neural network to obtain the probability of s referring to important external information.",
"Instead of learning the features independently for each example, we want to help the model better capture the discriminative feature between the positive and negative examples.",
"Therefore, we add a margin ranking loss to the learning objective, so that it can enforce the model to distinguish the representations between positive and negative examples.",
"We start training from a pre-trained Roberta model and fine-tune it to our ranking task using the following loss, given s i P ( d ) and s j N ( d ) : L i,j = log i log (1 j ) + max (cid:0) 0 , ( s j ) ( s i ) + (cid:15) (cid:1) (1) where ( s i ) and ( s j ) are the representations, obtained by the output of a single layer neural network on top of the [CLS] vector of Roberta.",
"Identifying the sentences that are describing external information provides us with a clue to finding the source articles.",
"The next step is to find candidate articles that can be the source articles based on the identified sentences.",
"However, as we have described in Section 1, it is hard to find the source article by directly searching the sentence on the web, since so many articles may be talking about the related information.",
"Therefore, we argue that besides using the sentence as the query, we need a query generator that can generate a better query for searching, so that it can increase the possibility that we can recall the correct source article.",
"To generate a query that can improve the recall, the question here is what search keywords are good for finding the source articles besides the identified sentences themselves?",
"In this work, we argue that the metadata of the target article, including its source domain, title and published date is a good choice.",
"Since most of those information may be revealed in the sentence or its context, it is possible that we train a model where we can feed the context of the sentence, and generate a combination of the possible source domain, title and published date of the article it refers to.",
"In our running example in Figure 1, the sentence identified (sentence 2 in the figure) is ... On March 29, President ... .",
"The source domain of the article it refers to (source article 2 in the figure) is white house , the title of the article is coronavirus task force press briefing , and the published date is March 29, 2020 .",
"It is obvious that most of those information has been somehow mentioned in the context or at least can be very easily associated with.",
"Therefore, we treat this problem as a text generation problem, where we feed the identified sentence with its context, and try to generate its metadata.",
"As a baseline, we train this model via fine-tuning BART (Lewis et al., 2020), a pre-trained text generation model.",
"Besides the metadata to generate, the content of the identified sentence itself should be useful for searching, when there is an overlap between the sentence and the content of the target article.",
"In this case, if we search for the identified sentence on a search engine, the results returned can be related articles, and their metadata may provide additional useful information that can tell the model what should be included in the target output.",
"In our running example mentioned in the last section, if we search that sentence on Google, one result it returned is cspan 's article President Trump with Coronavirus Task Press Briefing , which has been very close to the title of the target article.",
"Therefore, our generation model should leverage those signals, which consist of metadata of related articles to the target article.",
"To incorporate the signals, we first issue the identified sentence as a query to the search engine and collect its top5 returned urls.",
"Then, as what we do to the identified sentence, we crawl its metadata, i.e., the source domain , title , and published date , and put them together as one document.",
"Then, our problem becomes to generating the metadata of the source article, when we are given the identified sentence, its context, and a concatenation of possible metadata outputs.",
"In this case, we actually have two types of inputs for the model.",
"One is the identified sentence with its context, where we are to infer the metadata from, and the other one is the concatenation of possible outputs, where we want to extract the correct metadata components directly from.",
"To solve this problem, we extend the BART baseline to incorporate two sources of inputs, by first feeding the text inputs independently to the BART's encoders, then concatenating the outputs of the encoders together, and finally feeding the unified representations to the BART's decoder.",
"We collect multiple possible metadata for each source article, so that the integration can help us generate better keywords for the search.",
"However, treating the multiple possible metadata as a single document neglects the rank of the urls returned, which reflects the different possibility for each candidate to be the right metadata.",
"Therefore, we propose a rank-aware multi-head cross-attention to relieve this problem.",
"The basic idea is when BART's decoders are performing cross-attention over the text input of the sentences and the possible metadata, we require that each set of attention heads (Vaswani et al., 2017) derives different attention scores based on different metadata.",
"Concretely, each set of attention heads will explicitly pay attention to different parts of the input corresponding to different pieces of metadata, and neglect the others.",
"Therefore, after training, each set of attention heads can be used to project the input embeddings into different representation subspaces but focusing on a specific set of candidate metadata.",
"For example, we will have a set of attention heads do cross-attention only over the positions of the sentences and the meta-data from the first url, another set do it only over the positions of the sentences and the meta-data from the first and the second urls, and so on.",
"Note that the candidate metadata from the urls ranked higher will always receive more attention than the others in this case.",
"Figure 4 summarizes our final design of the generation model.",
"Given the identified sentence and the query keywords generated, we can search for them on a search engine and collect a set of links that are the candidates of the source articles.",
"The next problem is to infer the correct ones from them.",
"Based on our observations, the author is very likely to leverage the external information coming from the same source websites.",
"In our running example introduced in Section 1, the author cited 8 articles in total, and among those articles, two of them come from whitehouse.gov and another two come from politicfact.com , which are actually two claims they have done fact-check before.",
"Besides the sources, the titles of the articles are also very likely to be related.",
"In the same example, some of them are all talking about the interviews done by Anthony Fauci at different time, and some of them are talking about the white house's Coronavirus Task Force in Press Briefing .",
"Therefore, we propose an algorithmic inference framework that can take advantage of those relations between the source articles to determine the correct source articles of identified sentences jointly.",
"We formulate the inference as an Integer Linear Program (ILP) (Roth and tau Yih, 2004; Cheng and Roth, 2013), that allows us to jointly determine the best candidate for each identified sentence.",
"Formally, we introduce two types of Boolean variables: x ki , which represents if the k th candidate is the source article of the i th sentence, and z klij , which represents if the source article of the i th sentence and the source article of the j th sentence are related, which means either they come from related source websites or provide related content.",
"To infer the value of the Boolean variables, our objective is to assign the best candidate to each identified sentence that can (1) maximize the overall relatedness of the source articles to the query document, and (2) maximize the relatedness between the source articles.",
"To compute the relatedness, we introduce w ki , which represents the relatedness score of the candidate article to the identified sentence, klij , which represents the similarity score between the representations of the source domain of the i th article's k th candidate and the source domain of the j th article's l th candidate, and klij , which represents the similarity score between the representations of the title of the i th article's k th candidate and the source domain of the j th article's l th candidate.",
"Then, the optimization goal to find the best assignments d of candidates for the identified sentences is as follows: d = argmax (cid:88) i (cid:88) k ki x ki + (cid:88) i,j (cid:88) k,l (cid:0) klij + klij (cid:1) z klij (2) s.t. x ki { 0 , 1 } , z klij { 0 , 1 } i, (cid:88) k x ki = 1 2 z klij x ki + x lj (3) Here, (cid:80) k x ki = 1 means only one candidate will finally be chosen as the source article of the i th sentence, and 2 z klij x ki + x lj means only if the k th candidate of the i th sentence and the l th candidate of the j th sentence have been chosen, we need to consider the relations between them.",
"In our experiments, we use the last hidden layer of BERT-large (Devlin et al., 2019) as the representation for titles and source domains, and use cosine similarity to compute the similarity score.",
"The ILP problem is solved using an off-the-shelf high-performance package 3 .",
"research questions: RQ1 Can we correctly identify the sentences that refer to important external information in the given article?",
"RQ2 Given the identified sentences, can we generate the metadata of the target articles from the context?",
"RQ3 Given a list of candidates for each identified sentence in the article, can we assign the correct candidate to each identified sentence?",
"3 https://www.python-mip.com/ RQ4 Given the identified sentences, can we use the query we generated to find candidates, and successfully use them to improve the inference of source articles?",
"Among those questions, RQ1-RQ3 are to evaluate a specific component of our solution, and RQ4 is to evaluate the joint performance of candidate generation and source article inference.",
"In the following part, we will elaborate the answers to those questions, and for each question, we will start with describing its experimental setting, baselines and the metrics.",
"Setup We use Politi-Prov dataset introduced in Section",
"3. Concretely, we train and validate our models on the articles in the training and validation set, and try to predict the score of a sentence referring to a source article from the article belonging to the test set.",
"To compare the performance, we implement our solution (SR-TE) as described in Section 4.1, and compare it with (1) a retrieval baseline that simply computes the cosine similarity between the embedding vectors (using Roberta) of the title and the sentence in the article (SR).",
"This retrieval baseline only captures the relatedness between the sentence and the main topic of the article; (2) a retrieval baseline similar to SR, but computing the cosine similarity between the embedding vectors of the concatenation of the title and the most important entities (top-50) and the sentence in the article (SR-E), where we want to show the effect of considering important entities; (3) our learning solution without considering entities (SR-T).",
"We report the mean precision and recall of the topk results respectively.",
"The gaps between SR, SR-E, and SR-T, SR-TE show that considering important entities always results in an improvement on both precision and recall, which reveals that the sentences can not be identified based on their relatedness to the title (the main topic) only, but also requires other important information in the article.",
"Furthermore, the figure also shows that the learning method is significantly better than the retrieval baseline without a learning objective.",
"Setup We collect all of the sentences that correspond to the source articles in training, validation and test set of Politi-Prov serving as training, validation and testing respectively.",
"Overall, there are 5279 cases for training, 1847 for validation, and 1538 for testing.",
"For each case, the source input is the identified sentence with its context (two sentences which are before and after the sentence respectively), and the target output to generate is the metadata of the corresponding source article in a form of a concatenation of its source domain, title and published date.",
"To evaluate the performance, we report Rouge 1, Rouge 2 and Rouge L score of the text generated, and compare with the performance produced by (1) the original BART, (2) our solution integrating signals from Google (BART-S), and (3) our solution integrating signals from Google with our rank-aware multi-head cross attention (BART-SR).",
"Results We report the results in Table 1.",
"As shown in the table, we can observe that integrating the signals from a search engine can significantly improve the performance of generating the metadata, and considering the ranking of the search results can further lead to an improvement.",
"Setup To conduct an isolated evaluation of the ILP based inference, in this experiment, we generate the candidates for each identified sentence based on its metadata from the ground truth.",
"Concretely, we assume there is an oracle that can generate the metadata based on the context for each identified sentence, and we directly search the metadata on Google, and fetch its top-5 results returned as candidates for each identified sentence.",
"Then, our inference algorithm is to find the correct source article for each sentence from those candidates.",
"To evaluate the performance, we report the mean recall of source articles for each article, and compare it with results provided by the baselines, including (1) simply choosing the top-1 article from the results returned by directly searching the identified sentence on Google (SS1), (2) choosing the top-1 article from the results returned by searching the metadata on Google (MS1), (3) our proposed solution, which conducts ILP inference to find the source article from the search results returned by searching the metadata on Google (MS-ILP).",
"To have a better understanding of the performance, we also report two upper bounds.",
"The first one is the upper bound of the mean recall of the results by directly searching the identified sentence on Google (SS-UB), and the second one is the upper bound of the mean recall of the results by directly searching the meta-data on Google (MS-UB).",
"To compute the upper bounds, if one of the articles returned by Google is correct, then we consider the sentence is correctly assigned.",
"Actually, they are equivalent to the mean recall of the top5 results, since we only request Google for its top5 search results.",
"Results We report the performance in Figure 6.",
"In the figure, we can observe that the mean recall of SS1 is only 0.067, and even its upper bound SS-UB can only achieve 0.15, which reveals that directly searching the identified sentence on a search engine to find the source article is not feasible.",
"Using the metadata of the source article to search can improve the mean recall to around 0 .",
"3 , and considering the relatedness between the source articles by ILP can further improve it to around 0.37.",
"It demonstrates that the ILP inference is useful for capturing the relatedness between the source articles, and the result has been very close to the mean recall of its top5 results (MS-UB), which is the upper bound of the performance that the inference can achieve with searching by metadata.",
"Setup In this experiment, we issue the queries generated by the query generation module to Google, and fetched the top-5 results returned.",
"We combine these results with the top-5 links returned by searching the identified sentence directly, as the candidate pool for each identified sentence.",
"Then, we conduct ILP inference to assign the candidate to each sentence.",
"We report the mean recall of Figure 6: The performance of inferring source articles for each article, MS-ILP is our ILP based solution, and MS-UB is the best possible performance that can be achieved when the candidates are the top-5 results returned by searching for metadata on Google.",
"the source articles, varying k , which represents the number of the links we returned for each identified sentence.",
"Note that finding the topk assignments in ILP is actually relaxing the unique solution constraint in Eq 3 to be i, (cid:80) j x ji = k , which makes the problem require an additional significant amount of time to solve.",
"Therefore, here we greedily select the best assignment for each variable as an approximate topk solution.",
"Results As shown in Figure 7, we can observe when k = 3 , it has already beaten the performance of SS-UB reported in Figure 6, which reveals that the candidates found by the queries generated by our query generator are helpful.",
"When k = 5 , the mean recall can achieve around 0.21, which is much better than 0.15, the best performance achieved by searching the identified sentence directly.",
"However, as what we can observe in the figure, there is still a gap to the performance of MS-UB in Figure 6.",
"This may result from the in-sufficiency of the query generation, which implies that a better text generation model may be necessary to further improve the performance, which we think is an interesting topic for future work.",
"Our work builds on earlier work on Claim Provenance (see Section 2 for a discussion).",
"Beyond that, we discuss below additional related work.",
"Fact-checking Fact-checking is related to our problem, since there is usually a document retrieval step to find articles that may provide evidence in most of the solutions (Wang et al., 2018; Thorne et al., 2018; Nadeem et al., 2019).",
"Typically, the input of fact-checking is a single claim instead of an article, therefore it is hard to directly extend their solutions to our problem.",
"Even though fact-checking may find various evidentiary articles for the claim, the source articles we are looking for are those that have been used by the author, which is actually a specific subset of the articles that fact-checking targets to, and the size is also much smaller.",
"Furthermore, we try to extract the metadata of the source articles from the text to support a better search, which is not considered in the document retrieval step of fact-checking.",
"Recommending Citations Recommending citations for scholarly articles has similarities to our work.",
"The source articles we are looking for can be considered as the citations of the given news article that should be recommended.",
"However, the meaning of the reference is different in these two problems.",
"When recommending citations for a paper, the system is to look for previous works that are related to the arguments in the given paper.",
"The argument was created by the author, and the criteria of the recommendation is the relatedness.",
"While inferring provenance is to do reverse engineering to the given article, so that we can find the articles whose information or claims were actually used when the author was writing.",
"Technically, there are two types of citation recommendation systems (Bhagavatula et al., 2018).",
"One is called local (Huang et al., 2012, 2015), that is, a system takes a few sentences (and an optional placeholder for the candidate citation) as input and recommends citations based on the context of the input sentences.",
"Another one is called global (Kataria et al., 2010; Ren et al., 2014; Bhagavatula et al., 2018), that is, a system takes the entire article (and its meta-data which is optional) as input and recommends citations for the paper.",
"Our solution is more related to local recommendation systems, while we do not assume we can access all of the articles that can be cited and have a way to represent them to be vectors.",
"Therefore, we propose to learn a query generator, which is different with previous works.",
"Furthermore, we do joint inference for all of the identified sentences in the article, which is actually a global inference.",
"We propose new techniques to infer fine-grained provenance for an article that contains multiple claims; this is important for a critical reader to understand what information supports the article he/she is reading and what its origins are.",
"The inference consists of models that can identify the sentences that refer to important external information, generate the metadata that can make it more likely to recall the source articles using a search engine, and do an ILP inference to jointly determine the correct source articles from the candidates.",
"We create a new dataset, Politi-Prov, for this task, and our evaluation on it demonstrates the effectiveness of each component, and shows a big improvement compared with the baselines of finding source articles.",
"However, the problem has not been solved yet.",
"As shown in the analysis, a better text generation model would further improve the performance.",
"Furthermore, it has also been revealed in the experiments that the gold metadata can only recall only around 40% of the source articles, which actually becomes a bottleneck.",
"Therefore, it would be an interesting future work direction to explore what other information should be added to the query, besides the target metadata, so that we can recall more source articles.",
"Our dataset Politi-Prov is collected from www.poli tifact.com .",
"The executive director of PolitiFact, based at the Poynter Institute for Media Studies, granted us permission to use their data for this research and to make the new dataset available.",
"The collection process is automatic without additional manual work.",
"Our collection involves fact-check articles with sources in 4 topics, i.e., coronavirus, health care, immigration and taxes, which were written by the website's journalists.",
"The website seeks to present the true facts, unaffected by agenda or biases, but journalists set their own opinions aside as they work to uphold principles of independence and fairness.",
"Furthermore, the website emphasizes primary sources and original documentation when listing sources, for example direct access to government reports, academic studies and other data, rather than second-hand sources.",
"The authors would like to thank Aaron Sharock-man, the executive director of PolitiFact, for kindly granting access to data from the website for academic research.",
"This work is supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA Contract No. 2019-19051600006 under the BETTER Program and by a Google Focused Award."
] | [
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"other",
"abstain",
"result",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"other",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"method",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"method",
"objective",
"method",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Recognizing lexical semantic relations between word pairs is an important task for many applications of natural language processing.",
"One of the mainstream approaches to this task is to exploit the lexico-syntactic paths connecting two target words, which reflect the semantic relations of word pairs.",
"However, this method requires that the considered words co-occur in a sentence.",
"This requirement is hardly satisfied because of Zipf's law, which states that most content words occur very rarely.",
"In this paper, we propose novel methods with a neural model of P ( path | w 1 , w 2 ) to solve this problem.",
"Our proposed model of P ( path | w 1 , w 2 ) can be learned in an unsupervised manner and can generalize the co-occurrences of word pairs and dependency paths.",
"This model can be used to augment the path data of word pairs that do not co-occur in the corpus, and extract features capturing relational information from word pairs.",
"Our experimental results demonstrate that our methods improve on previous neural approaches based on dependency paths and successfully solve the focused problem.",
"The semantic relations between words are important for many natural language processing tasks, such as recognizing textual entailment (Dagan et al., 2010) and question answering (Yang et al., 2017).",
"Moreover, these relations have been also used as features for neural methods in machine translation (Sennrich and Haddow, 2016) and relation extraction (Xu et al., 2015).",
"This type of information is provided by manually-created semantic taxonomies, such as WordNet (Fellbaum, 1998).",
"However, these resources are expensive to expand manually and have limited domain coverage.",
"Thus, the automatic detection of lexico-semantic relations has been studied for several decades.",
"One of the most popular approaches is based on patterns that encode a specific kind of relationship (synonym, hypernym, etc.) between adjacent words.",
"This type of approach is called a path-based method.",
"Lexico-syntactic patterns between two words provide information on semantic relations.",
"For example, if we see the pattern, animals such as a dog in a corpus, we can infer that animal is a hypernym of dog .",
"On the basis of this as-sumption, Hearst (1992) detected the hypernymy relation of two words from a corpus based on several handcrafted lexico-syntactic patterns, e.g., X such as Y .",
"Snow et al. (2004) used as features indicative dependency paths, in which target word pairs co-occurred, and trained a classifier with data to detect hypernymy relations.",
"In recent studies, Shwartz et al. (2016) proposed a neural path-based model that encoded dependency paths between two words into low-dimensional dense vectors with recurrent neural networks (RNN) for hypernymy detection.",
"This method can prevent sparse feature space and generalize indicative dependency paths for detecting lexico-semantic relations.",
"Their model outperformed the previous state-of-the-art path-based method.",
"Moreover, they demonstrated that these dense path representations capture complementary information with word embeddings that contain individual word features.",
"This was indicated by the experimental result that showed the combination of path representations and word embeddings improved classification performance.",
"In addition, Shwartz and Dagan (2016) showed that the neural path-based approach, combined with word embeddings, is effective in recognizing multiple semantic relations.",
"Although path-based methods can capture the relational information between two words, these methods can obtain clues only for word pairs that 1123 co-occur in a corpus.",
"Even with a very large corpus, it is almost impossible to observe a co-occurrence of arbitrary word pairs.",
"Thus, path-based methods are still limited in terms of the number of word pairs that are correctly classified.",
"To address this problem, we propose a novel method with modeling P ( path | w 1 , w 2 ) in a neural unsupervised manner, where w 1 and w 2 are the two target words, and path is a dependency path that can connect the joint co-occurrence of w 1 and w 2 .",
"A neural model of P ( path | w 1 , w 2 ) can generalize co-occurrences of word pairs and dependency paths, and infer plausible dependency paths which connect two words that do not co-occur in a corpus.",
"After unsupervised learning, this model can be used in two ways: Path data augmentation through predicting dependency paths that are most likely to co-occur with a given word pair.",
"Feature extraction of word pairs, capturing the information of dependency paths as contexts where two words co-occur.",
"While previous supervised path-based methods used only a small portion of a corpus, combining our models makes it possible to use an entire corpus for learning process.",
"Experimental results for four common datasets of multiple lexico-semantic relations show that our methods improve the classification performance of supervised neural path-based models.",
"Supervised lexical semantic relation detection represents word pairs ( w 1 , w 2 ) as feature vectors v and trains a classifier with these vectors based on training data.",
"For word pair representations v , we can use the distributional information of each word and path information in which two words co-occur.",
"Several methods exploit word embeddings (Mikolov et al., 2013; Levy and Goldberg, 2014; Pennington et al., 2014) as distributional information.",
"These methods use a combination of each word's embeddings, such as vector concatenation (Baroni et al., 2012; Roller and Erk, 2016) or vector difference (Roller et al., 2014; Weeds et al., 2014; Vylomova et al., 2016), as word pair representations.",
"While these distributional supervised methods do not require co-occurrences of two words in a sentence, Levy et al. (2015) notes that these methods do not learn the relationships between two words but rather the separate property of each word, i.e., whether or not each word tends to have a target relation.",
"In contrast, supervised path-based methods can capture relational information between two words.",
"These methods represent a word pair as the set of lexico-syntactic paths, which connect two target words in a corpus (Snow et al., 2004).",
"However, these methods suffer from sparse feature space, as they cannot capture the similarity between indicative lexico-syntactic paths, e.g., X is a species of Y and X is a kind of Y .",
"A neural path-based method can avoid the sparse feature space of the previous path-based methods (Shwartz et al., 2016; Shwartz and Dagan, 2016).",
"Instead of treating an entire dependency path as a single feature, this model encodes a sequence of edges of a dependency path into a dense vector using a long short-term memory network (LSTM) (Hochreiter and Schmidhuber, 1997).",
"A dependency path connecting two words can be extracted from the dependency tree of a sentence.",
"For example, given the sentence A dog is a mammal, with X = dog and Y = mammal , the dependency path connecting the two words is X/NOUN/nsubj/> be/VERB/ROOT/Y/NOUN/attr/< .",
"Each edge of a dependency path is composed of a lemma, part of speech (POS), dependency label, and dependency direction.",
"Shwartz et al. (2016) represents each edge as the concatenation of its component embeddings: e = [ v l ; v pos ; v dep ; v dir ] (1) where v l , v pos , v dep ,and v dir represent the embedding vectors of the lemma, POS, dependency label, and dependency direction respectively.",
"This edge vector e is an input of the LSTM at each time step.",
"Here, h t , the hidden state at time step t , is abstractly computed as: h t = LST M ( h t 1 , e t ) (2) where LST M computes the current hidden state given the previous hidden state h t 1 and the current input edge vector e t along with the LSTM architecture.",
"The final hidden state vector o p is 1124 treated as the representation of the dependency path p .",
"When classifying a word pair ( w 1 , w 2 ) , the word pair is represented as the average of the dependency path vectors that connect two words in a corpus: v ( w 1 ,w 2 ) = v paths ( w 1 ,w 2 ) = P p paths ( w 1 ,w 2 ) f p, ( w 1 ,w 2 ) o p P p paths ( w 1 ,w 2 ) f p, ( w 1 ,w 2 ) (3) where paths ( w 1 , w 2 ) is the set of dependency paths that connects w 1 and w 2 in the corpus, and f p, ( w 1 ,w 2 ) is the frequency of p in paths ( w 1 , w 2 ) .",
"The final output of the network is calculated as follows: y = softmax ( W v ( w 1 ,w 2 ) + b ) (4) where W R | c | d is a linear transformation matrix, b R | c | is a bias parameter, | c | is the number of the output class, and d is the size of v ( w 1 ,w 2 ) .",
"This neural path-based model can be combined with distributional methods.",
"Shwartz et al. (2016) concatenated v paths ( w 1 ,w 2 ) to the word embeddings of w 1 and w 2 , redefining v ( w 1 ,w 2 ) as: v ( w 1 ,w 2 ) = [ v w 1 ; v paths ( w 1 ,w 2 ) ; v w 2 ] (5) where v w 1 and v w 2 are word embeddings of w 1 and w 2 , respectively.",
"This integrated model, named LexNET, exploits both path information and distributional information, and has high generalization performance for lexical semantic relation detection.",
"All path-based methods, including the neural ones, suffer from data sparseness as they depend on word pair co-occurrences in a corpus.",
"However, we cannot observe all co-occurrences of semantically related words even with a very large corpus because of Zipf's law, which states that the frequency distribution of words has a long tail; in other words, most words occur very infrequently (Hanks, 2009).",
"In this paper, we refer to this phenomenon as the missing path problem.",
"This missing path problem leads to the fact that path-based models cannot find any clues for two words that do not co-occur.",
"Thus, in the neural path-based method, paths ( w 1 , w 2 ) for these word pairs is padded with an empty path, like UNK-lemma/UNK-POS/UNK-dep/UNK-dir .",
"However, this process makes path-based classi-fiers unable to distinguish between semantically-related pairs with no co-occurrences and those that have no semantic relation.",
"In an attempt to solve this problem, Necsulescu et al. (2015) proposed a method that used a graph representation of a corpus.",
"In this graph, words and dependency relations were denoted as nodes and labeled directed edges, respectively.",
"From this graph representation, paths linking two target words can be extracted through bridging words, even if the two words do not co-occur in the corpus.",
"They represent word pairs as the sets of paths linking word pairs on the graph and train a support vector machine classifier with training data, thereby improving recall.",
"However, the authors reported that this method still suffered from data sparseness.",
"In this paper, we address this missing path problem, which generally restricts path-based methods, by neural modeling P ( path | w 1 , w 2 ) .",
"We present a novel method for modeling P ( path | w 1 , w 2 ) .",
"The purpose of this method is to address the missing path problem by generalizing the co-occurrences of word pairs and dependency paths.",
"To model P ( path | w 1 , w 2 ) , we used the context-prediction approach (Collobert and Weston, 2008; Mikolov et al., 2013; Levy and Goldberg, 2014; Pennington et al., 2014), which is a widely used method for learning word embeddings.",
"In our proposed method, word pairs and dependency paths are represented as embeddings that are updated with unsupervised learning through predicting path from w 1 and w 2 (Section 3.1).",
"After the learning process, our model can be used to (1) augment path data by predicting the plausibility of the co-occurrence of two words and a dependency path (Section 3.2); and to (2) extract useful features from word pairs, which reflect the information of co-occurring dependency paths (Section 3.3).",
"There are many possible ways to model P ( path | w 1 , w 2 ) .",
"In this paper, we present a straightforward and efficient architecture, similar to the skip-gram with negative sampling (Mikolov et al., 2013).",
"We are able to extract many triples ( w 1 , w 2 , path ) from a corpus after dependency parsing.",
"We denote a set of these triples as D .",
"These triples are the instances used for the unsupervised learning of P ( path | w 1 , w 2 ) .",
"Given ( w 1 , w 2 , path ) , our model learns through predicting path from w 1 and w 2 .",
"h ( w 1 ,w",
"2) = tanh ( W 1 [ v w 1 ; v w 2 ] + b 1 ) (6) h ( w 1 ,w 2 ) = tanh ( W 2 h ( w 1 ,w",
"2) + b 2 ) (7) where [ v w 1 ; v w 2 ] is the concatenation of the word embeddings of w 1 and w 2 ; W 1 , b 1 , W 2 , and b 2 are the parameter matrices and bias parameters of the two linear transformations; and h ( w 1 ,w 2 ) is the representation of the word pair.",
"We associate each path with the embedding v path , initialized randomly.",
"While we use a simple way to represent dependency paths in this paper, LSTM can be used to encode each path in the way described in Section 2.2.",
"If LSTM is used, learning time increases but similarities among paths will be captured.",
"representations h ( w 1 ,w 2 ) and the dependency path representations v path , our model was trained to distinguish real ( w 1 , w 2 , path ) triples from incorrect ones.",
"The log-likelihood objective is as follows: L = X ( w 1 ,w 2 ,path ) D log ( v path h ( w 1 ,w 2 ) ) + X ( w 1 ,w 2 ,path 0 ) D 0 log ( v path 0 h ( w 1 ,w 2 ) ) (8) where, D 0 is the set of randomly generated negative samples.",
"We constructed n triples ( w 1 , w 2 , path 0 ) for each ( w 1 , w 2 , path ) D , where n is a hyperparameter and each path 0 is drawn according to its unigram distribution raised to the 3 / 4 power.",
"The objective L was maximized using the stochastic gradient descent algorithm.",
"After the unsupervised learning described above, our model of P ( path | w 1 , w 2 ) can assign the plausibility score ( v path h ( w 1 ,w 2 ) ) to the co-occurrences of a word pair and a dependency path.",
"We can then append the plausible dependency paths to paths ( w 1 , w 2 ) , the set of dependency paths that connects w 1 and w 2 in the corpus, based on these scores.",
"We calculate the score of each dependency path given ( X = w 1 , Y = w 2 ) and append the k dependency paths with the highest scores to paths ( w 1 , w 2 ) , where k is a hyperparameter.",
"We perform the same process given ( X = w 2 , Y = w 1 ) with the exception of swapping the X and Y in the dependency paths to be appended.",
"As a result, we add 2 k dependency paths to the set of dependency paths for each word pair.",
"Through this data augmentation, we can obtain plausible dependency paths even when word pairs do not co-occur in the corpus.",
"Note that we retain the empty path indicators of paths ( w 1 , w 2 ) , as we believe that this information contributes to classifying two unrelated words.",
"Our model can be used as a feature extractor of word pairs.",
"We can exploit h ( w 1 ,w 2 ) to represent the word pair ( w 1 , w 2 ) .",
"This representation captures the information of co-occurrence dependency paths of ( w 1 , w 2 ) in a generalized fashion.",
"Thus, h ( w 1 ,w 2 ) is used to construct the pseudo-path representation v p paths ( w 1 ,w 2 ) .",
"With our model, we represent the word pair ( w 1 , w 2 ) as 1126 datasets relations K&H+N hypernym, meronym, co-hyponym, random BLESS hypernym, meronym, co-hyponym, random ROOT09 hypernym, co-hyponym, random EVALution hypernym, meronym, attribute, synonym, antonym, holonym, substance meronym Table 1: The relation types in each dataset.",
"This representation can be used for word pair classification tasks, such as lexical semantic relation detection.",
"In this section, we examine how our method improves path-based models on several datasets for recognizing lexical semantic relations.",
"In this paper, we focus on major noun relations, such as hypernymy, co-hypernymy, and meronymy.",
"We relied on the datasets used in Shwartz and Dagan (2016); K&H+N (Necsulescu et al., 2015).",
"BLESS (Baroni and Lenci, 2011), EVALution (Santus et al., 2015), and ROOT09 (Santus et al., 2016).",
"These datasets were constructed with knowledge resources (e.g., WordNet, Wikipedia), crowd-sourcing, or both.",
"We used noun pair instances of these datasets.",
"1 Table 1 displays the relations in each dataset used in our experiments.",
"Note that we removed the two relations Entails and MemberOf with few instances from EVALution following Shwartz and Dagan (2016).",
"For data splitting, we used the presplitted train/val/test sets from Shwartz and Dagan (2016) after removing all but the noun pairs from each set.",
"For path-based methods, we used the June 2017 Wikipedia dump as a corpus and extracted ( w 1 , w 2 , path ) triples of noun pairs using the dependency parser of spaCy 2 to construct D .",
"In this process, w 1 and w 2 were lemmatized with spaCy.",
"We only used the dependency paths which oc-1 We focused only noun pairs to shorten the unsupervised learning time, though this restriction is not necessary for our methods and the unsupervised learning is still tractable.",
"curred at least five times following the implementation",
"3 Table 2 displays the number of instances and the proportion of the instances for which at least one dependency path was obtained.",
"implementation of Shwartz and Dagan (2016).",
"We conducted experiments with three neural path-based methods.",
"The implementation details below follow those in Shwartz and Dagan (2016).",
"We implemented all models using Chainer.",
"4 Neural Path-Based Model (NPB).",
"We implemented and trained the neural path-based model described in Section 2.2.",
"We used the two-layer LSTM with 60-dimensional hidden units.",
"An input vector was composed of embedding vectors of the lemma (50 dims), POS (4 dims), dependency label (5 dims), and dependency direction (1 dim).",
"Regularization was applied by a dropout on each of the components embeddings (Iyyer et al., 2015; Kiperwasser and Goldberg, 2016).",
"LexNET.",
"We implemented and trained the integrated model LexNET as described in Section 2.2.",
"The LSTM details are the same as in the NPB model.",
"LexNET h.",
"This model, a variant of LexNET, has an additional hidden layer between the output layer and v ( w 1 ,w 2 ) of Equation (5).",
"Because of this additional hidden layer, this model can take into account the interaction of the path information 3 https://github.com/vered1986/LexNET 4 https://chainer.org 1127 LSTM UNK-lemma/UNK-POS/UNK-dep/UNK-dir define/VERB/ROOT/-as/ADP/prep/< Y/NOUN/pobj/< X/NOUN/dobj/> X/NOUN/nsubj/> be/VERB/ROOT/-Y/NOUN/attri/< LSTM LSTM v w 1 v w 2 v paths v p \u0000 paths Output Our learned model LexNET P ( path | w 1 ,w 2 ) <latexit sha1_base64=\"1lOwjvqBRe6ZenWVOMd+aktFOEo=\">AAACmnichVHLSuRAFD3Gd/tqFUHQRWOjOCDNTeMjuhJno7hpH62CSkhiqcG8SKpbNPoD/oALVwqzUD/AD3DjD7jwE2SWDrhx4e10M4OIzi2q6tSpe27V4ZqBY0eS6KlOqW9obGpuaU21tXd0dqW7e9YivxRaomj5jh9umEYkHNsTRWlLR2wEoTBc0xHr5sHPyv16WYSR7Xur8igQ266x59m7tmVIpvR0X7xlupnCaGDI/ZNDXR071PM/TvV0lnLT0+PahJZhoE1qaj6j5iiJvyCLWhT89B22sAMfFkpwIeBBMnZgIOKxCRWEgLltxMyFjOzkXuAUKdaWOEtwhsHsAa97fNqssR6fKzWjRG3xKw7PkJUZDNMjXdMLPdAtPdPbl7XipEblL0e8m1WtCPSus/6V1/+qXN4l9v+pvlGYnP29J4ldaIkXm70FCVNxaVXrl4/PX1ZmlofjEbqi3+zvkp7onh165T/WryWxfIEUN+hTOz6DtXxOpZy6NJ6dnau1qgUDGMIo92MKs5hHAUV+N8YlbnCrDCpzyoKyWE1V6mqaXnwIZfUdpp+YBg==</latexit> <latexit sha1_base64=\"1lOwjvqBRe6ZenWVOMd+aktFOEo=\">AAACmnichVHLSuRAFD3Gd/tqFUHQRWOjOCDNTeMjuhJno7hpH62CSkhiqcG8SKpbNPoD/oALVwqzUD/AD3DjD7jwE2SWDrhx4e10M4OIzi2q6tSpe27V4ZqBY0eS6KlOqW9obGpuaU21tXd0dqW7e9YivxRaomj5jh9umEYkHNsTRWlLR2wEoTBc0xHr5sHPyv16WYSR7Xur8igQ266x59m7tmVIpvR0X7xlupnCaGDI/ZNDXR071PM/TvV0lnLT0+PahJZhoE1qaj6j5iiJvyCLWhT89B22sAMfFkpwIeBBMnZgIOKxCRWEgLltxMyFjOzkXuAUKdaWOEtwhsHsAa97fNqssR6fKzWjRG3xKw7PkJUZDNMjXdMLPdAtPdPbl7XipEblL0e8m1WtCPSus/6V1/+qXN4l9v+pvlGYnP29J4ldaIkXm70FCVNxaVXrl4/PX1ZmlofjEbqi3+zvkp7onh165T/WryWxfIEUN+hTOz6DtXxOpZy6NJ6dnau1qgUDGMIo92MKs5hHAUV+N8YlbnCrDCpzyoKyWE1V6mqaXnwIZfUdpp+YBg==</latexit> <latexit sha1_base64=\"1lOwjvqBRe6ZenWVOMd+aktFOEo=\">AAACmnichVHLSuRAFD3Gd/tqFUHQRWOjOCDNTeMjuhJno7hpH62CSkhiqcG8SKpbNPoD/oALVwqzUD/AD3DjD7jwE2SWDrhx4e10M4OIzi2q6tSpe27V4ZqBY0eS6KlOqW9obGpuaU21tXd0dqW7e9YivxRaomj5jh9umEYkHNsTRWlLR2wEoTBc0xHr5sHPyv16WYSR7Xur8igQ266x59m7tmVIpvR0X7xlupnCaGDI/ZNDXR071PM/TvV0lnLT0+PahJZhoE1qaj6j5iiJvyCLWhT89B22sAMfFkpwIeBBMnZgIOKxCRWEgLltxMyFjOzkXuAUKdaWOEtwhsHsAa97fNqssR6fKzWjRG3xKw7PkJUZDNMjXdMLPdAtPdPbl7XipEblL0e8m1WtCPSus/6V1/+qXN4l9v+pvlGYnP29J4ldaIkXm70FCVNxaVXrl4/PX1ZmlofjEbqi3+zvkp7onh165T/WryWxfIEUN+hTOz6DtXxOpZy6NJ6dnau1qgUDGMIo92MKs5hHAUV+N8YlbnCrDCpzyoKyWE1V6mqaXnwIZfUdpp+YBg==</latexit> <latexit sha1_base64=\"1lOwjvqBRe6ZenWVOMd+aktFOEo=\">AAACmnichVHLSuRAFD3Gd/tqFUHQRWOjOCDNTeMjuhJno7hpH62CSkhiqcG8SKpbNPoD/oALVwqzUD/AD3DjD7jwE2SWDrhx4e10M4OIzi2q6tSpe27V4ZqBY0eS6KlOqW9obGpuaU21tXd0dqW7e9YivxRaomj5jh9umEYkHNsTRWlLR2wEoTBc0xHr5sHPyv16WYSR7Xur8igQ266x59m7tmVIpvR0X7xlupnCaGDI/ZNDXR071PM/TvV0lnLT0+PahJZhoE1qaj6j5iiJvyCLWhT89B22sAMfFkpwIeBBMnZgIOKxCRWEgLltxMyFjOzkXuAUKdaWOEtwhsHsAa97fNqssR6fKzWjRG3xKw7PkJUZDNMjXdMLPdAtPdPbl7XipEblL0e8m1WtCPSus/6V1/+qXN4l9v+pvlGYnP29J4ldaIkXm70FCVNxaVXrl4/PX1ZmlofjEbqi3+zvkp7onh165T/WryWxfIEUN+hTOz6DtXxOpZy6NJ6dnau1qgUDGMIo92MKs5hHAUV+N8YlbnCrDCpzyoKyWE1V6mqaXnwIZfUdpp+YBg==</latexit> +Aug +Rep Predicted path Predicted path Averagepooling Figure 2: Illustration of +Aug and +Rep applied to LexNET.",
"The size of the additional hidden layer was set to 60.",
"Following Shwartz and Dagan (2016), we optimized each model using Adam (whose learning rate is 0.001) while tuning the dropout rate dr among { 0 .",
"0 , 0 .",
"2 , 0 .",
"4 } on the validation set.",
"The minibatch size was set to 100.",
"We initialized the lemma embeddings of LSTM and concatenated the word embeddings of LexNET with the pretrained 50-dimensional GloVe vector.",
"5 Training was stopped if performance on the validation set did not improve for seven epochs, and the best model for test evaluation was selected based on the score of the validation set.",
"We implemented and trained our model of P ( path | w 1 , w 2 ) , described in Section 3.1, as follows.",
"We used the most frequent 30,000 paths connecting nouns as the context paths for unsupervised learning.",
"We initialized word embeddings with the same pretrained GloVe vector as the baseline models.",
"For unsupervised learning data, we 5 https://nlp.stanford.edu/projects/ glove/ extracted ( w 1 , w 2 , path ) , whose w 1 and w 2 are included in the vocabulary of the GloVe vector, and whose path is included in the context paths, from D .",
"The number of these triples was 217,737,765.",
"We set the size of h ( w 1 ,w 2 ) , h ( w 1 ,w 2 ) , and v path for context paths to 100.",
"The negative sampling size n was set to 5.",
"We trained our model for five epochs using Adam (whose learning rate is 0.001).",
"The minibatch size was 100.",
"To preserve the distributional regularity of the pretrained word embeddings, we did not update the input word embeddings during the unsupervised learning.",
"With our trained model, we applied the two methods described in Section 3.2 and 3.3 to the NPB and LexNET models as follows: +Aug.",
"We added the most plausible 2 k paths to each paths ( w 1 , w 2 ) as in Section 3.2.",
"We tuned k { 1 , 3 , 5 } on the validation set.",
"+Rep.",
"We concatenated v p paths ( w 1 ,w 2 ) in Equation (9) with the penultimate layer.",
"To focus on the pure contribution of unsupervised learning, we did not update this component during supervised learning.",
"Figure 2 illustrates +Aug and +Rep applied to LexNET in the case where the two target words, w 1 and w 2 , do not co-occur in the corpus.",
"Models K&H+N BLESS ROOT09 EVALution NPB 0.495 0.773 0.731 0.463 NPB+Aug 0.897 0.842 0.778 0.489",
"In this section we examine how our methods improved the baseline models.",
"Following the previous research (Shwartz and Dagan, 2016), the performance metrics were the averaged F 1 of scikit-learn (Pedregosa et al., 2011), which computes the F 1 for each relation, and reports their average weighted by the number of true instances for each relation.",
"We examined whether or not our path data augmentation method +Aug contributes to the neural path-based method.",
"The results are displayed in Table 3.",
"Applying our path data augmentation method improved the classification performance on each dataset.",
"Especially for K&H+N, the large dataset where the three-fourths of word pairs had no paths, our method significantly improved the performance.",
"This result shows that our path data augmentation effectively solves the missing path problem.",
"Moreover, the model with our method outperforms the baseline on EVALution, in which nearly all word pairs co-occurred in the corpus.",
"This indicates that the predicted paths provide useful information and enhance the path-based classification.",
"We examine the paths that were predicted by our model of P ( path | w 1 , w 2 ) in Section 6.1.",
"We investigated how our methods using modeling P ( path | w 1 , w 2 ) improved the baseline integrated model, LexNET.",
"Table 4 displays the results.",
"Our proposed methods, +Aug and +Rep, improved the performance of LexNET on each dataset.",
"6 Moreover, the best score on each dataset was achieved by the model to which our methods were applied.",
"These results show that our methods are also effective with the integrated models based on path information and distributional information.",
"The table also shows that LexNET+Rep outperforms LexNET h, though the former has fewer parameters to be tuned during the supervised learning than the latter.",
"This indicates that the word pair representations of our model capture information beyond the interaction of two word embeddings.",
"We investigate the properties of our word pair representation in Section 6.2.",
"Finally, We found that applying both methods did not necessarily yield the best performance.",
"A possible explanation for this is that applying both methods is redundant, as both +Aug and +Rep depend on the same model of P ( path | w 1 , w 2 ) .",
"In this section, we investigate the properties of the predicted dependency paths and word pair representations of our model.",
"We extracted the word pairs of BLESS without co-occurring dependency paths and predicted the",
"6 The improvement for K&H+N is smaller than those for the others.",
"We think this owes to most instances of this dataset being correctly classified only by distributional information.",
"This view is supported by Shwartz and Dagan (2016), in which LexNET hardly outperformed a distributional method for this dataset.",
"plausible dependency paths of those pairs with our model of P ( path | w 1 , w 2 ) .",
"The examples are displayed in Table 5 at the top three paths.",
"We used the bold style for the paths that we believe to be indicative or representative for a given relationship.",
"Our model predicted plausible and indicative dependency paths for each relation, although the predicted paths also contain some implausible or unindicative ones.",
"For hypernymy, our model predicted variants of the is-a path according to domains, such as X is Y manufactured in the clothing domain and X is a species of Y in the animal domain.",
"For ( owl, rump ) , which is a meronymy pair, the top predicted path was X that Y represent .",
"This is not plausible for ( owl, rump ) but is indicative for meronymy, particularly memberof relations.",
"Moreover, domain-independent paths which indicate meronymy, such as all X have Y , were predicted.",
"For ( mug, plastic ) , one of the predicted paths, X is made from Y , is also a domain-independent indicative path for meronymy.",
"For co-hypernymy, our model predicted domain-specific paths, which indicate that two nouns are of the same kind.",
"For examples, given X leaf and Y and X specie and Y of ( carrot, beans ) , we can infer that both X and Y are plants or vegetables.",
"Likewise, given play X, guitar, and Y of ( cello, kazoo ) , we can infer that both X and Y are musical instruments.",
"These examples show that our path data augmentation is effective for the missing path problem and enhances path-based models.",
"We visualized the word pair representations v p paths ( w 1 ,w 2 ) to examine their specific properties.",
"In BLESS, every pair was annotated with 17 domain class labels.",
"For each domain, we reduced the dimensionality of the representations using t-SNE (Maaten and Hinton, 2008) and plotted the data points of the hypernyms, co-hyponyms, and meronyms.",
"We compared our representations with the concatenation of two word embeddings (pre-trained 50-dimensional GloVe).",
"The examples are displayed in Figure 3.",
"We found that our representations (the top row in Figure",
"3) grouped the word pairs according to their semantic relation in some specific domains based only on unsupervised learning.",
"This property is desirable for the lexical semantic relation detection task.",
"In contrast to our representations, 1130 Figure 3: Visualization of the our word pair representations v p paths ( w 1 ,w 2 ) (top row) and the concatenation of two word embeddings (bottom row) using t-SNE in some domains.",
"the concatenation of word embeddings (the bottom row in Figure",
"3) has little or no such tendency in all domains.",
"The data points of the concatenation of word embeddings are scattered or jumbled.",
"This is because the concatenation of word embeddings cannot capture the relational information of word pairs but only the distributional information of each word (Levy et al., 2015).",
"This visualization further shows that our word pair representations can be used as pseudo-path representations to alleviate the missing path problem.",
"In this paper, we proposed the novel methods with modeling P ( path | w 1 , w 2 ) to solve the missing path problem.",
"Our neural model of P ( path | w 1 , w 2 ) can be learned from a corpus in an unsupervised manner, and can generalize co-occurrences of word pairs and dependency paths.",
"We demonstrated that this model can be applied in the two ways: (1) to augment path data by predicting plausible paths for a given word pair, and (2) to extract from word pairs useful features capturing co-occurring path information.",
"Finally, our experiments demonstrated that our methods can improve upon the previous models and successfully solve the missing path problem.",
"In future work, we will explore unsupervised learning with a neural path encoder.",
"Our model bears not only word pair representations but also dependency path representations as context vectors.",
"Thus, we intend to apply these representations to various tasks, which path representations contribute to.",
"This work was supported by JSPS KAKENHI Grant numbers JP17H01831, JP15K12873."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"method",
"result",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"method",
"result",
"result",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"objective",
"other"
] |
[
"Semantic similarity detection is a fundamental task in natural language understanding.",
"Adding topic information has been useful for previous feature-engineered semantic similarity models as well as neural models for other tasks.",
"There is currently no standard way of combining topics with pretrained contextual representations such as BERT.",
"We propose a novel topic-informed BERT-based architecture for pairwise semantic similarity detection and show that our model improves performance over strong neural baselines across a variety of English language datasets.",
"We find that the addition of topics to BERT helps particularly with resolving domain-specific cases.",
"Modelling the semantic similarity between a pair of texts is a crucial NLP task with applications ranging from question answering to plagiarism detection.",
"A variety of models have been proposed for this problem, including traditional feature-engineered techniques (Filice et al., 2017), hybrid approaches (Wu et al., 2017; Feng et al., 2017; Koreeda et al., 2017) and purely neural architectures (Wang et al., 2017; Tan et al., 2018; Deriu and Cieliebak, 2017).",
"Recent pretrained contextu-alised representations such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) have led to impressive performance gains across a variety of NLP tasks, including semantic similarity detection.",
"These models leverage large amounts of data to pretrain text encoders (in contrast to just individual word embeddings as in previous work) and have established a new pretrain-finetune paradigm.",
"While large improvements have been achieved on paraphrase detection (Tomar et al., 2017; Gong et al., 2018), semantic similarity detection in Community Question Answering (CQA) remains a challenging problem.",
"CQA leverages user-generated content from question answering websites (e.g. StackExchange) to answer complex real-world questions (Nakov et al., 2017).",
"The task requires modelling the relatedness between question-answer pairs which can be challenging due to the highly domain-specific language of certain online forums and low levels of direct text overlap between questions and answers.",
"Topic models may provide additional signals for semantic similarity, as earlier feature-engineered models for semantic similarity detection successfully incorporated topics (Qin et al., 2009; Tran et al., 2015; Mihaylov and Nakov, 2016; Wu et al., 2017).",
"They could be especially useful for dealing with domain-specific language since topic models have been exploited for domain adaptation (Hu et al., 2014; Guo et al., 2009).",
"Moreover, recent work on neural architectures has shown that the integration of topics can yield improvements in other tasks such as language modelling (Ghosh et al., 2016), machine translation (Chen et al., 2016), and summarisation (Narayan et al., 2018; Wang et al., 2018).",
"We therefore introduce a novel architecture for semantic similarity detection which incorporates topic models and BERT.",
"More specifically, we make the following contributions:",
"1. We propose tBERT a simple architecture combining topics with BERT for semantic similarity prediction (section 3).",
"1 2. We demonstrate that tBERT achieves improvements across multiple semantic similarity prediction datasets against a finetuned vanilla BERT and other neural models in both F1 and stricter evaluation metrics (section 5).",
"3. We show in our error analysis that tBERT's gains are prominent on domain-specific cases, such as those encountered in CQA (section 5).",
"We select popular benchmark datasets featuring different sizes (small vs. large), tasks (QA vs. paraphrase detection) and sentence lengths (short vs. long) as summarised in Table",
"1. Examples for each dataset are provided in Appendix A. MSRP The Microsoft Research Paraphrase dataset (MSRP) contains pairs of sentences from news websites with binary labels for paraphrase detection (Dolan and Brockett, 2005).",
"SemEval The SemEval CQA dataset (Nakov et al., 2015, 2016, 2017) comprises three subtasks based on threads and posts from the online expat forum Qatar Living .",
"2 Each subtask contains an initial post as well as 10 possibly relevant posts with binary labels and requires to rank relevant posts above non-relevant ones.",
"In subtask A, the posts are questions and comments from the same thread, in an answer ranking scenario.",
"Subtask B is question paraphrase ranking.",
"Subtask C is similar to A but comments were retrieved from an external thread, which increases the difficulty of the task.",
"Quora The Quora duplicate questions dataset contains more than 400k question pairs with binary labels and is by far the largest of the datasets.",
"3 The task is to predict whether two questions are paraphrases.",
"The setup is similar to SemEval subtask B, but framed as a classification rather than a ranking problem.",
"We use Wang et al. (2017)'s train/dev/test set partition.",
"All of the above datasets provide two short texts (usually a sentence long but in some cases consisting of multiple sentences).",
"From here onward we will use the term sentence' to refer to each short text.",
"We frame the task as predicting the semantic Dataset Task Len Size Quora paraphrase detection 13 404K MSRP paraphrase detection 22 5K SemEval (A) internal answer ranking 48 26K (B) paraphrase ranking 52 4K (C) external answer ranking 45 47K Table 1: Text pair similarity data sets.",
"2 Following convention, we use the 2016 test set as development set and 2017 test set as test set.",
"3 https://engineering.quora.com/Semantic-QuestionMatching-with-Deep-Learning similarity between two sentences in a binary classification task.",
"We use a binary classification setup as this is more generic and applies to all above datasets.",
"In this paper, we investigate if topic models can further improve BERT's performance for semantic similarity detection.",
"Our proposed t opic-informed BERT -based model (tBERT) is shown in Figure",
"1. We encode two sentences S 1 (with length N ) and S 2 (with length M ) with the uncased version of BERTBASE (Devlin et al., 2019), using the C vector from BERT's final layer corresponding to the CLS token in the input as sentence pair representation: C = BERT ( S 1 , S 2 ) R d (1) where d denotes the internal hidden size of BERT (768 for BERTBASE ).",
"While other topic models can be used, we experiment with two popular topic models: LDA (Blei et al., 2003) and GSDMM (Yin and Wang, 2014), see section 3.2 for details.",
"Based on previous research which successfully combined word and document level topics with neural architectures (Narayan et al., 2018), we further experiment with incorporating different topic types.",
"For document topics D 1 and D 2 , all tokens in a sentence are passed to the topic model to infer one topic distribution per sentence: D 1 = TopicModel ([ T 1 , ..., TN ]) R t (2) D 2 = TopicModel ([ T (cid:48) 1 , ..., T (cid:48) M ]) R t (3) where t indicates the number of topics.",
"Alternatively, for word topics W 1 and W 2 , one topic distri-w M ' w 1 ' w N w 1 W 2 W 1 C Topic model [CLS] [SEP] BERTC F 1 FNFSEP F' 1 F' MECLSE 1 ENESEP E' 1 E' M ... ... ... ... ... ...",
"w i = TopicModel ( T i ) R t (4) We combine the sentence pair vector with the sentence-level topic representations similar to Os-tendorff et al. (2019) as F = [ C ; D 1 ; D 2 ] R d +2 t (7) for document topics and as",
"for word topics (where ; denotes concatenation).",
"This is followed by a hidden and a softmax classification layer.",
"We train the model for 3 epochs with early stopping and cross-entropy loss.",
"Learning rates are tuned per dataset and random seed.",
"4 3.2 Choice of Topic Model Topic number and alpha value The number of topics and alpha values are important topic model hyper-parameters and dataset dependent.",
"We use the simple topic baseline (section 4) as a fast proxy (on average 12 seconds per experiment on CPU) to identify useful topic models for each dataset without expensive hyper-parameter tuning on the full tBERT model.",
"In our experiments, 70 to 90 topics with alpha values of 1 or 10 worked well.",
"5 MSRP Quora SemEval A B C BERT .906 .906 .714 .754 .414 tBERT with LDA + word topics .905 .911 .744 .766 .439 + doc topics .907 .909 .748 .761 .419 tBERT with GSDMM + word topics .918 .908 .752 .760 .447 + doc topics .915 .909 .751 .760 .424 Table 2: F1 scores of BERT-based models with different topic settings on development set.",
"Topic model and topic type LDA (Blei et al., 2003) is the most popular and widely used topic model, but it has been reported to be less suitable for short text (Hong and Davison, 2010).",
"Therefore, we also experiment with the popular short text topic model GSDMM (Yin and Wang, 2014).",
"To select the best setting for our final model (in Table 3), we evaluated different combinations of tBERT with LDA vs. GSDMM and word ( W 1 and W 2 ) vs. document topics ( D 1 and D 2 ) on the development partition of the datasets (Table 2).",
"The tBERT settings generally scored higher than BERT, with word topics ( W 1 and W 2 ) usually outperforming document topics.",
"Topic baselines As a simple baseline, we train a topic model (LDA or GSDMM) on the training portion of each dataset (combining training sets for SemEval subtasks) and calculate the Jensen-Shannon divergence (Lin, 1991) (JSD) between the topic distributions of the two sentences.",
"The model predicts a negative label if JSD is larger than a threshold and a positive label otherwise.",
"We tune threshold, number of topics and alpha value based on development set F1.",
"5 Previous systems For SemEval, we compare against the highest performing system of earlier work based on F1 score.",
"As these models rely on hand-crafted dataset-specific features (providing an advantage on the small datasets), we also include the only neural system without manual features (Deriu and Cieliebak, 2017).",
"For MSRP, we show a neural matching architecture (Pang et al., 2016).",
"For Quora, we compare against the Interactive Inference Network (Gong et al., 2018) using accuracy, as no F1 has been reported.",
"Siamese BiLSTM Siamese networks are a common neural baseline for sentence pair classification tasks (Yih et al., 2011; Wang et al., 2017).",
"We embed both sentences with pretrained GloVe embeddings (concatenated with ELMo for BiLSTM + ELMo) and encode them with two weight-sharing BiLSTMs, followed by max pooling and hidden layers.",
"BERT We encode the sentence pair with BERT's C vector (as in tBERT) followed by a softmax layer and finetune all layers for 3 epochs with early stopping.",
"Following Devlin et al. (2019), we tune learning rates on the development set of each dataset.",
"4 5 Results Evaluation We evaluate systems based on F1 scores ( Table 3) as this is more reliable for datasets with imbalanced labels (e.g. SemEval C) than accuracy.",
"We further report performance on difficult cases with non-obvious F1 score (Peinelt et al., 2019) which identifies challenging instances in the dataset based on lexical overlap and gold labels.",
"Dodge et al. (2020) recently showed that early stopping and random seeds can have considerable im-pact on the performance of finetuned BERT models.",
"We therefore use early stopping during finetuning and report average model performance across two seeds for BERT and tBERT models.",
"Overall trends The BERT-based models outperform the other neural systems, while closely competing with the feature-engineered system on the relatively small SemEval A dataset.",
"The simple topic baselines perform surprisingly well in comparison to much more sophisticated models, indicating the usefulness of topics for the tasks.",
"Do topics improve BERT's performance?",
"Adding LDA topics to BERT consistently improves F1 performance across all datasets.",
"Moreover, it improves performance on non-obvious cases over BERT on all datasets (except for Quora which contains many generic examples and few domain-specific cases, see Table 4).",
"The addition of GSDMM topics to BERT is slightly less stable: improving performance on MSRP, Semeval A and B, while dropping on Semeval C. The largest perfor-MSRP Quora SemEval A B C F1 on cases with named entities (total: 230/500) BERT .20 .54 .50 .53 .32 tBERT .35 .49 .52 .21 .56 (# of cases) (23) (31) (58) (60) (58) F1 on cases with domain-specific words (total: 159/500) BERT .18 .00 .36 .36 .26 tBERT .67 .50 .62 .40 .58 (# of cases) (14) (7) (36) (41) (61) F1 on cases with non-standard spelling (total: 53/500) BERT .00 N/A .20 .71 .43 tBERT .00 N/A .80 .00 .62 (# of cases) (1) (0) (20) (19) (13) Table 4: F1 for BERT and tBERT on annotated development set examples (100 cases per dataset) by manually annotated properties.",
"mance gains regardless of the chosen topic model are observed in the internal question-answering task (SemEval A).",
"Where can topics help?",
"We randomly sampled 100 examples (half only correct by BERT, half only correct by LDA-tBERT) from the development set of each dataset and manually annotated them (500 in total) with binary labels regarding three properties that may be associated with topic-related gains or losses (Table 4).",
"Named entities (e.g. iPhone ) and domain-specific words (e.g. murabaha ) occurred frequently in the datasets, while there were too few examples with non-standard spelling (e.g. thanx ) for meaningful comparisons.",
"tBERT generally performed better than BERT on examples with domain-specific cases.",
"Overall patterns were F1 non-obvious F1 MSRP Quora SemEval MSRP Quora SemEval A B C A B C Previous systems Filice et al. (2017) feature-based --.506 --.199 Wu et al. (2017) feature-based -.777 --.707 -Koreeda et al. (2017) feature-based --.197 --.028 Deriu and Cieliebak (2017) neural -.433 --.352 -Pang et al. (2016) neural .829 ----Gong et al. (2018) (accuracy) neural -(.891) ----Our implementation LDA topic baseline .799 .736 .684 .436 .096 .780 .606 .684 .172 .019 GSDMM topic baseline .796 .679 .663 .403 .102 .769 .448 .488 .130 .015 Siamese BiLSTM .763 .813 .671 .349 .126 .781 .740 .597 .168 .049 Siamese BiLSTM + ELMo .765 .832 .661 .345 .149 .775 .754 .599 .180 .073 BERT .876 .902 .704 .473 .268 .827 .860 .656 .243 .085 tBERT with LDA topics .884 .905 .768 .524 .273 .866 .859 .708 .258 .100 tBERT with GSDMM topics .883 .905 .766 .518 .233 .844 .856 .714 .266 .081 Table 3: Model performance on test set.",
"We reason that for domain-specific words which are unlikely to have occurred in pretraining (e.g. Fuwairit in Table 5), BERT may not have learned a good representation (even after finetuning) and hence can't make a correct prediction.",
"Here, topic models could serve as an additional source for dataset-specific information.",
"The usefulness of topics for such cases is also supported by previous work, which successfully leveraged topics for domain adaptation in machine translation (Hu et al., 2014) and named entity recognition (Guo et al., 2009).",
"Could we just finetune BERT longer?",
"Based on our observation that tBERT performs better on dataset-specific cases, one could assume that BERT may simply need to be finetuned longer than the usual 3 epochs to pick up more domain-specific information.",
"In an additional experiment, we finetuned BERT and tBERT (with LDA topics) for 9 epochs (see Figure 2 and Appendix G).",
"On most datasets, BERT reached peak performance within the first 3 epochs.",
"Although training for 4 or 7 s1 Are there good beaches in the Northern part of Qatar?",
"epochs achieved marginal gains on Semeval A and C, longer finetuning of BERT could not exceed tBERT's best performance from the first 3 epochs (dotted line) on any dataset.",
"We conclude that longer finetuning does not considerably boost BERT's performance.",
"Adding topics instead is more effective, while avoiding the burden of greatly increased training time (compare Appendix F).",
"In this work, we proposed a flexible framework for combining topic models with BERT.",
"We demonstrated that adding LDA topics to BERT consistently improved performance across a range of semantic similarity prediction datasets.",
"In our qualitative analysis, we showed that these improvements were mainly achieved on examples involving domain-specific words.",
"Future work may focus on how to directly induce topic information into BERT without corrupting pretrained information and whether combining topics with other pretrained contextual models can lead to similar gains.",
"Another research direction is to investigate if introducing more sophisticated topic models, such as named entity promoting topic models (Krasnash-chok and Jouili, 2018) into the proposed framework can further improve results.",
"This work was supported by Microsoft Azure and The Alan Turing Institute under the EPSRC grant EP/N510129/1."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"other"
] |
[
"Albert Y.S. Lam Department of Computing, The Hong Kong Polytechnic University, Hong Kong S.A.R. 1 University of California, San Diego 2",
"Abstract New intent discovery aims to uncover novel intent categories from user utterances to expand the set of supported intent classes.",
"It is a critical task for the development and ser-vice expansion of a practical dialogue system.",
"Despite its importance, this problem remains under-explored in the literature.",
"Existing approaches typically rely on a large amount of labeled utterances and employ pseudo-labeling methods for representation learning and clustering, which are label-intensive, inefficient, and inaccurate.",
"In this paper, we provide new solutions to two important research questions for new intent discovery: (1) how to learn semantic utterance representations and (2) how to better cluster utterances.",
"Particularly, we first propose a multi-task pre-training strategy to leverage rich unlabeled data along with external labeled data for representation learning.",
"Then, we design a new contrastive loss to exploit self-supervisory signals in unlabeled data for clustering.",
"Extensive experiments on three intent recognition benchmarks demonstrate the high effectiveness of our proposed method, which outperforms state-of-the-art methods by a large margin in both unsupervised and semi-supervised scenarios.",
"The source code will be available at https://github.",
"com/zhang-yu-wei/MTP-CLNN .",
"Why Study New Intent Discovery (NID)?",
"Recent years have witnessed the rapid growth of conversational AI applications.",
"To design a natural language understanding system, a set of expected customer intentions are collected beforehand to train an intent recognition model.",
"However, the pre-defined intents cannot fully meet customer needs.",
"This implies the necessity of expanding the intent recognition model by repeatedly integrating new intents discovered from unlabeled user utterances (Fig. 1).",
"To reduce the effort in manually identifying unknown intents from a mass of utterances, previous works commonly employ clustering algorithms to group utterances of similar intents (Cheung and Li, 2012; Hakkani-Tr et al., 2015; Padmasundari, 2018).",
"The cluster assignments thereafter can either be directly used as new intent labels or as heuristics for faster annotations.",
"Research Questions (RQ) and Challenges.",
"Current study of NID centers around two basic research questions: 1) How to learn semantic utterance representations to provide proper cues for clustering?",
"2) How to better cluster the utterances?",
"The study of the two questions are often interwoven in existing research.",
"Utterances can be represented according to different aspects such as the style of language, the related topics, or even the length of sentences.",
"It is important to learn semantic utterance representations to provide proper cues for clustering.",
"Simply applying a vanilla pre-trained language model (PLM) to generate utterance representations is not a viable solution, which leads to poor performance on NID as shown by the experimental results in Section 4.2.",
"Some recent works proposed to use labeled utterances of known intents 256 for representation learning (Forman et al., 2015; Haponchyk et al., 2018; Lin et al., 2020; Zhang et al., 2021c; Haponchyk and Moschitti, 2021), but they require a substantial amount of known intents and sufficient labeled utterances of each intent, which are not always available especially at the early development stage of a dialogue system.",
"Further, pseudo-labeling approaches are often exploited to generate supervision signals for representation learning and clustering.",
"For example, Lin et al. (2020) finetune a PLM with an utterance similarity prediction task on labeled utterances to guide the training of unlabeled data with pseudo-labels.",
"Zhang et al. (2021c) adopt a deep clustering method (Caron et al., 2018) that uses k -means clustering to produce pseudo-labels.",
"However, pseudo-labels are often noisy and can lead to error propagation.",
"Our Solutions.",
"In this work, we propose a simple yet effective solution for each research question.",
"Solution to RQ 1: multi-task pre-training.",
"We propose a multi-task pre-training strategy that takes advantage of both external data and internal data for representation learning.",
"Specifically, we leverage publicly available, high-quality intent detection datasets, following Zhang et al. (2021d), as well as the provided labeled and unlabeled utterances in the current domain, to fine-tune a pre-trained PLM to learn task-specific utterance representations for NID.",
"The multi-task learning strategy enables knowledge transfer from general intent detection tasks and adaptation to a specific application domain.",
"Solution to RQ 2: contrastive learning with nearest neighbors.",
"We propose to use a contrastive loss to produce compact clusters, which is motivated by the recent success of contrastive learning in both computer vision (Bachman et al., 2019; He et al., 2019; Chen et al., 2020; Khosla et al., 2020) and natural language processing (Gunel et al., 2021; Gao et al., 2021; Yan et al., 2021).",
"Contrastive learning usually maximizes the agreement between different views of the same example and minimize that between different examples.",
"However, the commonly used instance discrimination task may push away false negatives and hurts the clustering performance.",
"Inspired by a recent work in computer vision (Van Gansbeke et al., 2020), we introduce neighborhood relationship to customize the contrastive loss for clustering in both unsupervised (i.e., without any labeled utterances of known intents) and semi-supervised scenarios.",
"Intuitively, in a semantic feature space, neighboring utterances should have a similar intent, and pulling together neighboring samples makes clusters more compact.",
"Our main contributions are three-fold.",
"We show that our proposed multi-task pretraining method already leads to large performance gains over state-of-the-art models for both unsupervised and semi-supervised NID.",
"We propose a self-supervised clustering method for NID by incorporating neighborhood relationship into the contrastive learning objective, which further boosts performance.",
"We conduct extensive experiments and ablation studies on three benchmark datasets to verify the effectiveness of our methods.",
"New Intent Discovery.",
"The study of NID is still in an early stage.",
"Pioneering works focus on unsupervised clustering methods.",
"Shi et al. (2018) leveraged auto-encoder to extract features.",
"Perkins and Yang (2019) considered the context of an utterance in a conversation.",
"Chatterjee and Sengupta (2020) proposed to improve density-based models.",
"Some recent works (Haponchyk et al., 2018; Haponchyk and Moschitti, 2021) studied supervised clustering algorithms for intent labeling, yet it can not handle new intents.",
"Another line of works (Forman et al., 2015; Lin et al., 2020; Zhang et al., 2021c) investigated a more practical case where some known intents are provided to support the discovery of unknown intents, which is often referred to as semi-supervised NID.",
"To tackle semi-supervised NID, Lin et al. (2020) proposed to first perform supervised training on known intents with a sentence similarity task and then use pseudo labeling on unlabeled utterances to learn a better embedding space.",
"Zhang et al. (2021c) proposed to first pre-train on known intents and then perform k -means clustering to assign pseudo labels on unlabeled data for representation learning following Deep Clustering (Caron et al., 2018).",
"They also proposed to align clusters to accelerate the learning of top layers.",
"Another approach is to first classify the utterances as known and unknown and then uncover new intents with the unknown utterances (Vedula et al., 2020; Zhang et al., 2021b).",
"Hence, it relies on accurate classification in the first stage.",
"learning and a contrastive learning method for clustering.",
"In contrast to previous methods that rely on ample annotated data in the current domain for pre-training, our method can be used in an unsupervised setting and work well in data-scarce scenarios (Section 4.3).",
"Pre-training for Intent Recognition.",
"Despite the effectiveness of large-scale pre-trained language models (Radford and Narasimhan, 2018; Devlin et al., 2019; Liu et al., 2019; Brown et al., 2020), the inherent mismatch in linguistic behavior between the pre-training datasets and dialogues encourages the research of continual pre-training on dialogue corpus.",
"Most previous works proposed to pre-train on open domain dialogues in a self-supervised manner (Mehri et al., 2020; Wu et al., 2020; Henderson et al., 2020; Hosseini-Asl et al., 2020).",
"Recently, several works pointed out that pre-training with relavant tasks can be effective for intent recognition.",
"For example, Zhang et al. (2020) formulated intent recognition as a sentence similarity task and pre-trained on natural language inference (NLI) datasets.",
"Vulic et al. (2021); Zhang et al. (2021e) pre-trained with a contrastive loss on intent detection tasks.",
"Our multi-task pre-training method is inspired from Zhang et al. (2021d) which leverages publicly available intent datasets and unlabeled data in the current domain for pre-training to improve the performance of few-shot intent detection.",
"However, we argue that the method is more suitable to be applied for NID due to the natural existence of unlabeled utterances.",
"Contrastive Representation Learning.",
"Contrastive learning has shown promising results in computer vision (Bachman et al., 2019; Chen et al., 2020; He et al., 2019; Khosla et al., 2020) and gained popularity in natural language processing.",
"Some recent works used unsupervised contrastive learning to learn sentence embeddings (Gao et al., 2021; Yan et al., 2021; Kim et al., 2021; Giorgi et al., 2021).",
"Specifically, Gao et al. (2021); Yan et al. (2021) showed that contrastive loss can avoid an anisotropic embedding space.",
"Kim et al. (2021) proposed a self-guided contrastive training to improve the quality of BERT representations.",
"Giorgi et al. (2021) proposed to pre-train a universal sentence encoder by contrasting a randomly sampled text segment from nearby sentences.",
"Zhang et al. (2021e) demonstrated that self-supervised contrastive pre-training and supervised contrastive fine-tuning can benefit few-shot intent recognition.",
"Zhang et al. (2021a) showed that combining a contrastive loss with a clustering objective can improve short text clustering.",
"Our proposed contrastive loss is tailored for clustering, which encourages utterances with similar semantics to group together and avoids pushing away false negatives as in the conventional contrastive loss.",
"Problem Statement.",
"To develop an intent recognition model, we usually prepare a set of expected intents C k along with a few annotated utterances D labeledknown = { ( x i , y i ) | y i C k } for each intent.",
"After deployed, the system will encounter utterances D unlabeled = { x i | y i {C k , C u }} from both predefined (known) intents C k and unknown intents C u .",
"The aim of new intent discovery (NID) is to identify the emerging intents C u in D unlabeled .",
"NID can be viewed as a direct extension of out-of-distribution (OOD) detection, where we not only need to identify OOD examples but also discover the underlying clusters.",
"NID is also different from zero-shot learning in that we do not presume access to any kind of class information during training.",
"In this work, we consider both unsupervised and semi-supervised NID, which are distinguished by the existence of D labeledknown , following Zhang et al. (2021c).",
"Overview of Our Approach.",
"As shown in Fig. 2, we propose a two-stage framework that addresses the research questions mentioned in Sec. 1. In the first stage, we perform multi-task pre-training (MTP) that jointly optimizes a cross-entropy loss on external labeled data and a self-supervised loss on target unlabeled data (Sec. 3.1).",
"In the second stage, we first mine topK nearest neighbors of each training instance in the embedding space and then perform contrastive learning with nearest neighbors (CLNN) (Sec. 3.2).",
"After training, we employ a simple non-parametric clustering algorithm to obtain clustering results.",
"We propose a multi-task pre-training objective that combines a classification task on external data from publicly available intent detection datasets and a self-supervised learning task on internal data from the current domain.",
"Different from previous works (Lin et al., 2020; Zhang et al., 2021c), our pre-training method does not rely on annotated data ( D labeledknown ) from the current domain and hence can 258 Figure 2: The left part shows the overall workflow of our method where the training order is indicated by the red arrow.",
"Specifically, we first initialize the model with a pre-trained BERT encoder (Devlin et al., 2019).",
"Then, we employ a joint pre-training loss as in Zhang et al. (2021d).",
"The loss consists of a cross-entropy loss on external labeled data and a masked language modelling (MLM) loss on all available data from the current domain: L stg1 = L ce ( D labeled external ; ) (cid:124) (cid:123)(cid:122) (cid:125) supervised + L mlm ( D all internal ; ) (cid:124) (cid:123)(cid:122) (cid:125) self-supervised , (1) where are model parameters.",
"For the supervised classification task, we leverage an external public intent dataset with diverse domains (e.g., CLINC150 (Larson et al., 2019)), denoted as D labeled external , following Zhang et al. (2021d).",
"For the self-supervised MLM task, we use all available data (labeled or unlabeled) from the current domain, denoted as D all internal .",
"Intuitively, the classification task aims to learn general knowledge of intent recognition with annotated utterances in external intent datasets, while the self-supervised task learns domain-specific semantics with utterances collected in the current domain.",
"Together, they enable learning semantic utterance representations to provide proper cues for the subsequent clustering task.",
"As will be shown in Sec. 4.3, both tasks are essential for NID.",
"For semi-supervised NID, we can further utilize the annotated data in the current domain to conduct continual pre-training, by replacing D labeledexternal in Eq.",
"1 to D labeledknown .",
"This step is not included in unsupervised NID.",
"In the second stage, we propose a contrastive learning objective that pulls together neighboring instances and pushes away distant ones in the embedding space to learn compact representations for clustering.",
"Concretely, we first encode the utterances with the pre-trained model from stage 1. Then, for each utterance x i , we search for its top-K nearest neighbors in the embedding space using inner product as distance metric to form a neighborhood N i .",
"The utterances in N i are supposed to share a similar intent as x i .",
"During training, we sample a minibatch of utterances B = { x i } Mi =1 .",
"For each utterance x i B , we uniformly sample one neighbor x i from its neighborhood N i .",
"We then use data augmentation to generate x i and x i for x i and x i respectively.",
"Here, we treat x i and x i as two views of x i , which form a positive pair.",
"We then obtain an augmented batch B = { x i , x i } Mi =1 with all the generated samples.",
"To compute contrastive loss, we construct an adjacency matrix A for B , which is a 2 M 2 M binary matrix where 1 indicates positive relation (either being neighbors or having the same intent label in semi-supervised NID) and 0 indicates negative relation.",
"Hence, we can write the contrastive loss as: l i = 1 |C i | (cid:88) j C i log exp( sim ( h i , h j ) / ) (cid:80) 2 Mk = i exp( sim ( h i , h k ) / ) , (2) 259 L stg2 = 1 2 M 2 M (cid:88) i =1 l i , (3) where C i { A i,j = 1 | j { 1 , ..., 2 M }} denotes the set of instances having positive relation with x i and |C i | is the cardinality.",
"h i is the embedding for utterance x i .",
"is the temperature parameter.",
"sim ( , ) is a similarity function (e.g., dot product) on a pair of normalized feature vectors.",
"During training, the neighborhood will be updated every few epochs.",
"We implement the contrastive loss following Khosla et al. (2020).",
"Notice that the main difference between Eq.",
"2 and conventional contrastive loss is how we construct the set of positive instances C i .",
"Conventional contrastive loss can be regarded as a special case of Eq.",
"2 with neighborhood size K = 0 and the same instance is augmented twice to form a positive pair (Chen et al., 2020).",
"After contrastive learning, a non-parametric clustering algorithm such as k means can be applied to obtain cluster assignments.",
"Data Augmentation.",
"Strong data augmentation has been shown to be beneficial in contrastive learning (Chen et al., 2020).",
"We find that it is inefficient to directly apply existing data augmentation methods such as EDA (Wei and Zou, 2019), which are designed for general sentence embedding.",
"We observe that the intent of an utterance can be expressed by only a small subset of words such as suggest restaurant or book a flight.",
"While it is hard to identify the keywords for an unlabeled utterance, randomly replacing a small amount of tokens in it with some random tokens from the library will not affect intent semantics much.",
"This approach works well in our experiments (See Table 5 RTR).",
"Advantages of CLNN.",
"By introducing the notion of neighborhood relationship in contrastive learning, CLNN can 1) pull together similar instances and push away dissimilar ones to obtain more compact clusters; 2) utilize proximity in the embedding space rather than assigning noisy pseudo-labels (Van Gansbeke et al., 2020); 3) directly optimize in the feature space rather than clustering logits as in Van Gansbeke et al. (2020), which has been proven to be more effective by Rebuffi et al. (2020); and 4) naturally incorporate known intents with the adjacency matrix.",
"Datasets.",
"We evaluate our proposed method on three popular intent recognition benchmarks.",
"BANKING (Casanueva et al., 2020) is a fine-grained dataset with 77 intents collected from banking dialogues, StackOverflow (Xu et al., 2015) is a large scale dataset collected from online queries, M-CID (Arora et al., 2020) is a smaller dataset collected for Covid-19 services.",
"We choose CLINC150 (Larson et al., 2019) as our external public intent dataset in stage 1 due to its high-quality annotations and coverage of diverse domains.",
"The dataset statistics are summarized in Table 1. We use the same splits of BANKING and StackOverflow as in Zhang et al. (2021b).",
"Details about dataset splitting are provided in the Appendix.",
"Experimental Setup.",
"We evaluate our proposed method on both unsupervised and semi-supervised NID.",
"Notice that in unsupervised NID, no labeled utterances from the current domain are provided.",
"For clarity, we define two variables.",
"The proportion of known intents is defined as |C k | / ( |C k | + |C u | ) and referred to as known class ratio (KCR) , and the proportion of labeled examples for each known intent is denoted as labeled ratio (LAR) .",
"The labeled data are randomly sampled from the original training set.",
"Notice that, KCR = 0 means unsupervised NID, and KCR > 0 means semi-supervised NID.",
"In the following sections, we provide experimental results for both unsupervised NID and semi-supervised NID with KCR = { 25% , 50% , 75% } and LAR = { 10% , 50% } .",
"Evaluation Metrics.",
"We adopt three popular evaluation metrics for clustering: normalized mutual information (NMI), adjusted rand index (ARI), and accuracy (ACC).",
"Baselines and Model Variants.",
"We summarize the baselines compared in our experiments for both unsupervised and semi-supervised NID.",
"Our 260 BANKING StackOverflow M-CID Methods NMI ARI ACC NMI ARI ACC NMI ARI ACC unsupervised GloVe-KM 48.75 12.74 27.92 21.79 4.54 24.26 46.40 35.57 46.99 GloVe-AG 52.76 14.41 31.18 23.45 4.85 24.48 51.23 32.57 42.35 SAE-KM 60.12 24.00 37.38 48.72 23.36 37.16 51.03 43.51 52.95 SAE-DEC 62.92 25.68 39.35 61.32 21.17 57.09 50.69 44.52 53.07 SAE-DCN 62.94 25.69 39.36 61.34 34.98 57.09 50.69 44.52 53.07 BERT-KM 36.38 5.38 16.27 11.60 1.60 13.85 37.37 14.02 33.81 MTP (Ours) 77.32 47.33 57.99 63.85 48.71 66.18 72.40 53.04 68.94 MTP-CLNN (Ours) 81.80 55.75 65.90 78.71 67.63 81.43 79.95 66.71 79.14 Table 2: Performance on unsupervised NID.",
"implementation is based on Zhang et al. (2021b).",
"Unsupervised baselines.",
"(1) GloVe-KM and (2) GloVe-AG are based on GloVe (Pen-nington et al., 2014) embeddings and then evaluated with k -means (MacQueen et al., 1967) or agglomerative clustering (Gowda, 1984) respectively.",
"(3) BERT-KM applies k means on BERT embeddings.",
"(4) SAE-KM 1 For fair comparison, the baselines are re-run with TEX-TOIR: https://github.com/thuiar/TEXTOIR , and hence some results are different from those reported in Lin et al. (2020); Zhang et al. (2021c).",
"adopts k -means on embeddings of stacked auto-encoder.",
"(5) Deep Embedding Clustering (SAE-DEC) (Xie et al., 2016) and (6) Deep Clustering Network (SAE-DCN) (Yang et al., 2017) are unsupervised clustering methods based on stacked auto-encoder.",
"Semi-supervised baselines.",
"(1) BERT-KCL (Hsu et al., 2018) and (2) BERT-MCL (Hsu et al., 2019) employs pairwise similarity task for semi-supervised clustering.",
"(3) BERT-DTC (Han et al., 2019) extends DEC into semi-supervised scenario.",
"(4) CDAC+ (Lin 261",
"et al., 2020) employs a pseudo-labeling process.",
"(5) Deep Aligned Clustering (DAC) (Zhang et al., 2021c) improves Deep Clustering (Caron et al., 2018) by aligning clusters between iterations.",
"Our model variants include MTP and MTP-CLNN, which correspond to applying k means on utterance representations learned in stage 1 and stage 2 respectively.",
"Further, we continue to train a DAC model on top of MTP to form a stronger baseline MTP-DAC for semi-supervised NID.",
"Implementation.",
"We take pre-trained bert-base-uncased model from Wolf et al. (2019) 2 as our base model and we use the [CLS] token as the BERT representation.",
"For MTP, we first train until convergence on the external dataset, and then when training on D labeledknown , we use a development set to validate early-stopping with a patience of 20 epochs following Zhang et al. (2021c).",
"For contrastive learning, we project a 768-d BERT embedding to an 128-d vector with a two-layer MLP and set the temperature as 0 .",
"07 .",
"For mining nearest neighbors, we use the inner product method 2 https://github.com/huggingface/ transformers provided by Johnson et al. (2017) 3 .",
"We set neighborhood size K = 50 for BANKING and M-CID, and K = 500 for StackOverflow, since we empirically find that the optimal K should be roughly half of the average size of the training set for each class (see Section 4.4).",
"The neighborhood is updated every 5 epochs.",
"For data augmentation, the random token replacement probability is set to 0 .",
"25 .",
"For model optimization, we use the AdamW provided by Wolf et al. (2019).",
"In stage 1, the learning rate is set to 5 e 5 .",
"In stage 2, the learning rate is set to 1 e 5 for BANKING and M-CID, and 1 e 6 for StackOverflow.",
"The batch sizes are chosen based on available GPU memory.",
"All the experiments are conducted on a single RTX-3090 and averaged over 10 different seeds.",
"More details are provided in the Appendix.",
"Unsupervised NID.",
"We show the results for unsupervised NID in Table 2. First, comparing the performance of BERT-KM with GloVe-KM and SAE-KM, we observe that BERT embedding performs worse on NID even though it achieves better performance on NLP benchmarks such as GLUE, which manifests learning task-specific knowledge is important for NID.",
"Second, our proposed pre-3 https://github.com/facebookresearch/ faiss 262",
"training method MTP improves upon baselines by a large margin.",
"Take the NMI score of BANKING for example, MTP outperforms the strongest baseline SAE-DCN by 14 .",
"38% , which demonstrates the effectiveness of exploiting both external public datasets and unlabeled internal utterances.",
"Furthermore, MTP-CLNN improves upon MTP by around 5% in NMI, 10% in ARI, and 10% in ACC across different datasets.",
"Semi-supervised NID.",
"The results for semi-supervised NID are shown in Table 3. First, MTP significantly outperforms the strongest baseline DAC in all settings.",
"For instance, on M-CID, MTP achieves 22 .",
"57% improvement over DAC in NMI.",
"Moreover, MTP is less sensitive to the proportion of labeled classes.",
"From KCR = 75% to KCR = 25% on M-CID, MTP only drops 8 .",
"55% in NMI, as opposed to about 21 .",
"58% for DAC.",
"The less performance decrease indicates that our pretraining method is much more label-efficient.",
"Furthermore, with our proposed contrastive learning, MTP-CLNN consistently outperforms MTP and the combined baseline MTP-DAC.",
"Take BANKING with KCR = 25% for example, MTP-CLNN improves upon MTP by 4 .",
"11% in NMI while surpassing MTP-DAC by 2 .",
"63% .",
"A similar trend can be observed when LAR = 50% , and we provide the results in the Appendix.",
"Visualization.",
"In Fig. 3, we show the t-SNE visualization of clusters with embeddings learned by two strongest baselines and our methods.",
"It clearly shows the advantage of our methods, which can produce more compact clusters.",
"Results on other datasets can be found in the Appendix.",
"To further illustrate the effectiveness of MTP, we conduct two ablation studies in this section.",
"First, we compare MTP with the pre-training method employed in Zhang et al. (2021c), where only internal labeled data are utilized for supervised pretraining (denoted as SUP).",
"4 In Fig. 4, we show the results of both pre-training methods combined with CLNN with different proportions of known classes.",
"Notice that when KCR = 0 there is no pretraining at all for SUP-CLNN.",
"It can be seen that MTP-CLNN consistently outperforms SUP-CLNN.",
"Furthermore, the performance gap increases while KCR decreases, and the largest gap is achieved 4 Notice that we make a simple modification to their pretraining to optimize the entire model rather than the last few layers for fair comparison.",
"when KCR = 0 .",
"This shows the high effectiveness of our method in data-scarce scenarios.",
"Second, we decompose MTP into two parts: supervised pre-training on external public data (PUB) and self-supervised pre-training on internal unlabeled data (MLM).",
"We report the results of the two pre-training methods combined with CLNN as well as MTP in Table 4. We can easily conclude that either PUB or MLM is indispensable and multi-task pre-training is beneficial.",
"Number of Nearest Neighbors.",
"We conduct an ablation study on neighborhood size K in Fig. 5. We can make two main observations.",
"First, although the performance of MTP-CLNN varies with different K , it still significantly outperforms MTP (dashed horizontal line) for a wide range of K .",
"For example, MTP-CLNN is still better than MTP when K = 50 on StackOverflow or K = 200 on BANKING.",
"Second, despite the difficulty to search for K with only unlabeled data, we empirically find an effective estimation method, i.e. to choose K as half of the average size of the training set for each class 5 .",
"It can be seen that the estimated K 60 on BANKING and K 40 on M-CID (vertical dashed lines) lie in the optimal regions, which shows the effectiveness of our empirical estimation method.",
"Exploration of Data Augmentation.",
"We compare Random Token Replacement (RTR) used in our experiments with other methods.",
"For instance, dropout is applied on embeddings to provide data augmentation in Gao et al. (2021), randomly shuffling the order of input tokens is proven to be effective in Yan et al. (2021), and EDA (Wei and Zou, 2019) is often applied in text classification.",
"Furthermore, we compare with a Stop-words Replacement (SWR) variant that only replaces the stop-words with other random stop-words so it minimally af-5 We presume prior knowledge of the number of clusters.",
"There are some off-the-shelf methods that can be directly applied in the embedding space to determine the optimal number of clusters (Zhang et al., 2021c).",
"fects the intents of utterances.",
"The results in Table 5 demonstrate that (1) RTR and SWR consistently outperform others, which verifies our hypothesis in Section 3.2.",
"(2) Surprisingly, RTR and SWR perform on par with each other.",
"For simplicity, we only report the results with RTR in the main experiments.",
"We have provided simple and effective solutions for two fundamental research questions for new intent discovery (NID): (1) how to learn better utterance representations to provide proper cues for clustering and (2) how to better cluster utterances in the representation space.",
"In the first stage, we use a multi-task pre-training strategy to exploit both external and internal data for representation learning.",
"In the second stage, we perform contrastive learning with mined nearest neighbors to exploit self-supervisory signals in the representation space.",
"Extensive experiments on three intent recognition benchmarks show that our approach can significantly improve the performance of NID in both unsupervised and semi-supervised scenarios.",
"There are two limitations of this work.",
"(1) We have only evaluated on balanced data.",
"However, in real-world applications, most datasets are highly imbalanced.",
"(2) The discovered clusters lack interpretability.",
"Our clustering method can only assign a cluster label to each unlabeled utterance but cannot generate a valid intent name for each cluster.",
"We would like to thank the anonymous reviewers for their valuable comments.",
"This research was supported by the grants of HK ITF UIM/377 and PolyU DaSAIL project P0030935 funded by RGC."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"method",
"result",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"other"
] |
[
"The uniform information density (UID) hypothesis, which posits that speakers behaving optimally tend to distribute information uniformly across a linguistic signal, has gained traction in psycholinguistics as an explanation for certain syntactic, morphological, and prosodic choices.",
"In this work, we explore whether the UID hypothesis can be operationalized as an inductive bias for statistical language modeling.",
"Specifically, we augment the canonical MLE objective for training language models with a regularizer that encodes UID.",
"In experiments on ten languages spanning five language families, we find that using UID regularization consistently improves perplexity in language models, having a larger effect when training data is limited.",
"Moreover, via an analysis of generated sequences, we find that UID-regularized language models have other desirable properties, e.g., they generate text that is more lexically diverse.",
"Our results not only suggest that UID is a reasonable inductive bias for language modeling, but also provide an alternative validation of the UID hypothesis using modern-day NLP tools.",
"Language has been hypothesized to follow certain information-theoretic constraints.",
"One of the most famous of these constraints is the uniform information density (UID) hypothesis (Fenk and Fenk, 1980; Jaeger, 2010), which states that, subject to the rules of the grammar, speakers aim to distribute information density across a linguistic signal as uniformly as possible.",
"That is, speakers behaving optimally should structure their utterances such that the differences between the peaks and troughs in information are minimized.",
"In the psycholinguistics literature, the UID hypothesis has been used to explain a variety of linguistic phenomena ranging from how we shorten the phonetic duration of more-predictable linguistic",
"units (Aylett and Turk, 2004) to when we decide to use optional syntactic relativizers (Levy and Jaeger, 2007), among other phenomena (Bell et al., 2003; Frank and Jaeger, 2008).",
"These studies often use language models to estimate the information density of linguistic units, taking observations of low variation of information density in well-formed utterances as evidence for the UID hypothesis.",
"In this paper, we propose a new experimental paradigm that uses modern-day NLP models to test the UID hypothesis.",
"Whereas prior work has used language modeling as a tool for observing UID, 1 we explore the conversecan UID be used as a tool to train better language models?",
"Specifically, if the UID hypothesis is true, then we should be able to operationalize UID as a regularizer to help train language models.",
"Moreover, observing lower perplexity in language models trained with this regularization would imply that the concept of UID is a good inductive bias for language modeling, thereby providing a new type of evidence for the UID hypothesis at scale .",
"In experiments, we indeed find such evidence: across a variety of languages and dataset sizes, UID regularization consistently improves performance, having a larger effect when training data is limited.",
"Moreover, we observe thatin comparison with their unregularized counterpartsUID-regularized language models are (1) higher entropy while achieving the same (or better) test set perplexity and (2) generate text that is longer and more lexically diverse.",
"Our work is the first to explore the interaction between UID and training modern-day neural language models, and our findingsthat a cognitively motivated objective can improve language model performanceopen up new avenues for testing other psycholinguistic hypotheses in a similar framework.",
"The task of language modeling aims to estimate a model of the probability of observing any given string in (a subset of) natural language.",
"Formally, a language model p is an (unconditional) probability distribution over sequences of words w = (cid:104) w 1 , w 2 , . . . (cid:105) , where w consists of tokens from some vocabulary and begins and ends with special tokens BOS and EOS , respectively.",
"Today's language models are typically parameterized by neural networks (e.g., transformers (Vaswani et al., 2017)), that follow a local-normalization scheme.",
"Specifically, the model provides a conditional distribution over the vocabulary at each time step; we can then compute the proba-1 On its own, the term UID' is formally an attribute of a linguistic signal.",
"bility of an entire sequence w as: p ( w ) = | w | (cid:89) t =1 p ( w t | w <t ) (1) where are the parameters of the model and we use w <t to represent the first t 1 tokens of w .",
"Parameters are estimated by optimizing over some objective L ( ) .",
"The standard objective for language modeling is the negative log-likelihood of a dataset W under the model: L ( ) = (cid:88) w W log p ( w ) (2) Subsequently, we drop explicit dependence on when it is obvious from context.",
"To assess the goodness of fit of a model p , we typically evaluate its perplexity on some held-out dataset W test , where perplexity (PPL) is defined as PPL( p ) = exp (cid:32) (cid:88) w W test 1 | w | log p ( w ) (cid:33) (3) Note that under this definition of perplexity, our evaluation metric is slightly different than the training objective; the former computes an average over each sequence while the later treats all tokens equally, regardless of the length of the sequence in which they are present.",
"Communication via natural language is a complicated and nuanced process that takes place under a host of cognitive and environmental constraints.",
"As a result, speakers have to make (perhaps subconscious) choices to best navigate this communicative dance.",
"A rational speaker would use these choices to optimize the communicative properties of their utterances.",
"One such locus of optimization is outlined by the Uniform Information Density (UID) hypothesis.",
"At its core, the UID hypothesis aims to explain certain phenomena in human language processing using an information-theoretic approach: we can view language as a transfer of information, which is transmitted with a certain density through a communication channel.",
"The UID hypothesis posits that speakers that behave optimally will structure their utterances to avoid peaks and troughs in this information density (Aylett and Turk, 2004; Levy and Jaeger, 2007; Jaeger, 2010).",
"More formally stated: Within the bounds defined by grammar, speakers prefer utterances that distribute information uniformly across the signal (information den-sity). Where speakers have a choice between several variants to encode their message, they prefer the variant with more-uniform information density (ceteris paribus) (Jaeger, 2010).",
"To better understand the UID hypothesis, consider the concrete example of syntactic reduction ( that mentioning) from Jaeger (2010), which we show graphically in Figure 1 and also describe below.",
"Ex.",
"A .",
"My boss confirmed [that] we are crazy.",
"Ex.",
"B .",
"My boss thinks [that] I am crazy.",
"In both these sentences, the use of the relativizer that is syntactically optionalat the onset of a relative clause (RC), speakers can, but do not have to, include the relativizer.",
"Many speakers, however, would argue that the sentence flows better with the relativizer included in Example A and the relativizer omitted in Example B. The UID hypothesis provides a potential explanation for this phenomenon.",
"When a RC is used without a relativizer, the first word of the RC conveys two pieces of information: both the onset of the RC, as well as part of the RC's internal contents.",
"In Example A, many speakers would find that the information density of the first word in the RC, we , is high, and so adding in the relative clause distributes the information over two words, making it easier to parse.",
"In Example B, the information density of the first word in the RC, I , is lower relatively, and so we do not need to (or it is not as beneficial to) include the relativizer.",
"Now that we better understand what the UID hypothesis attempts to explain, how might we operationalize UID and find quantitative evidence of the pressure for it in language?",
"First, to quantify the amount of information conveyed by a word, we turn to the most basic information-theoretic definition: the information conveyed by a word w in context is its Shannon information content (Shannon, 1948), also called surprisal .",
"Ideally, this surprisal would be measured using the true distribution over human language.",
"Because we do not have access to such a distribution, we often estimate it using a statistical language model.",
"That is, given a statistical language model p , which estimates the probability of a word given its context, the surprisal u ( w t ) of word w t is defined as the following: u ( w t ) = log p ( w t | w <t ) (4) This setup provides a natural approach to exploring how UID might manifestif the UID hypothesis is true, then we should observe that variation in surprisal, as estimated by a language model, is minimized in natural language.",
"Using this approach, prior work has accumulated evidence for UID across various levels of linguistic representation (Pluymaekers et al., 2005; Bell et al., 2009, inter alia ).",
"As some of the earliest examples, Aylett and Turk (2004) showed that linguistic units that had high surprisal according to a tri-gram language model were uttered with longer syllable durations, and Levy and Jaeger (2007) found that for RCs in which the first word had higher surprisal, relativizers were more likely to be used in the RC during actual speech.",
"Further examples are given in our related work section (7).",
"While prior work has shown evidence that UID can help explain many of the choices we make when generating language, to the best of our knowledge, operationalizations of UID have not been explicitly employed as part of the training objective in modern-day NLP models.",
"This raises the simple question that is central to our paper: Can UID serve as an inductive bias for training statistical language models?",
"In an effort to answer this question, we present a scheme for incorporating operationalizations of UID into the language model training objective.",
"Formally, we augment the canonical maximum likelihood estimation objective 2 in eq.",
"(2) with UID 2 Note that the maximum likelihood estimation objective minimizes (over w W ) log p ( w t | w <t ) , i.e., surprisal.",
"Although such an objective may indirectly minimize peaks and dips in surprisal across a sequence simply by pushing them towards 0, it does not explicitly include any sequence level penalty for even surprisal distribution.",
"where > 0 is the strength coefficient of the regularizer.",
"We consider two natural operationalizations of UIDinspired by Collins (2014)as regularizers for training language models: Variance Regularizer.",
"UID concerns the distribution of information in language production, and so a natural measure of this behavior is the variance of surprisals.",
"Thus, we first consider a regularizer that penalizes high variance among the surprisals of words in a given sequence: R ( ) = 1 | w | | w | (cid:88) t =1 ( u ( w t ) ) 2 (6) where = 1 | w | (cid:80) | w | t =1 u ( w t ) .",
"Note that here, and in our subsequent regularizers, we estimate u ( ) via eq.",
"(4) using our model p .",
"Local Consistency.",
"Next, we consider a local consistency regularizer that encourages the surprisals of adjacent words to have similar magnitude: R ( ) = 1 | w | 1 | w | 1 (cid:88) t =1 (cid:16) u ( w t ) u ( w t +1 ) (cid:17) 2 (7) This regularizer is also a reasonable operationalization of UIDif every surprisal is similar to its neighbor, then the density of information in the sequence will be close to uniform.",
"Though we focus on these two regularizers, other operationalizations of UID certainly exist.",
"For example, a similar variant of the above regularizers is the max regularizer (Meister et al., 2020a), which penalizes the highest surprisal in a sentence.",
"3 Furthermore, UID may also be defined in terms of parse steps (Hale, 2001) or structural integrations (Gibson, 2000), as well as in spoken language in the form of filler words like uh and um or word repetition during challenging lexical retrieval.",
"We consider these operationalizations (as well as the broader discussion of how to operationalize UID) as future work.",
"3 We also tried this operationalization in preliminary experiments, but results were not as strong as the variance or local consistency regularizers.",
"To empirically evaluate UID regularization, we train various language models with the UID-regularized objective (eq.",
"(5)) using the following experimental setup.",
"Datasets.",
"We employ datasets from multiple languages and of varying sizes.",
"We use the EuroParl corpus (Koehn, 2005)a multi-lingual dataset of discussions from the European Parliament that has been commonly used for language modeling (Cot-terell et al., 2018; Mielke et al., 2019)since it is roughly semantically controlled in that all utterances are presumably about the same topics.",
"We use EuroParl v7 download from the ACL 2014 SMT Workshop 4 and perform a 801010 train-dev-test split on all five languagesCzech, English, French, German, and Spanishwhich yields 46.7, 42.2, 47.2, 51.3, and 12.4 million training tokens for each language respectively.",
"Moreover, we experiment on languages from several language families; the five languages in Europarl that we consider are all Indo-European, and so we look to Wiki-40B (Guo et al., 2020), which contains Wikipedia dumps of a wide range of languages.",
"We choose a set of diverse languages with training set sizes relatively similar to that of EuroParl: Finnish (a Uralic language; 59.3M training tokens), Indonesian (an Austronesian language; 45.7M training tokens), and Turkish (a Turkic language; 38.1M training tokens).",
"To explore performance on lower-resource languages, we additionally experiment with Swahili 5 (a Niger-Congo language; 6.3M training tokens) and Tagalog (an Austronesian language; 4.2M training tokens).",
"For all languages, we performed tokenization using the MosesTokenizer.",
"6 Train, dev, and test set splits are shown in Table 5 in the Appendix.",
"Model Framework and Architecture.",
"For our experiments, we use the fairseq library (Ott et al., 2019), a standard sequence modeling toolkit in PyTorch.",
"As our model, we use fairseq 's default transformer (with six decoder layers and eight 4 http://statmt.org/wmt14/ translation-task.html 5 Since there are no Niger-Congo languages in Wiki-40B, we perform a 80-10-10 split on Swahili Wikidumps (see https://github.com/google-research/bert/blob/master/multilingual.md ).",
"6 https://pypi.org/project/ mosestokenizer/ attention heads), which achieves competitive 7 language modeling performance (although the purpose of our paper is not to achieve or compare with the state of the art).",
"For all experiments, we followed the data-preprocessing scripts and recommended hyperparameters provided in fairseq 's language modeling module; more detailed information can be found on the Github page.",
"8 UID Regularizers.",
"For UID regularization, we experiment with the variance (eq.",
"(6)) and local consistency regularizers (eq.",
"(7)).",
"We found in preliminary experiments that effective regularization strengths were often near = 0 .",
"01 , and so we performed a grid search over values within an order of magnitude around = 0 .",
"01 : { 0 .",
"006 , 0 .",
"008 , 0 .",
"01 , 0 .",
"02 , 0 .",
"03 , 0 .",
"04 , 0 .",
"05 } .",
"We choose the model with the lowest dev loss to evaluate on the test set.",
"In this section, we report results for models trained under the UID-regularized objective.",
"We find that UID regularization consistently improves perplexity for models trained on various languages (6.1) and dataset sizes (6.2).",
"Additionally, we examine properties of text generated by UID-regularized models (6.3) and analyze the relationship between our operationalization of UID and perplexity (6.4).",
"Table 1 shows the results of UID-regularized language models trained on various languages from EuroParl and Wiki-40B, and includes statistical significance of changes in perplexity, as compared with baselines, computed using permutation tests 9 (Efron and Tibshirani, 1994).",
"For all languages, UID regularization significantly improves perplexity for at least one of the two regularizers.",
"Further-7 On Wikitext-103, the largest dataset we train on (103 million tokens), we achieve a competitive perplexity of 29.89 (c.f. Merity et al. (2018)).",
"For smaller datasets, we tried a smaller transformer architecture of four decoder layers and four attention heads, but it did not perform better than the six decoder layer and eight attention heads version, suggesting that this architecture was not too large for the datasets we use in this paper (even the Tagalog dataset we use is larger than the commonly used Penn Treebank and WikiText-2).",
"8 https://github.com/pytorch/fairseq/ tree/master/examples/language_model 9 http://www2.stat.duke.edu/~ar182/rr/ examples-gallery/PermutationTest.html LANGUAGE (# train tokens) Perplexity CZECH (12.4M) Baseline (no UID) 47.47 + UID: variance 47.24 ( 0 . 5% ) + UID: local consistency 47.08 ( 0 . 8% ) ENGLISH (46.7M) Baseline (no UID) 21.34 + UID: variance 21.08 ( 1 . 2% ) + UID: local consistency 21.19 ( 0 . 7% ) FINNISH (59.3M) Baseline (no UID) 51.58 + UID: variance 51.30 ( 0 . 5% ) + UID: local consistency 51.49 ( 0 . 2% ) FRENCH (51.3M) Baseline (no UID) 17.08 + UID: variance 17.02 ( 0 . 4% ) + UID: local consistency 17.03 ( 0 . 3% ) GERMAN (42.3M) Baseline (no UID) 26.62 + UID: variance 26.50 ( 0 . 4% ) + UID: local consistency 26.45 ( 0 . 6% ) INDONESIAN (45.7M) Baseline (no UID) 53.96 + UID: variance 53.66 ( 0 . 6% ) + UID: local consistency 53.70 ( 0 . 5% ) SPANISH (47.2M) Baseline (no UID) 22.54 + UID: variance 22.37 ( 0 . 8% ) + UID: local consistency 22.44 ( 0 . 4% ) SWAHILI (6.3M) Baseline (no UID) 40.45 + UID: variance 39.79 ( 1 . 6% ) + UID: local consistency 39.44 ( 2 . 5% ) TAGALOG (4.2M) Baseline (no UID) 80.48 + UID: variance 78.40 ( 2 . 5% ) + UID: local consistency 78.12 ( 2 . 9% ) TURKISH (38.1M) Baseline (no UID) 66.13 + UID: variance 65.70 ( 0 . 7% ) + UID: local consistency 66.06 ( 0 . 1% ) Table 1: UID regularizers improve perplexity for multiple languages.",
"more, UID regularization (under the best performing ) never leads to worse perplexity.",
"These results suggest that incorporating UID operationalizations into a model's training objective leads to a better model of language, substantiating uniform information density as a valid inductive bias.",
"Moreover, the improvement for many languages corroborates the expectation that UID should, due to its information theoretic nature, hold across languages (Jaeger and Tily, 2011).",
"Notably, we observe the largest improvements (1.62.9%) in perplexity in Table 1 for the lowest resource languages, Tagalog and Swahili (with 4.2 and 6.3 million training tokens respectively).",
"Conversely, improvement was most marginal (0.2 0.5%) on the highest-resource languages, French and Finnish (51.3 and 59.3 million training tokens respectively).",
"To remove language as a confounding factor from this observation, we perform a controlled analysis of the effects of UID regularization as a function of dataset size.",
"We focus on English; in addition to the result on English EuroParl 2014 from Table 1, which contains 47.0 million training tokens, we experiment with the smaller monolingual English dataset from the 2006 NAACL Workshop on Statistical Machine Translation (WMT'06), 10 which has 17.0M tokens in its training set, as well as the larger Wikitext-103 benchmark (Merity et al., 2017), which contains 103 million tokens in its training set.",
"Table 2 shows the perplexities for models with and without UID regulariztion for these three datasets.",
"As suggested by earlier results, improvements were strongest for the WMT'06 dataset, with an improvement of 1.4 perplexity points for the variance regularizer and 0.9 PPL points for local consistency.",
"For the larger EuroParl and WT-103 datasets, on the other hand, improvement was more modest, ranging from 0.1 to 0.3 perplexity points.",
"As further confirmation that UID regularization has a greater impact on smaller datasets, we perform an ablation study that roughly controls for language content by training models on the subsets of the same dataset.",
"For this ablation, we take subsets of 2, 4, 8, 12, 16, 24, and 32 million sentences from the 47 million sentences in English EuroParl, 10 We downloaded the given train-dev-test splits from https://www.statmt.org/wmt06/ .",
"and observe how much the UID regularizers improve perplexity for each training dataset size.",
"As shown in Figure 2, the results tell the same story as Table 2UID regularization improves perplexity more for smaller datasets.",
"These results are consistent with the expectation that models trained on smaller datasets are more likely to overfit and could therefore benefit more from regularization (Melis et al., 2018).",
"As it is possible that the models trained on smaller datasets could benefit from any kind of regularization, we experiment with label smoothing (Szegedy et al., 2016), another regularization technique that similarly augments the training objective with a penalty.",
"Table 4 shows these results for models trained on WMT'06 and EuroParl with label smoothingour experiments indicate that, across the board, label smoothing leads to worse perplexity compared with baseline models.",
"11 We take this result as further evidence that the improvement from UID regularization stems from the UID hypothesis as a valid inductive bias, rather than simply a need for any kind of regularization when training on smaller datasets.",
"11 This negative result for applying label smoothing to language modeling is consistent with prior empirical findings (Mller et al., 2019; Gao et al., 2020; Meister et al., 2020b).",
"Unconditional models of language have been observed to produce generic text that can be short, bland, or repetitive (Fan et al., 2018; Kulikov et al., 2019; Holtzman et al., 2020), and so in this subsection we investigate how UID regularization might affect these characteristics in generated text.",
"For these experiments, we consider the baseline model, the variance-regularized model, and the local consistency-regularized model trained on English EuroParl.",
"To obtain text samples, we generate samples by sequentially sampling tokens according to the model's predicted distribution until the end-of-sequence ( EOS ) token is sampled, i.e., ancestral sampling.",
"Note that for language model p , this sampling scheme is equivalent to directly sampling y p .",
"We obtain 10,000 samples for each model and report statistics in Table 3.",
"We analyze each set of generated sentences for several metrics.",
"First, we compute the average length of generated sentences.",
"Next, we evaluate the lexical diversity of generated texts by computing the percent of unique n -grams for n { 2 , 3 , 4 } .",
"Finally, sampling from a model also gives us a means for estimating the language model's entropy: H( p ) = (cid:88) y supp( p ) p ( y ) log p ( y ) (8) = E y p (log p ( y )) (9) In the case of language models, supp( p ) is the set of all strings that can be generated from the model's vocabulary V .",
"As this is exponentially large in |V| , directly computing H( p ) is intractable.",
"We can use its equivalence to eq.",
"(9), however, to estimate H( p ) with a simple Monte-Carlo estimator: H( p ) = 1 KK (cid:88) k =1 log p ( y ( k ) ) (10) where we sample y ( k ) p for k = 1 , . . . , K .",
"Table 3 shows results from UID-regularized models compared with the baseline.",
"The models trained with the variance and local consistency regularizers exhibit a preference for longer sequence length and higher lexical diversity.",
"Additionally, the entropy estimates of these models are notably higher, which, following the principle of maximum entropy (Jaynes, 1957), 12 can be seen as an additional advantage of UID-regularized models over their unregularized counterparts.",
"To take a closer look at how UID regularization affects language models, we examine the relationship between minimizing perplexity and UID behavior, where we quantify UID behavior as the variance of models' surprisals.",
"We consider models trained on the English EuroParl dataset with the variance regularizer at strengths { 0 .",
"01 , 0 .",
"03 , 0 .",
"05 , 0 .",
"07 , 0 .",
"09 } and our baseline (which is equivalent to = 0 ), For further comparison, we also train a model with = 0 .",
"01 to observe the effects of penalizing UID behavior.",
"We report results on the EuroParl test set in Figure 3.",
"We observe that the model trained with a UID penalty (negative ) indeed exhibits worse perplexity and UID behavior (variance of surprisals) on the test set.",
"And as we might expect, models trained with higher exhibit UID behavior more strongly, as our quantification is part of their training objective.",
"Overall, from = 0 .",
"01 to = 0 .",
"05 , both 12 The principle of maximum entropy states that the probability distribution that best represents the current knowledge state is the one with the largest entropy.",
"perplexity and UID behavior are positively correlated with , but when we optimize too much for UID ( 0 . 07 ), there is a trade-off in which model perplexity begins to increase.",
"We also observe an intriguing phenomenon in Figure 3.",
"Models that achieve similar perplexity can have substantially different UID behavior values on the test set.",
"Specifically, the = 0 and = 0 .",
"07 models, which have almost the same perplexity, have variance of surprisals of 17.8 and 15.8a difference of more than ten percent!",
"If such models with similar perplexity can have varying definitions of what constitutes good UID behavior, then prior work, which has drawn conclusions on UID based on surprisals computed by a single model (Aylett and Turk, 2004; Levy and Jaeger, 2007; Jain et al., 2018), may need revisiting.",
"As this direction is outside the scope of the present paper, we leave it as future work.",
"We discussed how operationalizing UID for language modeling leads to better models in a wide variety of settings.",
"These results both provide a new form of evidence for the UID hypothesis and build on prior work exploring UID in modern-day NLP models.",
"Evidence for the UID hypothesis.",
"Our work extends the body of psycholinguistic research on uniform information density, which has largely corroborated the UID hypothesis by providing evidence that variation in surprisal, as estimated by a language model, is minimized in natural language.",
"In addition to early studies that used this approach to find evidence for UID in syntactic reduction (Levy and Jaeger, 2007), morphosyntactic contractions (Frank and Jaeger, 2008), and prosodic structure (Aylett and Turk, 2004), the same line of reasoning has been used by more recent work exploring a variety of other linguistic properties.",
"These studies have found that word duration can be predicted by syntactic surprisal (Demberg et al., 2012; Moore-Cantwell, 2013), construction probability (Kuper-man and Bresnan, 2012), informativity (Seyfarth, 2014), and contextual predictability (Jurafsky et al., 2001; Bell et al., 2003; Gahl and Garnsey, 2004).",
"They have also observed that word length is re-flected by conceptual complexity (Lewis and Frank, 2016); word order choice can be predicted by processing cost (Bloem, 2016; Sikos et al., 2017); phonological patterns can be shaped by word predictability (Hall et al., 2018); and UID computed at the sequence level predicts human preferences for syntactic alternatives of the same sentence.",
"Whereas the above prior work has used language modeling as a tool for measuring UID, our paper has explored the exact conversewe have asked whether UID, operationalized as a regularizer, can be used as a tool for training better language models.",
"We argue that if the UID hypothesis holds as a general principle, then we should be able to exploit it as a training criterion that improves language modeling.",
"And accordingly, our results show thatacross a variety of languages and dataset sizesregularization for UID did indeed improve perplexity, which we view as an alternative kind of evidence for the UID hypothesis at scale.",
"Notably, Figure 3 at first could appear to contradict the UID hypothesis, since models with better UID behavior did not always achieve better perplexity.",
"We do not consider this as evidence against the UID hypothesis, however.",
"Rather, we posit that when is too large, we may be optimizing for UID to the point of tending towards unnatural languagea perfectly uniform dispersion of information across an utterance may come at the cost of strange lexical choices.",
"In this light, such a trade-off should be somewhat expected.",
"UID in modern NLP.",
"In addition to the traditional line of psycholinguistic work, there have also been more-recent studies on UID in the context of modern NLP, although this work is relatively sparse.",
"Rubino et al. (2016) leverage information density encoded as surprisal at the word, part of speech, and syntax levels to help build a state-of-the-art model for mixed-domain translationese detection.",
"Jain et al. (2018) incorporate UID measures across sentences into models designed to detect natural versus manipulated text.",
"Perhaps the work that is most related to ours, Meister et al. (2020a), leverages UID to explain why beam search is an effective decoding algorithm and uses operationalizations of UID during beam search to alleviate problems with decoding poorly calibrated machine translation models.",
"Whereas Meister et al. (2020a) focuses on decoding, our work shows the first evidence that UID can be operationalized to aid training.",
"In closing, we have proposed encoding uniform information density as a regularizer for training language modelsa novel manner of incorporating an established psycholinguistic theory into modern statistical language modeling.",
"In experiments on a range of languages and dataset sizes, UID regularization consistently improves perplexity over baselines.",
"Our results suggest that UID is a valid inductive bias for improving the canonical maximum likelihood objective in language modeling, providing a new, alternative type of evidence that supports the UID hypothesis at scale.",
"Our work opens the door to future research directions such as using similar techniques to validate other psycholinguistic phenomena, applying UID regularization in conditional language generation tasks, and exploring how UID regularized models perform in downstream NLP applications.",
"Language models have various ethical, environmental, and financial concerns.",
"We cannot do justice to them here, but do see Bender et al. (2021) for a pointer.",
"We do not foresee any additional ethical concerns with the contributions made in our work beyond those discussed in Bender et al. (2021).",
"We thank Roger Levy for feedback in the middle stages of our work and Tiago Pimentel, David Re-itter, Tal Linzen, and Slav Petrov for feedback on the manuscript."
] | [
"abstain",
"objective",
"objective",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"objective",
"objective",
"method",
"abstain",
"result",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"objective",
"other",
"other",
"other",
"objective",
"result",
"result",
"other",
"method",
"objective",
"other",
"other",
"abstain",
"other",
"other",
"other",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"Allowing users to interact with multi-document summarizers is a promising direction towards improving and customizing summary results.",
"Different ideas for interactive summarization have been proposed in previous work but these solutions are highly divergent and incomparable.",
"In this paper, we develop an end-to-end evaluation framework for interactive summarization, focusing on expansion-based interaction, which considers the accumulating information along a user session.",
"Our framework includes a procedure of collecting real user sessions, as well as evaluation measures relying on summarization standards, but adapted to reflect interaction.",
"All of our solutions and resources are available publicly as a benchmark, allowing comparison of future developments in interactive summarization, and spurring progress in its methodological evaluation.",
"We demonstrate the use of our framework by evaluating and comparing baseline implementations that we developed for this purpose, which will serve as part of our benchmark.",
"Our extensive experimentation and analysis motivate the proposed evaluation framework design and support its viability.",
"Large bodies of texts on a topic oftentimes contain extensive information that is challenging for a potential reader to handle.",
"Traditionally, information seeking tasks, like search, question-answering (QA) and multi-document summarization (MDS), are single-round input-output processes that can serve the information seeker only to a limited extent.",
"This calls for an interactive setting where a user can guide the information gathering process.",
"For search and QA, this type of research has been gaining momentum recently in areas such as exploratory search (Marchionini, 2006) and conversational QA (Reddy et al., 2019).",
"For MDS, where interaction would allow a user to affect summary content, only sporadic works have been seen over the years (e.g., Leuski et al., 2003; Lin et al., 2010; Yan et al., 2011; Baumel et al., 2014; Christensen et al., 2014; Shapira et al., 2017; Handler and O'Connor, 2017).",
"A key gap in the development and adoption of interactive summarization (denoted here INTSUMM ) solutions is the lack of evaluation methodologies and benchmarks for meaningful comparison of systems, similarly to those for static (non-interactive) summarization (e.g., NIST, 2014).",
"The previous works on interactive or customizable summarization of multi-document sets are distinct, with proprietary evaluations that do not admit comparison.",
"Furthermore, the evaluation processes are often not scalable and replicable, or do not give a comprehensive enough assessment.",
"In this paper we develop an end-to-end evaluation framework for INTSUMM systems.",
"The framework starts with real user session collection on a system, via a concrete process of controlled crowdsourcing that we designed for this task.",
"The sessions are then measured to produce absolute scores for the system, allowing for robust system comparison.",
"Our framework supports a general notion of expansion-based interactive summarization, where the textual summary gradually expands in response to user interaction.",
"Figure 1 presents an INTSUMM system that we implemented to illustrate this notion (5.1).",
"To ensure our evaluation framework is sound, we developed the framework in multiple cycles accompanied by user studies and extensive crowdsourcing experimentation.",
"Our main contributions are as follows.",
"(1) Evaluation measures.",
"We propose a set of automatic and manual evaluation measures for INTSUMM systems, which build upon a combination of established notions in static summarization and interactive systems, and enable utilizing available multi-document summarization (MDS) Figure 1: Our INTSUMM web application, implemented for testing our evaluation framework.",
"datasets.",
"Our measures are aggregated over multiple interactive sessions and document sets to obtain an overall system evaluation.",
"In contrast to static summarization, our measures apply to the steps along the interaction to reflect the progress of information acquirement rather than just its final result.",
"This is done by converting an interactive session to a sequence of incrementally growing static summaries, and measuring the accumulating information gain with a recall metric.",
"See 3.",
"(2)",
"Crowdsourced session collection process.",
"Adequate INTSUMM system evaluation and comparison requires collecting realistic user sessions in a consistent manner, on which the measurements are conducted.",
"Previous work mostly turn to in-house user-studies, which are less replicable, not scalable, and not always easily attainable.",
"In contrast, standard crowdsourcing induces noise and overly tolerates subjective behavior, hindering replicability and comparability.",
"We describe a controlled crowdsourcing procedure that overcomes the above obstacles, making the evaluation process reliable and much more accessible for researchers interested in pursuing INTSUMM research.",
"See 4.",
"We demonstrate the use of our full evaluation framework on two INTSUMM systems that we implemented, which apply different algorithms but share a common user interface, with the DUC 2006",
"(Dang, 2006)",
"MDS dataset.",
"Analysis shows favorable results in terms of internal consistency between sessions, users, and different evaluation measures, indicating that our solutions may serve as a promising benchmark for future INTSUMM research.",
"See 5.",
"The evaluation procedures and systems are available publicly.",
"1 2 Background Traditional MDS has been researched extensively",
"(e.g. Goldstein et al., 2000b; Radev et al., 2004; Haghighi and Vanderwende, 2009; Yin and Pei, 2015).",
"It encompasses variants of query-focused summarization",
"(Dang, 2005), orienting the output summary around a given query",
"(e.g. Daum III and Marcu , 2006; Zhao et al., 2009; Cao et al., 2016; Feigenblat et al., 2017; Baumel et al., 2018), and incremental update summarization",
"(Dang and Owczarzak, 2008), generating a summary of a document set with the assumption of prior knowledge on an earlier set",
"(e.g. Li et al., 2008; Wang and Li, 2010; McCreadie et al., 2014; Zopf et al., 2016).",
"Evaluation approaches predominantly include automatic ROUGE",
"(Lin, 2004)",
"measurement, i.e. word overlap against reference summaries, and manual responsiveness",
"(Dang, 2006)",
"scores or pairwise comparison",
"(Zopf, 2018)",
"between summaries.",
"In the related QA task",
"(Voorhees et al., 1999), a system extracts an answer for a targeted question.",
"Similarly, in the interactive setting, a conversational QA",
"(Reddy et al., 2019)",
"system extracts answers to a series of interconnected questions with a clear informational goal.",
"To check correctness in both cases, a system answer is simply compared to the true answer via text-comparison.",
"On the contrary, in the exploratory style of INTSUMM , where the knowledge desired is less certain, evaluation must consider dynamically accumulating information.",
"Exploratory search",
"(Marchionini, 2006; White and Roth, 2009)",
"addresses the need for converting big data to knowledge via human-machine cooperation.",
"For example, interactive information retrieval",
"(Ingwersen, 1992)",
"focuses on fine-tuning document retrieval interactively, and complex-interactive-QA",
"(ciQA)",
"(Kelly and Lin, 2007)",
"involves interacting with a system to generate a passage that answers a complex question.",
"Evaluation is a major challenge in dealing with these tasks",
"(White et al., 2008; Palagi et al., 2017; Hendahewa and Shah, 2017).",
"Firstly, real users must use the system being evaluated by completing a task-appropriate assignment, requiring large-scale user studies that highly increase the cost and complexity of evaluation.",
"Furthermore, varying user behavior could mean distorted session comparison.",
"Then, a system is measured on the basis of its final outputs, mostly disregarding the evolvement of the interactional session.",
"Among interactive summarization, in the query-chain focused summarization task",
"(Baumel et al., 2014), a chain of queries yields a sequence of short summaries, each refraining from repeating content.",
"The task's evaluation relies solely on pre-defined sequences of queries with a respective reference summary per iteration",
"(laboriously prepared by experts)",
"that disregards previous outputs by the system.",
"Other interactive summarization systems, such as Christensen et al.",
"(2014); Shapira et al.",
"(2017), present a preassembled summary with several levels of detail, allowing a user to drill down to or expand on information of interest.",
"These works do not evaluate in a manner that is comparable to others, and do not consider information variation due to interaction.",
"They perform small-scale user-studies for preference between their system and static variants, or a single automatic assessment of the fully expanded final summary.",
"evaluation issues, specifically targeting the INTSUMM task, where the interaction-induced outputs are purely textual summary snippets of the input document set.",
"An INTSUMM system is evaluated by measuring its performance on multiple sessions produced as a result of human operation.",
"The input of a session, , is a set of documents, D , on which to explore.",
"A session comprises an automatically generated initial summary, 0 , and a sequence of user-posed requests, q i , and corresponding output responses, r i .",
"The responses can be viewed as expansions of 0 .",
"Consequently, the overall interactive summary resulting from defines a sequence of incrementally expanding snapshots [ 0 , 1 , . . . , | | ] where i = 0 S ij =1 r j is the union of accumulative",
"(summarized)",
"information presented to the user after i interactions.",
"Each snapshot may thus be regarded as a static summary, allowing static summarization measures to be applied on it.",
"For compared INTSUMM systems S 1 , ..., S m , we require at least u sessions of distinct users interacting with S i on each test document set D { D 1 , ..., D n } .",
"Assuming such sessions, we next define automatic and manual evaluation measures, and defer details on adequate session collection to 4.",
"Importantly, all measures are based on established evaluation mechanisms used in static summarization and interactive systems, that we extend or adapt for the INTSUMM setting, and that are practically linear in time to the length of the session sequence.",
"Together, the set of measures we define provide an encompassing assessment adequate for the evaluation of interactive summary systems.",
"Viewing a session as a sequence of incrementally expanding static summary snapshots, we would first like to obtain comparable scores for each static summary that will capture the information gained along the session up to the current interaction.",
"Existing static MDS benchmarks provide reference summaries at a single length for the purpose of evaluating a summary at a similar length.",
"This presumably means we would require a series of reference summaries that differ by small length gaps for the sequence of lengthening snapshots, which is diffi-cult and costly to produce.",
"To address this obstacle, we leverage a finding by Shapira et al.",
"(2018)",
"show-0.12 0.17 0.22 0.27 0.32 0.37 0.42 0.47 70 120 170 220 270 320 370 420 470 ROUGE1 R eca ll Word Length Topic 1 Topic 2 L o w e r i n t e r s ec ti on U pp e r i n t e r s ec ti on Figure 2: Example recall-curves of two sessions on an INTSUMM system.",
"ing that a reference summary of a single length can be used to relatively evaluate varying length summaries on a topic with a recall measure such as ROUGE.",
"Thus, utilizing existing MDS datasets is indeed possible for measuring information gain throughout a session's snapshot sequence.",
"Based on this observation we now define three indicators for system performance, first over a single session and then aggregated over all sessions of a system.",
"Per-session indicators.",
"(1)",
"To illustrate the gradual information gain along a session we adopt a recall-by-length curve",
"(Kelly and Lin, 2007; Lin, 2007), see for example Figure",
"2. The curve's x-axis is the snapshot word-length, chosen as the dominant factor affecting quality, as opposed to number of queries or interaction time, which are not necessarily comparable between sessions.",
"The y-axis is a summary content recall score, such as ROUGE-recall against constant reference summaries.",
"2 For session with snapshots 0 , 1 , ..., | | , each i with word length x i and content recall score y i is plotted on the graph at",
"( x i , y i )",
".",
"(2)",
"We consider the area under the recall-curve",
"(AUC).",
"Intuitively, it is desirable for an INTSUMM system to generate more salient information earlier: assuming salient information is more relevant to users, this property means interaction is ceased sooner, as soon as the information needs are met.",
"Accordingly, AUC is higher when content is more relevant and is retrieved earlier.",
"AUC is defined between start and end x-values, fixed for comparable 2 Any standard summary content recall measure can be used as long as it is consistent, including, e.g., manual mechanisms like Pyramid or nugget-style scoring",
"(Nenkova and Passonneau, 2004; Lin and Demner-Fushman, 2006).",
"measurement",
"(see Figure 2), with y-value scores interpolated at these limits when a curve does not have a snapshot at the specific length(s).",
"(3)",
"We consider the Score@Length metric that reports a score, such as standard ROUGE F 1 , at pre-specified word-lengths, and demonstrates the informational effectiveness of a system at those lengths.",
"This metric enables fair comparison to static summaries at the specified lengths.",
"The inverse Length@Score measure is also examined, and detailed further in Appendix C. Aggregated indicators.",
"The average recall curve , illustrating overall gradual information gain, is computed from individual session recall-at-length curves by interpolating y-values at constant x-value increments and averaging correspondingly.",
"E.g., Figure",
"3. [ P. 1] is the average AUC computed from the individual session AUCs by first averaging per topic and then averaging the results over all topics, to give equal weight to each topic.",
"[ P. 2] is the average Score@Length computed similarly to average AUC from individual session Score@Lengths.",
"Automatic evaluation is convenient for fast assessment and consistent comparison, however manual appraisal more accurately forecasts the quality of a summarization system",
"(Owczarzak et al., 2012).",
"Thus, using manual metrics alongside automatic ones is important despite the higher cost it incurs.",
"Our evaluation framework allows doubly leveraging the involvement of human users by asking them to rate different system aspects during the session.",
"We propose the following rating layout, with each measure being scored on a 1-to-5 scale.",
"[ R. 1] After reading the initial summary, the user rates how informative it is for the given topic.",
"This resembles the DUC manual summary content responsiveness rating",
"(Dang, 2006).",
"[ R. 2] To measure the information gain throughout the session, the user rates how much useful information each interaction's response adds.",
"As this rating is scored per interaction, the session average measures overall ability to expose interesting information.",
"[ R. 3] After the session, the user rates how generally well the system responded to the requests throughout the session.",
"[ R. 4] As all human-involved systems should measure perceived usability, the user rates the two UMUX-Lite",
"(Lewis et al., 2013)",
"questionnaire statements: [ R. 4 a ] the system's capabilities meet the requirements and [ R. 4 b ] the system is easy to use.",
"The UMUX-Lite score is a function of these two scores",
"(although they are separately useful)",
"and shows high correlation to the popular, and longer, SUS questionnaire",
"(Brooke, 1996), thus offering a cheaper alternative.",
"Similarly to our automatic measures, these ratings are collected separately per session and then averaged, first per topic and then over all topics, to obtain comparable system scores.",
"The evident advantages of our proposed evaluation framework are:",
"(1)",
"scores are absolute and comparable from one session/system to another;",
"(2)",
"our framework fundamentally and conveniently extends upon prevailing static summarization evaluation practices and utilizes existing standard MDS dataset reference summaries.",
"The evaluation of interactive systems requires real user sessions , as explained in 3.",
"Using a prototype INTSUMM system, described in 5.1, we conducted several cycles of session collection which uncovered multiple user-related challenges, in line with previous work on user task design",
"(Christ-mann et al., 2019; Roit et al., 2020; Zuccon et al., 2013).",
"In particular, recruited users may make undue interactions due to insincere or experimental behavior, yielding noisy sessions that do not reflect realistic system use.",
"Additionally, without an objective informational goal, a user interacts with the system according to subjective interests, producing sessions that are objectively incomparable.",
"Controlled crowdsourcing method.",
"Employing experts to use an interactive system in a user study is usually unnecessary and hinders scalability and accessibility for researchers, making crowdsourcing an appealing and less expensive alternative.",
"While crowdsourcing is ordinarily used for annotation jobs, we show its suitability for system session collection.",
"We designed a three-stage controlled crowdsourcing protocol that mitigates the aforementioned session collection challenges, while filtering out unsatisfactory workers",
"(further details in Appendix B).",
"efficiently filter out insincere workers, and, conversely, discover workers with an ability to apprehend salient information within text.",
"The second stage assigns practice tasks that familiarize the workers to the INTSUMM system interface to prevent experimentation in the actual sessions to be evaluated.",
"Here, the users are also presented with a grounding use-case , or cover-story' as termed by Borlund",
"(2003).",
"The use-case states an objective common goal to follow in interacting with the system, to minimize the effect of subjective preferences, and allow comparison against respective reference summaries with a similar objective goal.",
"An example use-case to follow, applied in our experiments, is produce an informative summary draft text which a journalist could use to best produce an overview of the topic.",
"The use-case is strongly emphasized during practice sessions with integrated guidelines.",
"Workers completing two practice assignments with predominantly relevant interactions are invited to continue on to the final stage.",
"The evaluation session collection stage involves interacting with the evaluated system, for a minimum amount of time per session",
"(e.g., 150 seconds in our experiments), to produce a summary on a topic in light of the same assigned use-case as in the practice stage.",
"Each worker may explore a topic once, and the overall goal is recording sufficiently many sessions per combination of system and topic.",
"Generally in interactive tasks, systems are manually examined over a rather small number of instances",
"(e.g. topics), with only a few users per instance, due to the high cost and complexity of collecting such sessions with experimenters.",
"For example, Christensen et al.",
"(2014)",
"assessed their system on 10 topics, and the ciQA benchmark",
"(Kelly and Lin, 2007)",
"had 6 topics per tested subtask.",
"Our session collection technique provides a more scalable approach, facilitating larger collection processes",
"(e.g., in our experiments in 5 we used 20 topics and 3 sessions per system per topic).",
"We note that, in use cases or domains where experts are required, the proposed three-stage session collection protocol is still fully relevant.",
"It is not limited to the crowdsourcing setting, and can be applied within controlled user-studies if needed.",
"Wild versus controlled crowdsourcing.",
"We illustrate the benefit of the controlled crowdsourcing procedure described above by comparing its results with a wild crowdsourcing preliminary experiment.",
"The latter experiment applied basic worker-Measure Controlled Wild # interactions 12.3 7.0 Approx.",
"filtering",
"( 99% approval rate and 1000 approved assignments on Amazon Mechanical Turk 3",
"(AMT))",
"and did not apply the trap and practice tasks.",
"For quality control, a post-session questionnaire was assigned in order to catch insincere workers.",
"Analysis of the collected sessions showed a substantial improvement in querying behavior in controlled crowdsourcing over sincere wild crowdsourcing",
"(filtering out the insincere wild crowdworkers)",
"the former scored higher than the latter on every evaluation metric.",
"Table 1 presents some qualitative indications of this improvement: controlled users were more engaged",
"(more iterations and more time exploring)",
"and put more thought into their queries",
"(more free-text queries and less suggested queries).",
"Notably, unlike uncontrolled crowd-workers, controlled workers were able to do better than a comparable fully automated baseline, evident from the last table row: the percent difference in ROUGE-1 AUC score from a lower bound simulated baseline",
"(explained in 5.3), is positive",
"(better)",
"for controlled sessions and negative",
"(worse)",
"for wild ones.",
"Finally, the queries of controlled users almost exclusively adhered to the use-case and the many helpful comments from the workers indicated their attentiveness to the task",
"(see Appendix C).",
"We carried out experiments that assess our full evaluation framework and demonstrate its utility.",
"As the few existing INTSUMM systems were not readily available or suitable for adaptation to our experimental setup, we developed an INTSUMM system of our own, shown in Figure 1, with two different algorithmic implementations for comparison.",
"We gathered user sessions with our controlled crowdsourcing procedure and evaluated their quality with 3 https://www.mturk.com our defined measures.",
"We developed a web application, enabling session collection with real users, that follows the INTSUMM schema described in 3: for an input document set, it first presents an initial summary, and then iteratively outputs a summary expansion response per given user request.",
"Specifically, our application supports interactive requests in the form of textual queries from user free-text, summary span highlights and system suggested queries.",
"A system response aims to simultaneously maximize relevance to the query and salience, while refraining from repeating previously presented contents.",
"An initial version of the application was assessed via a small-scale user study of 10 users, with an SUS questionnaire",
"(Brooke, 1996)",
"and the think-aloud protocol",
"(Lewis, 1982)",
"for feedback.",
"Figure 1 displays the improved web application, on the topic Native American Challenges.",
"The left screenshot shows the initial summary with user rating [ R. 1] in [1], an example of a free-text query in the query box [2] and the list of suggested queries in [3].",
"The right screenshot shows the response to the query entered in the first screenshot.",
"[4] reiterates the last submitted query, with the system response and user rating [ R. 2] in [5].",
"The last query can also be repeated via a button [6], to obtain additional information on that query.",
"Users can highlight a span from the presented summary, to be automatically pasted to the query box.",
"Initial summaries and expansions are extractive and in bullet-style.",
"In accordance to this interaction flow presented, we implemented two back-end algorithm schemes, denoted S 1 and S 2 , to demonstrate comparison of two INTSUMM systems via our evaluation framework.",
"Each implementation consists of three components:",
"(1)",
"the initial summary generation,",
"(2)",
"the query-response generation and",
"(3)",
"extraction of suggested queries from the source documents.",
"All system outputs must comply to required interaction latency standards",
"(Anderson, 2020; Attig et al., 2017), e.g., a few seconds for the initial summary and a few hundred milliseconds for a query response.",
"While we experimented with some more advanced techniques for MDS generation",
"(e.g., Christensen et al., 2013; Yasunaga et al., 2017), sentence representation",
"(Reimers and Gurevych, 2019)",
"and sentence similarity",
"(Zhang* et al., 2020), we found that these are not practical for incorporation within the interactive low-latency setting, or that they could not handle the relatively large document set inputs.",
"Instead, we developed the two back-end schemas described next",
"(with further details in Appendix A).",
"S 1 runs a sentence clustering initial summary algorithm.",
"Query-responses are generated in MMR-style",
"(Goldstein et al., 2000a)",
"based on semantic similarity between query and sentences.",
"Suggested queries are frequent bigrams and trigrams.",
"S 2 uses TextRank",
"(Mihalcea and Tarau, 2004)",
"for both the initial summary and suggested queries, and a query-response generation approach combining semantic and lexical similarity between query and sentences.",
"The two systems enable experimentation on our evaluation framework and, as we show, demonstrates its viability.",
"Moreover, as apparent in our experimental results",
"(5.4 and user feedback in Appendix C), users attest to the real-world usefulness of these systems.",
"Using our framework, including the baseline systems, future work can develop and examine more advanced methods for INTSUMM , accounting for the latency and input-size challenges.",
"Following our controlled session collection procedure from 4, we released the trap task in AMT and found 48 of 231 workers qualified for the second stage, out of which 25 accepted.",
"10 workers passed the training stage, from which we recruited 8 highly qualified ones.",
"For the third stage, we collected sessions for 20 topics from DUC 2006, on S 1 and S 2 .",
"Each worker could explore 10 different topics on each system, amounting to 160 possible sessions of which 153 were completed",
"(with at least 3 sessions per combination of topic and system).",
"Since S 1 and S 2 share a common frontend application, users were unaware of which system they are exploring on, and the order was randomized.",
"A minimum exploration time constraint of 150 seconds was set.",
"Initial summaries were 75 tokens",
"(average of 85)",
"and responses were two sentences long.",
"The full controlled crowdsourcing process took one author-work-week, and cost $370.",
"In comparison, wild crowdsourcing described in 4 required a couple days' work and $240",
"(achieving, as discussed, inferior results), and running a non-crowdsourced user-study of the same magnitude would likely require more work time, and cost an estimated $480",
"(32 net hours of 16 workers at a commonly acceptable $15 hourly wage).",
"Furthermore, the results of a user study would not necessarily be of higher quality",
"(Zuccon et al., 2013).",
"To our judgement, the controlled crowdworkers are more suitable since they fathom the task before choosing to complete it.",
"In a user study, workers are often unaware of the task before commencing, and may not be fully qualified for or desiring of it.",
"In addition to real user experiments, we simulate each of our two systems on scripted query lists.",
"Simulated sessions provide a means for quick development cycles and quality estimation.",
"The first of two query lists, L Sug , is constructed fully automatically: it consists of the top-10 ordered phrases in the system's suggested queries component per topic.",
"This mimics a lower bound user who adopts the simplest strategy, namely, clicking the suggested queries in order without using judgment even to choose among these queries.",
"The second list, L Oracle , consists of 10 randomly chosen crowdsourced summary content units",
"(SCUs)",
"(Shapira et al., 2019)",
"for each of the topics.",
"Since the SCUs were extracted from the reference summaries of the corresponding topics, they mimic a user who searches for the exact information required to maximize similarity to the same reference summaries which we then evaluate against.",
"While this is not necessarily the optimal query list due to the randomized sampling of SCUs for queries, we consider it our",
"(non-strict)",
"upper bound for the sake of experimentation.",
"The two bounds are relative to the system on which the simulations are carried on.",
"Also, for fair comparison to real sessions, the simulation initial summary and response lengths are similarly set at 75 words and two sentences respectively.",
"We next present the results attained on the 153 sessions collected (5.2), with the purpose of analyzing our full evaluation framework.",
"We gain an understanding on the consistency between automatic and human measurement, and on the comprehensiveness of the full set of measures.",
"Figure 3 presents the average recall-curves and corresponding [ P. 1] averaged AUC scores of the S 1 bounds (5.3) and of the user sessions on S 1 and S 2 .",
"AUC is computed between word-lengths 105 to 333 (the maximum intersection of all ses-sions).",
"Table 2 shows [ P. 2] averaged ROUGE-1 180 200 220 240 260 280 300 320 340 Word Length S Controlled S L^Sug 73.7 72.5AUC 73.7AUC 73.8AUC 76.9AUC 40 45 50 55 60 65 70 75 80 0.18 0.2 0.22 0.24 0.26 0.28 0.3 0.32 0.34 0.36 0.38 0.4 0.42 0.44 0.46 0.48 100 125 150 175 200 225 250 275 300 325 AUC @ 105 333 ROUGE 1 R eca ll Word Length S L^Oracle S Controlled S Controlled S L^Sug Figure 3: The average recall-curves, along with corresponding AUC scores (unrelated to the x-axis) and their confidence intervals ( 95% ), of the upper and lower bound sessions and of user sessions of the two systems.",
"based Score@Length.",
"Scores rank consistently on ROUGE-2, ROUGE-L and ROUGE-SU (see Appendix C).",
"It is evident from Figure 3 and Table 2 that the results on collected sessions indeed fall between the two bounds in all measures.",
"This demonstrates the effectiveness of interactive summarization, even when using relatively simple algorithms: the algorithm enables fast information processing of input texts, and users effectively direct the algorithm to salient areas.",
"Additionally, the scores of S 1 and S 2 are close, providing no significant insights when comparing these two systems, which is surprising due to their distinct implementations.",
"Manually reviewing the results, we were convinced that the systems indeed happen to perform at similar quality overall .",
"However, when assessing the systems' separate components and inspecting user-provided ratings, we gain awareness of some interesting distinctions.",
"the ratings provided by the users.",
"The initial sum-maries' ROUGE-1 F 1 scores are computed against the reference summaries, with a slight advantage for S 1 over S 2 similar to the users' initial summary ratings.",
"For the query-response component, we compute the average ROUGE F 1 score of the independent responses to the queries in L Oracle , against the reference summaries.",
"Again, user ratings reflect a similar trend that the query-response component of S 2 slightly outscores that of S 1 .",
"Overall we see that S 1 provides a better initial summary while S 2 handles queries better.",
"Also, users tend to be more satisfied by S 2 , likely due to its ability to respond better to queries.",
"This claim is evident from the positive correlation between [ R. 3] and [ R. 4 a ] , r = 0 .",
"68 , p < 0 .",
"001 in S 1 and r = 0 .",
"63 , p < 0 .",
"001 in S 2 .",
"In terms of absolute UMUX-Lite scores [ R. 4] , 68 is considered average, and above 80 is considered excellent, meaning both S 1 and S 2 got high usability scores, with a preference for S 2 .",
"An additional analysis finds a positive correlation between per-iteration response [ R. 2] scores and the relative per-iteration increase in ROUGE recall (e.g. for ROUGE-1 r = 0 . 36 , p < 0 . 001 in S 1 and r = 0 . 33 , p < 0 . 001 in S 2 ), hinting at the credibility of correlation between human ratings and relative increase in ROUGE within sessions.",
"To conclude, our findings are favorable in terms of the framework's internal consistency of measures and soundness of the computed scores.",
"For a more conclusive appraisal of the full evaluation framework, additional systems are to be run through the process, regardless of the accidental similarity between our two baselines.",
"We proposed a comprehensive evaluation framework for user-guided expansion-based interactive",
"summarization a vital ingredient for the methodological advancement of interactive summarization research which was unaccounted for until now.",
"Our controlled crowdsourcing procedure makes INTSUMM system session collection accessible, scalable and replicable.",
"The evaluation measures in our framework provide a thorough assessment with absolute scores that enable comparison of INTSUMM systems.",
"Our framework provides the means to advance INTSUMM research on system development and improved evaluation.",
"All solutions, including our implemented baseline systems, are publicly available to enable comparison of new INTSUMM systems to ours on any MDS dataset.",
"In future work, it is worthwhile to separately assess the effectiveness of individual interaction modes, including ones incorporated in our implementation and others, e.g., full questions input by users.",
"These would require further experimentation, additional evaluation metrics, and the possible use of datasets from tasks other than MDS.",
"Within our expansion-based framework, we can consider additional measures of textual consistency, coherence, and relevance of responses to queries.",
"We may also test additional approaches for summarization: e.g., abstractive summarization for flexible synthetic summary generation, requiring further evaluation of factuality and truthfulness.",
"Beyond our framework, that targets objective quality, INTSUMM systems should also be evaluated according to their compatibility with personalized, subjective use.",
"User-study.",
"Our system-testing user-study (men-tioned in 5.1) was conducted on a university campus, and students within different age groups and from different backgrounds were recruited through a social media group for hiring for experiments and user studies.",
"We required a high level of English for participation.",
"People were accepted until the required amount of participants (10) was reached, without any targeted filtering.",
"An individual study lasted around 30 minutes for a payment of around $10.",
"Crowdsourcing.",
"There were several rounds of crowdsourcing, with varying tasks.",
"Due to the need for fluent English speaking workers, a location filter was set on the AMT platform for English (as primary language) speaking countries.",
"At least one of the authors tested each task before its release to estimate worst-case task completion duration.",
"The payment was then set according to $9 per hour for the estimated required time.",
"In practice, almost all tasks were completed in less than the time estimated, and payment was well above $9 per hour.",
"Very few assignments were rejected in cases of clear insencereness (unreasonably fast submission or senseless behavior).",
"Dataset usage.",
"As pointed out throughout the paper, the DUC 2006 dataset was utilized.",
"It was obtained through the required means on the DUC website ( duc.nist.gov ).",
"There was no possibility to reconstruct the dataset (document sets and reference summaries) within any of the conducted user study and crowdsourcing tasks.",
"Application.",
"Our INTSUMM systems' outputs are extracts from the input document sets.",
"As described in Appendix A, the algorithms for initial summary and query-response generation do not contain any intentional biasing.",
"The intended purpose of any INTSUMM system is to allow readers to make sense of large bodies of text through assisted exploration.",
"Future work may open the door to more personalized algorithms and abstractive outputs.",
"This would require extra care in making sure systems are ethically sound by adding targeted evaluation measures.",
"Compute time.",
"As emphasized in the paper, INTSUMM systems require low latency and are hence relatively computationally cheap.",
"During our research we ran some algorithms, to test for our systems, that required up to several hours of compute time per run, on a standard server.",
"We would like to thank Guiseppe Carenini for his helpful advice, and the anonymous reviewers for their constructive comments.",
"This work was supported in part by the German Research Foundation through the German-Israeli Project Cooperation (DIP, grants DA 1600/1-1 and GU 798/17-1); by the BIU Center for Research in Applied Cryptography and Cyber Security in conjunction with the Israel National Cyber Bureau in the Prime Minister's Office; by the Israel Science Foundation (grants 1157/16 and 1951/17); by a grant from the Israel Ministry of Science and Technology; by the NSF-CAREER Award #1846185; and by a Microsoft PhD Fellowship."
] | [
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"method",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"method",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Data filtering for machine translation (MT) describes the task of selecting a subset of a given, possibly noisy corpus with the aim to maximize the performance of an MT system trained on this selected data.",
"Over the years, many different filtering approaches have been proposed.",
"However, varying task defini-tions and data conditions make it difficult to draw a meaningful comparison.",
"In the present work, we aim for a more systematic approach to the task at hand.",
"First, we analyze the performance of language identification, a tool commonly used for data filtering in the MT community and identify specific weaknesses.",
"Based on our findings, we then propose several novel methods for data filtering, based on cross-lingual word embeddings.",
"We compare our approaches to one of the winning methods from the WMT 2018 shared task on parallel corpus filtering on three real-life, high resource MT tasks.",
"We find that said method, which was performing very strong in the WMT shared task, does not perform well within our more realistic task conditions.",
"While we find that our approaches come out at the top on all three tasks, different variants perform best on different tasks.",
"Further experiments on the WMT 2020 shared task for parallel corpus filtering show that our methods achieve comparable results to the strongest submissions of this campaign.",
"In recent years, neural machine translation (NMT) systems have greatly improved the quality of automatically generated translations, some argue even to the point of human parity (Hassan et al., 2018).",
"While there most definitely have been advancements in designing the NMT system architectures (Bahdanau et al., 2015; Vaswani et al., 2017), arguably the best (and easiest) way to improve an NMT system is to use more training data.",
"With an ever increasing amount of parallel data for NMT training, which often comes from web-crawling 1 and is quite noisy', the task of data filtering becomes increasingly important (Khayrallah and Koehn, 2018).",
"Data filtering in the context of machine translation (MT) describes a collection of approaches which select a subset of a given, possibly noisy corpus with the aim to maximize the performance of an MT system trained on this data.",
"There exist very simple approaches, the most prominent being based on language identification tools, to detect certain types of noise, e.g. sentences that are from a wrong language.",
"However, other types of noise are much harder to detect, for example when both source and target sentence are well formulated and in the correct language but are not translations of one another.",
"In some formulations of the data filtering task, for example in the WMT shared task for parallel corpus filtering (Koehn et al., 2018, 2019, 2020), the assumption is that there already exists a large amount of clean' data which can be used to detect bad training samples in a separated noisy' corpus.",
"However, such an assumption does typically not hold true in real-life scenarios.",
"Therefore, in this work, we make no such distinction between known-to-be-clean' and noisy' data.",
"We present novel approaches that use all the available data to filter that very same data in order to improve translation performance.",
"In the proposed methods, we use the structure of cross-lingual word embeddings to compare the words in a given source-target sentence pair to determine if the pair is of good' quality.",
"This is done in a variety of ways, including nearest neighbor search in the embedding space and an explicit calculation of alignment scores.",
"All proposed methods are specifically designed to detect the types of noise which cannot be detected by language identification tools.",
"Furthermore, we design our approaches 1 http://opus.nlpl.eu to not rely on the quality of the sentence pair alignments between the source and the target side of the data, since this information might be highly unreliable in a noisy' corpus.",
"The main contributions of this paper are summarized below: We perform a systematic analysis of noise-types' for a commonly used MT task and identify specific weaknesses of the commonly used filtering by language identification.",
"Building on our findings, we propose novel data filtering approaches using cross-lingual word embeddings.",
"We compare our approaches to other strong filtering systems from the literature on three real-life, high resource MT tasks and the WMT 2020 task on parallel corpus filtering.",
"Recently, a number of shared tasks for data filtering have been held, giving a good overview of current state-of-the-art methods.",
"Best known is the WMT shared task for parallel corpus filtering, which was held in 2018 (Koehn et al., 2018), 2019 (Koehn et al., 2019) and 2020 (Koehn et al., 2020) respectively.",
"In these tasks, the participants are asked to provide scores for every sentence pair in a noisy corpus.",
"Afterwards, a fixed amount of sentence pairs is selected according to that score.",
"The best performing submissions from past years use language identification tools as the first part of their setup (Junczys-Dowmunt, 2018; Chaudhary et al., 2019; Lu et al., 2020), removing sentence pairs where the language of either source or target sentence does not match the expectation.",
"Rossen-bach et al. (2018) and Junczys-Dowmunt (2018) use a combination of language model and translation model scores to sort the sentence pairs by quality.",
"Chaudhary et al. (2019) use the cosine distance between cross-lingual sentence embeddings of source and target sentence as score.",
"Wang et al. (2017) estimate the quality of a sentence pair using the euclidean distance between each sentence vector and two vectors representing in-domain and out-domain data.",
"Hangya and Fraser (2018) score the similarity between source and target sentence by averaging the word-pair similarity, which is calculated from cross-lingual word embeddings.",
"conditions, one can not easily make a statement about which approach works best.",
"However, all approaches have in common that they use known-to-be-clean' parallel data in order to train the models of their filtering pipeline.",
"Creating cross-lingual word embeddings from parallel and/or monolingual data is an active field of research (Ruder et al., 2019).",
"In addition to capturing semantic relationships within each language, these representations should be aligned in such a way that the embeddings of the same word in different languages are close together in the embedding space.",
"The standard approach for creating such embeddings is to first train embeddings for each language pair separately (Mikolov et al., 2013; Pennington et al., 2014) and then projecting them into the same vector space (Conneau et al., 2017; Artetxe et al., 2018), which is possible with or without the help of parallel data.",
"Word alignments between a source and a target sentence were an integral part in count-based statistical machine translation systems (Brown et al., 1993; Koehn et al., 2007) and it has been shown that they can be used to help certain aspects of NMT systems as well (Alkhouli et al., 2018).",
"For a long time, IBM-model-based frameworks like GIZA++ (Och and Ney, 2003) or fastalign (Dyer et al., 2013) produced the best word alignments.",
"However, recently Sabet et al. (2020) report equally good results by using a word similarity matrix calculated from cross-lingual word embeddings.",
"Applying language identification (language ID) is a well established first step in most high performing data filtering approaches.",
"During this step, all sentence pairs for which either the source or target sentence is not mapped to the correct language are discarded.",
"It can be argued that this step does not only remove sentence pairs in the wrong language, but also that language-agnostic noise, e.g. sequences of numbers, is almost completely removed.",
"In order to evaluate the effectiveness of the filtering by language ID approach, we decide to test the method on the popular De En data filtering task.",
"By manually checking the noisy corpus (see Section 5.1 for details) we find different types of noise patterns'.",
"For each of these noise patterns', we create a synthetic corpus (50k lines each), only consisting of sentence pairs with this specific noise.",
"We find/create the following noise patterns': trg to src: The source and target side of a valid sentence pair are swapped.",
"src to other: The sentence on the source side is from the correct language.",
"The sentence on the target side is a random sentence from a third language.",
"other to trg: The sentence on the source side is a random sentence from a third language.",
"The sentence on the target side is from the correct language.",
"overtranslation: Both sentences on the source and target side are from the correct language and translations of one another, but parts of the source sentence are missing.",
"undertranslation: Both sentences on the source and target side are from the correct language and translations of one another, but parts of the target sentence are missing.",
"Next, we use the langid.py toolkit (Lui and Baldwin, 2012) to filter each of these synthetic corpora and check which percentage of noise (ideally 100.0%) gets removed.",
"The results are shown in Table 1. We find that the language identification filtering approach does an outstanding job in detecting noise that comes from wrong language alignment.",
"Furthermore it also removes basically all of the random noise, represented by the random digits corpus.",
"However, we also see where this approach Noise Type Percentage removed trg to src 100.0% trg to trg 100.0% src to src 100.0% src to other 99.5% other to trg 99.8% other to other 100.0% sentence misalign 0.0% overtranslation 7.8% undertranslation 6.7% random digits 100.0% Table 1: Removal rate of different noise types by the language identification filtering method.",
"fails: it can not detect noise resulting from a semantic mismatch between source and target sentence.",
"Two conclusions can be drawn from this experiment: First, the filtering methods applied after language identification filtering can be language-agnostic, since all types of noise which originate from wrong languages can be detected by language identification very reliably.",
"Second, downstream filtering methods should focus on the alignment between source and target sentence, since this is where language identification filtering predictably fails.",
"Intuitively a bilingual sentence pair is appropriate for training if",
"a) both the source and the target sentence belong to the corresponding languages and",
"b) they are translations of each other.",
"We rely on established language identification methods (see Section 5.1) to verify the first condition.",
"Following state of the art filtering systems (Junczys-Dowmunt, 2018; Chaudhary et al., 2019) we predict the language for source and target sentence and keep the sentence only if both match the requirements of the task.",
"To check whether the sentences of a training pair ( f J 1 , e I 1 ) are indeed translations of each other we propose several approaches based on cross-lingual word embeddings.",
"For the details of how the cross-lingual word embeddings are constructed we refer to Section 5.1.",
"Here we assume that we are given a cross-lingual word embedding E : V src V trg R d embd that maps each word from the source vocabulary V src or the target vocabulary V trg to a joint space R d embd with a similarity measure .",
"For convenience we use E w := E ( w ) .",
"In practice all embedding vectors are length normalized, i.e. || E w || = 1 .",
"Many works investigate distances in the embedding space as an indicator of relatedness between words of the same language.",
"However we are interested in the relation between the words of the source sentence and the target sentence.",
"Specifically, we want to know whether the two sentences are translations of each other.",
"We assume a source word f is explained by a word e in the target sentence, if E ( f ) is one of the k nearest neighbours of E ( e ) i.e. if: ( E f , E e ) max-k (cid:110) (cid:16) E f , E e (cid:17) (cid:12)(cid:12)(cid:12) f V src (cid:111) where max-k yields the k -th biggest value.",
"Note that we only consider the source nearest neighbourhood around e .",
"To score a sentence pair ( f J 1 , e I 1 ) we calculate: explain( f J 1 | e I 1 ) := (cid:12) (cid:12) { f j | e i : e i explains f j } (cid:12) (cid:12) .",
"For data filtering we consider different variants of combining the forward and backward score: Accumulated Explanation Score: explain( e I 1 | f J 1 ) + explain( f J 1 | e I 1 ) I + J Explanation Disagreement Score: Note that being nearest neighbours in a multilingual embedding space is not a symmetric relation.",
"We compute the agreement of the forward and the backward score: (cid:12)(cid:12)(cid:12)(cid:12) explain( e I 1 | f J 1 ) I explain( f J 1 | e I 1 ) J (cid:12)(cid:12)(cid:12)(cid:12) Explanation Disagreement + Pre-Filtering: A sentence pair is removed if its score for either direction falls below a threshold : min { explain( e I 1 | f J 1 ) , explain( f J 1 | e I 1 ) } < the remaining sentences are scored via explanation disagreement score As similarity measure we choose cross-domain-similarity-scaling (CSLS) (Conneau et al., 2017): CSLS ( E f , E e ) = 2 cos( E f , E e ) 1 n (cid:88) f (cid:48) N f ( e,n ) cos( E f (cid:48) , E e ) 1 n (cid:88) e (cid:48) N e ( f,n ) cos( E f , E e (cid:48) ) where N f ( e, n ) is the neighborhood of size n across the word e in the space of the language of f .",
"The methods described so far are based on the neighbourhood of size k around each word to create a source target and a distinct target source alignment.",
"Alternatively we consider the source target similarity matrix: A i,j := A ( f J 1 , e I 1 ) i,j := E (cid:124) e i E f j where each entry expresses the similarity of a word pair from the source and target sentence.",
"Note that due to the construction of the cross-lingual word embeddings (see Section 5.1) all word embeddings are normalized.",
"This means that the scalar product above is equivalent to the cosine similarity.",
"We consider several options to compute a source target similarity score: Argmax Agreement: Considers alignment points where src trg and the trg src argmax are the same: M := (cid:8) ( i, j ) | i = argmax i A i,j and j = argmax j A i, j (cid:9) and sums up the corresponding weights 1 max { I, J } (cid:88) ( i,j ) MA i,j .",
"Maximum Matching (Score): On the complete bipartite graph induced from the similarity matrix A , i.e. the bipartite graph with vertices V := f J 1 e I 1 and edge weight function f := I J R : ( i, j ) (cid:55) A i,j .",
"We use the total weight of the maximum-weight matching divided by max { I, J } as a score.",
"Maximum Matching (Count): We construct a maximum-weight matching on the bipartite graph with vertices V and edge weights f however we prune the edges if the corresponding word similarity is below a threshold t , keeping only the edges E := { ( i, j ) I J | A i,j t } .",
"The number of matching points divided by max { I, J } is used as score for the sentence pair.",
"We would like to point out that parallel to the present work, Sabet et al. (2020) also introduced the first two of the four methods.",
"Since they aim to extract an explicit alignment between source and target they do not construct a score for a sentence pair and do not consider the use in a data filtering task.",
"Since we are interested in aligning the source and target sentence to obtain a score for data filtering we also use the IBM4 alignment scores provided from GIZA++ (Och and Ney, 2003) for filtering as a comparison.",
"We consider different ways to select training data given a noisy corpus where each sentence pair ( f J 1 , e I 1 ) has an associated score s ( f J 1 , e I 1 ) R :",
"(1) Top X % : Selecting the X % sentence pairs with the best score s .",
"(2) Top X % Transformed: Selecting the X % sentence pairs with the best transformed score: s t ( f J 1 , e I 1 ) = (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) s ( f J 1 , e I 1 ) (cid:88) ( F,E ) dev s ( F, E ) | dev | (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) .",
"(3) Dev set distribution : We score the dev set using s .",
"Empirically this yields a Gaussian distribution where some scores are more frequent than others.",
"We fit a Gaussian distribution and select a lower and an upper threshold such that 95% of the dev set distribution are selected.",
"All sentence pairs from the training corpus whose score falls between the two thresholds are selected.",
"We introduce Variants (2) and (3) since we observe that often the best scored sentence pairs exhibit a pattern that is easy to learn but not representative for translation at all, e.g. sentence pairs that are dominated by long dates on both sides, etc.",
"In particular the sentence pairs from the dev set are our best approximation of what valid training data' should look like.",
"A sentence pair that scores significantly better than the dev set is just as suspicious than one that scores significantly worse.",
"We evaluate the performance of the data filtering systems on three high-resource tasks, namely German English, English Turkish and English Czech.",
"The De En training data consists of the corpora Commoncrawl, Europarl, Rapid and ParaCrawl from the WMT 2019 news translation task 2 .",
"We use the czeng 1.7 corpus 3 from the WMT 2018 news translation task for En Cs.",
"For En Tr we test our systems on a real world corpus with a focus on the entertainment domain provided by a company.",
"We select these three data conditions because they provide high resource data that originates from very different sources and, hence, should express rather different data biases and noise patterns.",
"We choose to test the proposed methods in two settings of the WMT news translation task and not in the conditions defined by the WMT parallel corpus filtering task because we experienced in the past, that performance gains from data filtering on the very noisy corpora of the data filtering task do not carry over to the news translation task.",
"For the corpus data statistics, please refer to Table 2. Following state of the art filtering systems (Junczys-Dowmunt, 2018), we use the langid.py toolkit (Lui and Baldwin, 2012) as the first step in our filtering pipeline by removing source and target sentences where at least one side is not classified to be the correct language.",
"In order to obtain cross-lingual word embeddings we follow the method proposed by Artetxe et al. (2018).",
"In particular we first train GloVe Word Embeddings (Pennington et al., 2014) with a fixed vector size of 300 on the respective monolingual corpora after applying langid.py .",
"From these we select the embeddings of the 200k most common words in each language.",
"They form the base 2 http://www.statmt.org/wmt19/ translation-task.html 3 https://ufal.mff.cuni.cz/czeng/ czeng17 Filter Method Data Selection Training Data dev test Method #trg tokens #sent.",
"for the cross-lingual word embeddings, also with a fixed vector size of 300, which are created using the VecMap toolkit (Artetxe et al., 2018).",
"All of the cross-lingual word embeddings are normalized.",
"To be consistent with our filtering task definition, we do not use an initial seed dictionary to train the cross-lingual word embeddings.",
"For nearest neighbor search we set k equal to five and use cross-domain-similarity-scaling (Conneau et al., 2017) as the distance metric when computing the sentence pair scores.",
"The threshold is set to 0.1 for the pre-filtering step of the explanation disagreement score.",
"We compare our methods to another strong filtering method, that scores all sentence pairs by averaging the log probabilities of two language models (LMs) and two translation models (TMs) (Rossenbach et al., 2018).",
"Each method creates a subset from the corpus, which is used to train a base transformer model (Vaswani et al., 2017) with six encoder and decoder layers implemented using the RETURNN toolkit (Zeyer et al., 2018).",
"Machine translation performance is measured using BLEU scores (Pa-pineni et al., 2002) and TER scores (Snover et al., 2006) using the MtEval tool from the Moses toolkit (Koehn et al., 2007).",
"The development sets we use are newstest2015 for De En, newstest2016 for En Cs and a concatenation of development sets from multiple domains for En Tr.",
"In a first step we investigate the data selection strategies described in Section 4.3.",
"We consider two variants that select a fixed amount of training data plus an additional variant where the amount of selected data is dynamically determined in an automatic way.",
"Note that the amount of data is measured in target positions on the raw text.",
"However since for each MT training we train and apply a new subword splitting, the amount of target subwords in training varies slightly (we observe changes of less than 5%).",
"Results for the different data selection schemes can be found in Table 3. We observe that transforming the scores can be extremely helpful to get good filtering performance.",
"Selecting based on a dev set distribution yields similar strong results but is not as stable.",
"We select data corresponding to the Top 50% of target tokens according to the transformed score except for the GIZA method where we use the non-transformed score because the transformation resulted in unreliable scores due to precision issues.",
"First we consider the De En WMT 2019 news translation task.",
"Note that most of the training data comes from the news translation task ParaCrawl corpus which is smaller and of better quality than the ParaCrawl corpus used in the WMT 2018 parallel corpus filtering task.",
"We start with all the training data and apply language ID as initial filtering, i.e. if either the source or the target sentence of a training pair is not classified with the correct language we drop the sentence pair.",
"The result of this filtering can be seen in Table 4, Line 2. All further filtering methods are trained and applied on this pre-filtered corpus.",
"It is interesting to point out that the LM & TM comparison system does not even beat the language identification baseline.",
"For LM & TM we employ a slight simplification of a system that improved Filter Method Training dev (newstest2015) newstest2017 Data Ratio BLEU TER BLEU TER None (baseline) 1.00 33.5 53.3 34.6 52.7 Language ID 0.89 33.7 53.0 35.0 52.0 LM & TM (Rossenbach et al., 2018) 0.49 33.6 53.7 34.5 52.9 Accum.",
"translation performance by more than 8.0 BLEU and performed among the best on the WMT 2018 data filtering task (Rossenbach et al., 2018).",
"There are two crucial differences to consider: (1) We train the filtering system on the same data that it needs to filter afterwards.",
"This means the filtering pipeline might learn typical patterns from the data that are not actually relevant for translation, like copying the input sentence.",
"(2) The ParaCrawl corpus used here is a newer version of better quality and we add the established training data for the WMT news translation task so that the complete training data is generally of significantly higher quality.",
"Note that the ParaCrawl corpus still provides 80% of the training data and the benefits of doing data filtering diminish quite clearly.",
"We conclude that it is highly important how exactly the data filtering task is phrased.",
"The best performance on the De En WMT task is achieved by the Accumulated Explanation Scores' method which yields an average improvement of 0 .",
"5% with respect to both BLEU and TER across the dev and test set.",
"All other methods except for GIZA' are on par with the language identification baseline, however they achieve a significant reduction of the training data.",
"We experiment with a variant of the Maximum Matching method for scores and counts that is built on top of cross-lingual subword embeddings without any effect in translation performance.",
"The behaviour of the filtering systems is quite different for the company data set of the En Tr task.",
"We report results on three openly available test sets from different domains.",
"In this scenario language identification helps quite clearly on two out of three data sets while LM & TM data filtering significantly reduces the translation performance.",
"With our methods, we observe very clear improvements on the TED test set as well as newstest2018.",
"The Explanation Disagreement Score with pre-filtering gains an average of 0.7 BLEU [%] over the language identification filtering.",
"If we apply Maximum Matching filtering on BPE level we even observe improvements of 2.2 and 5.1 BLEU [%] on TED and newstest2018, however we lose 0.9 BLEU [%] and 0.7 TER [%] on the OpenSubtitles test set.",
"In practice, this minor degradation is out weighted by the significantly stronger performance on the other domains, proofing the usefulness of data filtering in this scenario.",
"The scores based on GIZA alignments result in a very poor performance on all domains except subtitles.",
"By analyzing the selected data, we find that the GIZA' method selects on average shorter sequences than other methods which is detrimental for the news and talks domain but not so much for subtitles.",
"For the En Cs task we observe no significant improvement with any of the methods over even the training on the full training data, even though 10% of the data is removed by simple language identification filtering.",
"Here we observe that LM & TM filtering becomes actively hurtful to the translation performance while the methods proposed in this paper reduce the training data by a factor of two without losing in translation performance.",
"The proposed filtering methods all provide very similar filtering performances except for the scores based on GIZA alignments which decrease the system performance by more than one BLEU [%] .",
"As an additional experiment, we also test our methods on the WMT 2020 shared task for parallel corpus filtering in the Khmer English setting.",
"Although some conditions of this task are quite artifi-cial as discussed before, it provides the opportunity to compare different filtering approaches in the same framework.",
"The task consists of selecting sentence pairs that amount to 5.0M English words from a noisy parallel corpus with a total of 58.3M English words.",
"The quality of the selected data is evaluated by training an NMT system (Ott et al., 2019) on this data and evaluating the system on unseen test sets labeled devt' and test' (Koehn et al., 2020).",
"For Filter Method BLEU devt test LASER (2019 winner) 7.1 8.4 Alibaba system (2020 winner) 8.9 11.0 Maximum Matching (score) 8.2 10.3 Accum.",
"training the filtering system, around 123k clean parallel sentences are given as well as large monolingual corpora for both languages (14M sentences for Khmer and 1.9B sentences for English).",
"As a first step, we apply filtering using language identification as described in Section 3 to sort out sentence pairs with wrong language on source and/or target side.",
"Based on the previous findings, we use our Accum. Expl. Scores' and our Maximum Matching (score)' methods on the BPE level for scoring.",
"Since the parallel data is very small and of questionable quality we only use the monolingual data for the training of our word embeddings.",
"We use all the available monolingual Khmer data while subsampling 14M English sentences.",
"We use the polyglot tokenizer 4 on the Khmer data and train BPE models for Khmer and English separately.",
"The performance of the resulting NMT system is shown in Table 7.",
"Also shown in the table are the results of the LASER filtering system (Chaudhary et al., 2019) which won the WMT 2019 data-filtering evaluation as well of the Alibaba filtering system (Lu et al., 2020) which won the WMT 2020 data-filtering evaluation for Khmer English.",
"We find that our filtering methods performs strongly on this task as well, with our Accum. Expl. Scores' method performing on par with the strongest submission of the latest WMT campaign while not relying on any parallel data.",
"In this work we focus on data filtering for machine translation.",
"We define this task as the selection of a subset of a given, possibly noisy corpus, without the help of additional large-scale clean' corpora.",
"In order to develop a helpful filtering method, we first analyze the commonly used filtering by lan-4 https://github.com/aboSamoor/polyglot guage identification' approach by applying it to synthetically generated noisy data.",
"We find that while filtering by language identification' does an outstanding job in detecting noise that comes from wrong language alignment, it fails to detect noise resulting from a semantic mismatch between source and target sentence.",
"Building on these findings, we develop several approaches based on cross-lingual word embeddings specifically targeting the word alignments between source and target sentence.",
"Furthermore, we conduct a systematic comparison on data selection methods in an effort to uncouple the scoring and selection parts of any data filtering pipeline.",
"We compare our approaches to one of the winning methods from the WMT 2018 shared task on parallel corpus filtering on three real-life, high resource tasks as well as on the recent WMT 2020 shared task on parallel corpus filtering.",
"We find that the existing approach does not perform well in our more realistic scenario, leading to a degradation in performance in most cases.",
"Our methods result in improvements over the baseline on all three three tasks.",
"However, different variants of our methods perform best on different tasks and we can not identify a single best approach.",
"Finally, we compare our methods to state-of-the-art data-filtering systems on the WMT 2020 shared task on parallel corpus filtering.",
"Here, our proposed approaches yield comparable results to aforementioned state-of-the-art methods while not relying on any parallel training data.",
"This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 694537, project \"SEQCLAS\").",
"The work re-flects only the authors' views and the European Research Council Executive Agency (ERCEA) is not responsible for any use that may be made of the information it contains."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"other",
"objective",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"method",
"other",
"result",
"objective",
"method",
"method",
"result",
"result",
"method",
"method",
"objective",
"other",
"other"
] |
[
"Neural machine translation systems have become state-of-the-art approaches for Grammatical Error Correction (GEC) task.",
"In this paper, we propose a copy-augmented architecture for the GEC task by copying the unchanged words from the source sentence to the target sentence.",
"Since the GEC suffers from not having enough labeled training data to achieve high accuracy.",
"We pre-train the copy-augmented architecture with a denoising auto-encoder using the unlabeled One Billion Benchmark and make comparisons between the fully pre-trained model and a partially pre-trained model.",
"It is the first time copying words from the source context and fully pretraining a sequence to sequence model are experimented on the GEC task.",
"Moreover, We add token-level and sentence-level multi-task learning for the GEC task.",
"The evaluation results on the CoNLL-2014 test set show that our approach outperforms all recently published state-of-the-art results by a large margin.",
"The code and pre-trained models are released at https://github.com/zhawe01/fairseq-gec.",
"Grammatical Error Correction (GEC) is a task of detecting and correcting grammatical errors in text.",
"Due to the growing number of language learners of English, there has been increasing attention to the English GEC, in the past decade.",
"The following sentence is an example of the GEC task, where the word in bold needs to be corrected to its adverb form.",
"Although machine translation systems have become state-of-the-art approaches for GEC, GEC is different from translation since it only changes several words of the source sentence.",
"In Table 1, Corpus Sent.",
"we list the ratio of unchanged words of the target sentence to the source sentence in three different datasets.",
"We can observe that more than 80% of the words can be copied from the source sentence.",
"Considering the percentage of unchanged words is high in the GEC task, a more proper neural architecture is needed for it.",
"We enhance the current neural architecture by enabling it to copy the unchanged words and the out-of-vocabulary words directly from the source sentence, just as what humans do when they correct sentences.",
"To our knowledge, this is the first time that neural copying mechanism is used on GEC.",
"Progresses have been made thanks to large-scale training corpus, including NUS Corpus of Learner English (NUCLE) (Dahlmeier et al., 2013) and the large-scale Lang-8 corpus(Tajiri et al., 2012).",
"However, even with millions of labeled sentences, automatic GEC is challenging due to the lack of enough labeled training data to achieve high accuracy.",
"To alleviate the problem of insufficient labeled data, we propose a method to leverage the unlabeled data.",
"The concrete way is to pre-train our copy-augmented model with the unlabeled One Billion Benchmark (Chelba et al., 2013) by leveraging denoising auto-encoders.",
"We also add two multi-tasks for the copy-augmented architecture, including a token-level labeling task and a sentence-level copying task, to further improve the performance of the GEC task.",
"The copying mechanism is for the first time used on the GEC task, which was used on text summarization tasks.",
"On the GEC task, copying mechanism enables training a model with a small vocabulary since it can straightly copy the unchanged and out-of-vocabulary words from the source input tokens.",
"Besides, by separating the constant part of the work from the GEC task, copying makes the generating portion of the architecture more powerful.",
"In the experiment section of this paper, we show that copying does more than just solving the UNK problem, and it can also recall more edits for the GEC problem.",
"The copy-augmented architecture outperforms all the other architectures on the GEC task, by achieving a 56.42 F 0 .",
"5 score on the CoNLL 2014 test data set.",
"Combined with denoising auto-encoders and multi-tasks, our architecture achieves 61.15 F 0 .",
"5 on the CoNLL-2014 test data set, improving +4.9 F 0 .",
"5 score than state-of-the-art systems.",
"In summary, our main contributions are as follows.",
"(1) We propose a more proper neural architecture for the GEC problem, which enables copying the unchanged words and out-of-vocabulary words directly from the source input tokens.",
"(2) We pre-train the copy-augmented model with large-scale unlabeled data using denoising auto-encoders, alleviating the problem of the insufficient labeled training corpus.",
"(3) We evaluate the architecture on the CoNLL-2014 test set, which shows that our approach outperforms all recently published state-of-the-art approaches by a large margin.",
"Neural machine translation systems have become the state-of-the-art approaches for Grammatical Error Correction (GEC), by treating the sentence written by the second language learners as the source sentence and the grammatically corrected one as the target sentence.",
"Translation models learn the mapping from the source sentence to the target sentence.",
"We use the attention based Transformer (Vaswani et al., 2017) architecture as our baseline.",
"The Transformer encodes the source sentence with a stack of L identical blocks, and each of them applies a multi-head self-attention over the source tokens followed by position-wise feedforward layers to produce its context-aware hidden state.",
"The decoder has the same architecture as the encoder, stacking L identical blocks of multi-head attention with feed-forward networks for the target hidden states.",
"However, the decoder block has an extra attention layer over the encoder's hidden states.",
"The goal is to predict the next word indexed by t in a sequence of word tokens ( y 1 , ..., y T ), given the source word tokens ( x 1 , ..., x N ), as follows: h src 1 ...N = encoder ( L src x 1 ...N ) (1) h t = decoder ( L trg y t 1 ... 1 , h src 1 ...N ) (2) P t ( w ) = softmax ( L trg h t ) (3) The matrix L R d x | V | is the word embedding matrix, where d x is the word embedding dimension and | V | is the size of the vocabulary.",
"h src 1 ...N is the encoder's hidden states and h t is the target hidden state for the next word.",
"Applying softmax operation on the inner product between the target hidden state and the embedding matrix, we get the generation probability distribution of the next word.",
"The loss l ce of each training example is an accumulation of the cross-entropy loss of each position during decoding.",
"Copying mechanism was proved effective on text summarization tasks (See et al., 2017; Gu et al., 2016) and semantic parsing tasks (Jia and Liang, 2016).",
"In this paper, we apply the copying mechanism on GEC task, for the first time, enabling the model to copy tokens from the source sentence.",
"As illustrated in Figure 1, besides generating words from a fixed vocabulary, our copy-augmented network allows copying words from the source input tokens.",
"Defined in Equation 5, the final probability distribution P t is a mix of the generation distribution P gen t and the copy distribution P copyt .",
"As a result, the fixed vocabulary is extended by all the words appearing in the source sentence.",
"The balance between the copying Copy Scores Vocabulary Distribution Final Distribution 1 3 2 4 1 3 2 4 5 1 3 2 4 Encoder Decoder Attention Distribution h 1 src h 2 src h 3 src h 4 src h 1 trg h 2 trg h 3 trg h 4 trg h 5 trg + t copy N N Token-level labeling output Figure 1: Copy-Augmented Architecture.",
"and generating is controlled by a balancing factor copyt [0 , 1] at each time step t.",
"The new architecture outputs the generation probability distribution as the base model, by generating the target hidden state.",
"The copying score over the source input tokens is calculated with a new attention distribution between the decoder's current hidden state h trg and the encoder's hidden states H src (same as h src 1 ...N ).",
"The copy attention is calculated the same as the encoder-decoder attentions, listed in Equation 6, 7, 8 : q t , K, V = h trgt W Tq , H src W Tk , H src W Tv (6) A t = q Tt K (7) P copyt ( w ) = softmax ( A t ) (8) The q t , K and V are the query, key, and value that needed to calculate the attention distribution and the copy hidden state.",
"We use the normalized attention distribution as the copy scores and use the copy hidden states to estimate the balancing factor copy t .",
"The loss function is as described in Equation 4, but with respect to our mixed probability distribution y t given in Equation 5.",
"Pre-training is shown to be useful in many tasks when lacking vast amounts of training data.",
"In this section, we propose denoising auto-encoders, which enables pre-training our models with large-scale unlabeled corpus.",
"We also introduce a partially pre-training method to make a comparison with the denoising auto-encoder.",
"Denoising auto-encoders (Vincent et al., 2008) are commonly used for model initialization to extract and select features from inputs.",
"BERT (Devlin et al., 2018) used a pre-trained bi-directional transformer model and outperformed existing systems by a wide margin on many NLP tasks.",
"In contrast to denoising auto-encoders, BERT only predicts the 15% masked words rather than reconstructing the entire input.",
"BERT denoise the 15% of the tokens at random by replacing 80% of them with [MASK], 10% of them with a random word and 10% of them unchanged.",
"Inspired by BERT and denoising auto-encoders, we pre-traine our copy-augmented sequence to sequence model by noising the One Billion Word Benchmark (Chelba et al., 2013), which is a large sentence-level English corpus.",
"In our experiments, the corrupted sentence pairs are generated by the following procedures.",
"Delete a token with a probability of 10%.",
"Add a token with a probability of 10%.",
"Replace a word with a randomly picked word from the vocabulary with a probability of 10%.",
"Shuffle the words by adding a normal distribution bias to the positions of the words and re-sort the words by the rectified positions with a standard deviation 0.5.",
"With a large amount of the artificial training data, the sequence to sequence model learns to reconstruct the input sentence, by trusting most of the input tokens but not always.",
"A sentence pair generated by the corruption process is a GEC sentence pair to some degree, since both of them are translating a not perfect sentence to a perfect sentence by deleting, adding, replacing or shuf-fling some tokens.",
"In nature language processing (NLP), pre-training part of the model also improves many tasks' performance.",
"Word2Vec and GloVe (Pennington et al., 2014; Mikolov et al., 2013) pre-trained word embeddings.",
"CoVe (McCann et al., 2017) pre-trained a encoder.",
"ELMo (Peters et al., 2018) pre-trained a deep bidirectional architecture, and etc.",
"All of them are shown to be effective in many NLP tasks.",
"Following (Ramachandran et al., 2016; Junczys-Dowmunt et al., 2018), we experiment with pre-training the decoder of the copy-augmented sequence-to-sequence architecture as a typical language model.",
"We initialize the decoder of the GEC model with the pre-trained parameters, while initializing the other parameters randomly.",
"Since we use the tied word embeddings between encoder and decoder, most parameters of the model are pre-trained, except for those of the encoder, the encoder-decoder's attention and the copy attention.",
"The Multi-Task Learning (MTL) solves problems by jointly training multiple related tasks, and has shown its advantages in many tasks, ranging from computer vision (Zhang et al., 2014; Dai",
"et al., 2016) to NLP (Collobert and Weston, 2008; Sgaard and Goldberg, 2016).",
"In this paper, we explore two different tasks for GEC to improve the performance.",
"We propose a token-level labeling task for the source sentence, and assign each token in the source sentence a label indicating whether this token is right/wrong.",
"Assuming that each source token x i can be aligned with a target token y j , we define that the source token is right if x i = y j , and wrong otherwise.",
"Each token's label is predicted by passing the final state h srci of the encoder through a softmax after an affine transformation, as shown in Equation 10.",
"p ( label i | x 1 ...N ) = softmax ( WT h srci ) (10) This token-level labeling task explicitly augment the input tokens' correctness to the encoder, which can later be used by the decoder.",
"The primary motivation behind the sentence-level copying task is to make the model do more copying when the input sentence looks entirely correct.",
"During training, we send equal number of sampled correct sentence pairs and the edited sentence pairs to the model.",
"When inputting the right sentences, we remove the decoder's attention over the outputs of the encoder.",
"Without the encoder-decoder attention, the generating work gets hard.",
"As a result, the copying part of the model will be boosted for the correct sentences.",
"As previous studies, we use the public NUCLE (Dahlmeier et al., 2013), Lang-8 (Tajiri et al., 2012) and FCE (Yannakoudakis et al., 2011) corpus as our parrallel training data.",
"The unlabeled dataset we use is the well-known One Billion Word Benchmark (Chelba et al., 2013).",
"We choose the test set of CoNLL-2014 shared task as our test set and CoNLL-2013 test data set (Dahlmeier et al., 2013) as our development benchmark.",
"For the CoNLL data sets, the Max-Match ( M 2 ) scores (Dahlmeier and Ng, 2012) were reported, and for the JFLEG (Napoles et al., Corpus Sent. Public Type Lang-8 1,097,274 Yes Labeled NUCLE 57,119 Yes Labeled FCE 32,073 Yes Labeled One-Billion 30,178,573 Yes Unlabeled Table 2: Training Corpus Corpus Sent. Annot. Metric CoNLL-2013 1,381 1 M 2 CoNLL-2014 1,312 2 M 2 JFLEG 747 4 GLEU Table 3: Evaluation Corpus 2017) test set, the GLEU metric (Sakaguchi et al., 2016) were reported.",
"To make our results comparable to state-of-the-art results in the field of GEC, we limit our training data strictly to public resources.",
"Table 2 and Table 3 list all the data sets that we use in this paper.",
"We build a statistical-based spell error correction system and correct the spell errors in our training data.",
"Following (Ge et al., 2018; Junczys-Dowmunt et al., 2018; Chollampatt and Ng, 2018) and etc., we apply spell correction before evaluation for our dev/test datasets.",
"A 50,000-word dictionary is extracted from the spell-corrected Lang-8 data corpus.",
"Like previous works, we remove the unchanged sentence pairs in the Lang-8 corpus before training.",
"In this paper, we use the Transformer implementation in the public FAIR Sequence-to-Sequence Toolkit 1 (Gehring et al., 2017) codebase.",
"For the transformer model, we use token embeddings and hidden size of dimension 512, and the encoder and decoder have 6 layers and 8 attention heads.",
"For the inner layer in the position-wise feed-forward network, we use 4096.",
"Similar to previous models we set the dropout to 0.2.",
"A 50,000 vocabulary for the input and output tokens are collected from the training data.",
"In total, this model has 97M parameters.",
"Models are optimized with Nesterovs Accelerated Gradient (Nesterov, 1983).",
"We set the learning rate with 0.002, the weight decay 0.5, the pa-tience 0, the momentum 0.99 and minimum learn-1 https://github.com/pytorch/fairseq ing rate 10-4.",
"During training, we evaluate the performance on the development set for every epoch.",
"We also use edit-weighted MLE objective as (Junczys-Dowmunt et al., 2018), by scaling the loss of the changed words with a balancing factor .",
"Almost the same architecture and hyper-parameters are used when pre-training using unlabeled data, except the parameter for edit-weighted loss.",
"We set = 3 when we train the denoising auto-encoder, and set [1 , 1 . 8] when we train GEC models.",
"During decoding, we use a beam-size of 12 and normalize model scores by length.",
"We do not use reranking when evaluating the CoNLL-2014 data sets.",
"But we rerank the top 12 hypothesizes using the language model trained on Common Crawl (Junczys-Dowmunt and Grundkiewicz, 2016) for the JFLEG test sets.",
"We compare our results with the well-known GEC systems, as shown in Table 4.",
"Rule, classification, statistical machine translation (SMT), and neural machine translation (NMT) based systems were built for the GEC task.",
"We list the well-known models on the top section of Table 4 and our results in the middle.",
"Almost all the previous systems reranked their top 12 results using a big language model and some of them used partially pre-trained parameters, which improve their results by 1.5 to 5 F 0 .",
"5 score.",
"Our copy-augmented architecture achieve a 56.42 F 0 .",
"5 score on the CoNLL-2014 dataset and outperforms all the previous architectures even without reranking or pre-training.",
"Combined with denoising auto-encoders and multi-tasks, our model achieve a 61.15 F 0 .",
"5 score on the CoNLL-2014 data set.",
"This result exceeds the previous state-of-the-art system +4.9 F 0 .",
"5 points.",
"In the bottom section of Table 4, we list the results of (Ge et al., 2018).",
"No direct comparison can be made between us, because they used the non-public Cambridge Learner Corpus (CLC) (Nicholls, 2003) and their own collected nonpublic Lang-8 corpus, making their labeled training data set 3.6 times larger than ours.",
"Even so, our results on the CoNLL 2014 test data set and JFLEG test data set are very close to theirs.",
"SMT Rule-Based Hybird refers to (Felice et al., 2014); SMT Classification Hybird refers to (Ro-zovskaya and Roth, 2016); Neural Hybird MT refers to (Ji et al., 2017); CNN + EO refers to (Chollampatt and Ng, 2018) and EO means rerank with edit-operation features; Transformer + MIMs refers to (Junczys-Dowmunt et al., 2018) and MIMs means model indepent methods; NMT SMT Hybrid refers to (Grundkiewicz and Junczys-Dowmunt, 2018); CNN + FB Learning refers to (Ge et al., 2018).",
"In this section, we compare the Transformer archi-tecture's results with and without copying mechanism on the GEC task.",
"As illustrated in Table 5, copy-augmented model increases the F 0 .",
"5 score from 48.07 to 54.67, with a +6.6 absolute increase.",
"Most of the improvements come from the words that are out of the fixed vocabulary, which will be predicted as a UNK word in the base model but will be copied as the word itself in the copy-augmented model.",
"Copying is generally known as good at handling the UNK words.",
"To verify if copying is more than copying UNK words, we do experiments by ignoring all UNK edits.",
"From Table 5, we can see that even ignoring the UNK benefits, the copy-augmented model is still 1.62 F 0 .",
"5 points higher than the baseline model, and most of the benefit comes from the increased recall.",
"From Table 5, we can observe that by partially pretraining the decoder, the F 0 .",
"5 score is improved from 54.67 to 57.21 (+2.54).",
"It is an evident improvment compared to the un-pre-trained ones.",
"However, the denoising auto-encoder improves the single model from 54.67 to 58.8 (+4.13).",
"We can also see that both the precision and recall are improved after pre-training.",
"To further investigate how good the pre-trained parameters are, we show the results of the early stage with and without the denoising auto-encoder's pre-trained parameters in Table 6.",
"The results show, if we finetune the model for 1 epoch with the labeled training data, the pre-trained model beats the un-pretrained one with a big gap (48.89 vs 17.19).",
"Even without finetune, the pre-trained model can get a F 0 .",
"5 score of 31.33.",
"This proves that pre-training gives the models much better initial parameters than the randomly picked ones.",
"We add the sentence-level copying task to encourage the model outputs no edits when we input a correct sentence.",
"To verify this, we create a correct sentence set by sampling 500 sentences from Model Pre.",
"Wikipedia.",
"Also, we generate an error sentence set by sampling 500 sentences from CoNLL-2013 test data set, which is an error-annotated dataset.",
"Then we calculate the average value of the balance factor copy of the two sets.",
"Before we add the sentence-level copying task, the copy is 0.44/0.45 for the correct and error sentence sets.",
"After adding the sentence-level copying task, the value changed to 0.81/0.57.",
"This means that 81% of the final score comes from copying on the correct sentence set, while only 57% on the error sentence set.",
"By adding the sentence-level copying task, models learn to distinguish correct sentences and error sentences.",
"To analyze how copying and generating divide their work.",
"We visualized the copying attention alignment and the encoder-decoder attention alignment in Figure 2.",
"In Figure",
"2(a), copying focus their weights on the next word in good order, while in Figure",
"2(b), generating moves its attention more on the other words, e.g., the nearby words, and the end of the sentence.",
"As explained in (Raganato et al., 2018), this means that the gen-Error Type % Recall Article Or Determiner 14.31% 44.54% Wrong Collocation/Idiom 12.75% 10.38% Spelling, Punctuation, etc. 12.47% 45.66% Preposition 10.38% 49.03% Noun number 9.38% 72.65% Verb Tense 5.41% 28.15% Subject-Verb Agreement 4.93% 61.79% Verb form 4.69% 57.26% Redundancy 4.65% 25.86% Others 20.99% 23.28% Table 7: Recall on Different Error Types.",
"erating part tries to find long dependencies and attend more on global information.",
"By separating the copying work from the generation work, the generation part of the model can focus more on the creative works.",
"Automatic grammatical error correction is a complicated task since there are different kinds of errors and various correction ways.",
"In this section, we analyze our systems' performance on different grammatical error types.",
"(Ng et al., 2014) labeled CoNLL-2014 test set with 28 error types, and we list the recall percentage on the top 9 error types.",
"We summarize the other 19 types in the last line of the table.",
"Our approach recalls 72.65% errors on the Noun number type and 61.79% on the Subject-Besides , we try can to reduce the bad e ect cause by the new technology .",
"Verb Agreement type.",
"However, only 10.38% errors are recalled on the Wrong Colloca-tion/Idiom type.",
"Computers are good at the definite and mechanical errors, but still have a big gap with humans on the error types that are subjective and with cultural characteristics.",
"Early published works in GEC develop specific classifiers for different error types and then use them to build hybrid systems.",
"Later, leveraging the progress of statistical machine translation(SMT) and large-scale error corrected data, GEC systems are further improved treated as a translation problem.",
"SMT systems can remember phrase-based correction pairs, but they are hard to generalize beyond what was seen in training.",
"The CoNLL-14 shared task overview paper (Ng et al., 2014) provides a comparative evaluation of approaches.",
"(Rozovskaya and Roth, 2016) detailed classification and machine translation approaches to grammatical error correction problems, and combined the strengths for both methods.",
"Recently, neural machine translation approaches have been shown to be very powerful.",
"(Yannakoudakis et al., 2017) developed a neural sequence-labeling model for error detection to calculate the probability of each token in a sentence as being correct or incorrect, and then use the error detecting model's result as a feature to re-rank the N best hypotheses.",
"(Ji et al., 2017) proposed a hybrid neural model incorporating both the word and character-level information.",
"(Chollampatt and Ng, 2018) used a multilayer convolutional encoder-decoder neural network and outperforms all prior neural and statistical based systems on this task.",
"(Junczys-Dowmunt et al., 2018) tried deep RNN (Barone et al., 2017) and transformer (Vaswani et al., 2017) encoder-decoder models and got a higher result by using transformer and a set of model-independent methods for neural GEC.",
"The state-of-the-art system on GEC task is achieved by (Ge et al., 2018), which are based on the sequence-to-sequence framework and fluency boost learning and inference mechanism.",
"However, the usage of the non-public CLC corpus (Nicholls, 2003) and self-collected non-public error-corrected sentence pairs from Lang-8 made their training data 3.6 times larger than the others and their results hard to compare.",
"We present a copy-augmented architecture for GEC, by considering the characteristics of this problem.",
"Firstly, we propose an enhanced copy-augmented architecture, which improves the sequence-to-sequence model's ability by directly copying the unchanged words and out-of-vocabulary words from the source input tokens.",
"Secondly, we fully pre-train the copy-augmented architecture using large-scale unlabeled data, leveraging denoising auto-encoders.",
"Thirdly, we introduce two auxiliary tasks for multi-task learning.",
"Finally, we outperform the state-of-the-art automatic grammatical error correction system by a large margin.",
"However, due to the complexity of the GEC problem, there is still a long way to go to make the automatic GEC systems as reliable as humans."
] | [
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"method",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"objective",
"abstain",
"result",
"abstain"
] |
[
"Most current neural machine translation models adopt a monotonic decoding order of either left-to-right or right-to-left.",
"In this work, we propose a novel method that breaks up the limitation of these decoding orders, called Smart-Start decoding.",
"More specifically, our method first predicts a median word.",
"It starts to decode the words on the right side of the median word and then generates words on the left.",
"We evaluate the proposed Smart-Start decoding method on three datasets.",
"Experimental results show that the proposed method can significantly outperform strong baseline models.",
"Neural machine translation (NMT) has made remarkable progress in recent years.",
"There has been much progress in encoder-decoder framework, including recurrent neural models (Wu et al., 2016), convolutional models (Gehring et al., 2017) and self-attention models (Vaswani et al., 2017).",
"Particularly, the Transformer, only relying on self-attention networks, has achieved state-of-the-art performance on different benchmarks.",
"Most encoder-decoder frameworks generate target translation in a completely monotonic order from left to right (L2R) or from right to left (R2L).",
"However, monotonic generation is not always the best translation order for the machine translation task.",
"As shown in Figure 1, (happy) needs to leverage the future context (lively) to make disambiguation of the translation in English sentence, because has two meanings: happy to do something and Le (person name).",
"In this example, the L2R baseline model produced an incorrect translation of Le (person name) due to unseen future context.",
"Source: (cid:7503) (cid:7437)(cid:7609) (cid:7587)(cid:23303) , (cid:13927)(cid:14317) (cid:12070)(cid:14139) (cid:26205)(cid:11575) (cid:11775)(cid:13846) .",
"Ref: Happy to talk with people , Yang Sen has a lively personality Left-to-Right Translation: Le talks with people , Yang Sen is very lively .",
"Translation: Chatting with people , Yang Sen has a lively personality .",
"glad with people chat Yang Sen personality very (cid:28671)(cid:28668)(cid:28681)(cid:28664)(cid:28671)(cid:28684)",
"(a)",
"(b) Smart-Start: Yang Sen has a lively personality .",
"[m] Chatting with people , Figure 1: Example of baseline method",
"There are some related works on non-monotonic text generation (Mehri and Sigal, 2018; Welleck et al., 2019; Gu et al., 2019; Zhou et al., 2019b,a).",
"Inspired by these works, we are extremely interested in considering choosing one proper position to start decoding instead of L2R or R2L order.",
"We propose a novel method called the Smart-Start decoding method.",
"Specifically, our method starts the generation of target words from the right part of the sentence Yang Sen has a lively personality ., followed by the generation of the left part of the sentence Chatting with people ,.",
"The intuition is that humans do not always translate the sentence from the first word to the last word.",
"Instead, humans may translate different parts of the sentence before organizing the whole translation.",
"As shown in Figure 1, our Smart-Start method predicts the word Yang in the median position of the target sentence, together with the following words of the right part of the sentence Yang Sen has a lively personality ..",
"Once our model produces the specific symbol [m] which is designed to indicate the termination of the right part generation, we will start predicting the left part of the sentence Chatting with people ,.",
"Finally, we obtain the final translation from the intermediate translation by solely placing the right part Yang Sen has a lively personality . in front of the left part and removing the additional symbol [m] .",
"We introduce a weighted maximum likelihood algorithm to automatically learn this kind of decoding order by giving weights to translations with different start positions.",
"To verify the effectiveness of our method, we conduct experiments on three benchmarks, including IWSLT14 German-English, WMT14 English-German, and LDC Chinese-English translation tasks.",
"Experimental results show that our method outperforms monotonic and non-monotonic baselines.",
"In conclusion, we propose a simple but effective method, which predicts from the median words to the last position's word followed by the word predictions on the left part of the sentence.",
"In this section, we present the details of the proposed hard and soft Smart-Start methods.",
"Our method first predicts a median word and then predicts the words on the right part, and then generates words on the left.",
"Our method is split into two phases.",
"First, given the source sentence X = ( x 1 , x 2 , . . . , x m ) , we use the model P ( Z k | X ) to predict the intermediate translation Z k starting from the middle position of the sentence, where Z k = ( y n k +1 , . . . , y n , [m] , y 1 , . . . , y n k ) and [m] is the k th word of Z k .",
"Second, we construct the final translation Y from the the intermediate translation Z k .",
"As shown in Figure 2, our method predicts a word y n k +1 , given the source sentence.",
"Then our model predicts the right part of sentence ( y n k +1 , . . . , y n ) at a time.",
"Furthermore, when it predicts the symbol [m] , we start predicting the left part of the sentence ( y 1 , . . . , y n k ) .",
"Then, we obtain the final translation Y from the intermediate translation Z k .",
"Our method is based on the Transformer architecture.",
"Our Smart-Start method is extremely interested in breaking up the limitation of this decoding order.",
"Different from the traditional L2R and R2L (Sennrich et al., 2016a), our Smart-Start method predicts median word y n k +1 over the source sentence.",
"Furthermore, we predict the right part of target sentence ( y n k +1 , . . . , y n ) sequentially which is on the right part of this word.",
"Finally, we generate the rest words ( y 1 , . . . , y n k ) on the left part of the sentence given the source sentence and left part.",
"Formally, we build our Smart-Start neural machine translation model as below: P ( Z k | X ) = P ( y n k +1 | X ) (cid:2) n k +1 <i n P ( y i | X ; y k ,...,y i 1 ) P ( [m] | X ; y n k +1 ,...,y n ) (cid:2) 1 j n k P ( y j | X ; y 1 ,...,y j 1 ,y n k +1 ,...,y n ) (1) where i , j denote the i th and j th words in the target sentence.",
"[m] is the k th word of Z k .",
"Since there is no annotation of initial words to start the decoding, we construct the intermediate sentences with different start positions and then score them with hard or soft Smart-Start methods.",
"Therefore, given the source sentence X of length m and target sentence Y of length n , we can construct n intermediate sentences Z k = ( y n k +1 , . . . , y n , [m] , y 1 , . . . , y n k )( k [1 , n ]) .",
"Because the target sentence length n can be too long, we randomly sample S intermediate sentences from n intermediate sentences to construct the subset SY , where S is the number of sampled start positions.",
"We apply scores calculated by the hard or soft Smart-Start methods to the loss of different intermediate samples to teach model which start position is better.",
"This procedure can be described by the weighted log-likelihood (WML) (Dimitroff et al., 2013) reward function L over the dataset D as below: L = (cid:2) X,Y D (cid:2) Z k S Y w k log P ( Z k | X ) (2) where SY is the subset containing S samples.",
"w k is calculated by the hard or soft Smart-Start methods.",
"For the hard Smart-Start method, we use the median training loss of intermediate samples as threshold to select appropriate samples to update model parameters.",
"We calculate w k by comparing the training loss generated by the current model of each Z k from SY with the threshold as below: w k = L k L med (3) where L k L med equals to 1 if L k L med else 0.",
"L med is the median loss of the sample in SY .",
"For each intermediate sentence Z k SY , the objective of Z k is denoted as L k = log P ( Z k | X ) .",
"The soft Smart-Start method uses BLEU metric to evaluate intermediate samples with different start positions.",
"It calculates BLEU points between the translation Z transk and the reference Z k .",
"Softmax function is used to reweigh the w k as below: w k = Softmax Z k S Y ( BLEU ( Z transk , Z k )) (4) where Z transk is the intermediate translation generated by the current training model P ( Z k | X ) using the teacher forcing method.",
"Z k is the intermediate sentence from SY .",
"In this section, we evaluate our method on three popular benchmarks.",
"IWSLT14 De-En corpus contains 16K training sequence pairs.",
"The valid and test set both contain 7K sentence pairs.",
"LDC Zh-En corpus is from the LDC corpus.",
"The training data contains 1.4M sentence pairs.",
"NIST 2006 is used as the valid set.",
"NIST 2002, 2003, 2005, 2008, and 2012 are used as test sets.",
"WMT14 En-De corpus has 4.5M sentence pairs.",
"The newstest2013 and the newstest2014 are used as valid the test set.",
"All languages are tokenized by Moses (Koehn et al., 2007) and our Chinese tokenizer, and then encoded using byte pair encoding (BPE) (Sennrich et al., 2016b) with 40K merge operations.",
"The evaluation metric is BLEU (Papineni et al., 2002).",
"We conduct experiments on 8 NVIDIA 32G V100 GPUs and set batch size as 1024 tokens.",
"In the training stage, we adopt the Adam optimizer BLEU Number of Sampled Start Positions Figure 3: Results of different values of the number of sampled start positions on IWSLT14 De En test set.",
"( 1 = 0 . 9 , 2 = 0 . 98 ) (Kingma and Ba, 2015) using the inverse sqrt learning rate schedule (Vaswani et al., 2017) with a learning rate of 0.1 and 4000 warming-up steps.",
"We set the number of sampled start positions S = 8 described as Equation",
"2. For the LDC Zh En translation task , we use the Transformer_base setting with the embedding size as 512 and feed-forward network (FFN) size as 2048.",
"For the IWSLT14 De En translation task , we use the Transformer_small setting with embedding size as 512 and FFN size as 1024.",
"The dropout is set as 0.3 and weight decay as 0.0001 to prevent overfitting.",
"For the WMT14 En De translation task , we use the Transformer_big setting with embedding size as 1024 and FFN size as 4096.",
"Following the previos work (Ott et al., 2018), we accumulate the gradient for 16 iterations to simulate a 128-GPU environment.",
"We compare our method with the other baselines, including Transformer (Vaswani et al., 2017), RP Transformer (Shaw et al., 2018), Light-3985",
"Conv/DynamicConv (Wu et al., 2019), and SB-NMT (Zhou et al., 2019a).",
"For the results of IWSLT14 De En in Table 2 and LDC Zh En machine translation tasks in Table 1, our soft method significantly gets an improvement of +0.98/+1.71 BLEU points than a strong Transformer model.",
"For the WMT14 En De task, the results of our model are presented in Table",
"3. Besides, we also compare our method with other self-attention models.",
"The SB-NMT model gets a BLEU points of 29.21 which decodes from L2R and R2L simultaneously and interactively.",
"Our method achieves an improvement of +0.56 BLEU points over the Transformer baseline.",
"Besides, our soft Smart-Start method outperforms the SB-NMT model by +0.80 En De BLEU RP Transformer (Shaw et al., 2018) 29.20 SB-NMT (Zhou et al., 2019a) 29.21 LightConv (Wu et al., 2019) 28.90 DynamicConv (Wu et al., 2019) 29.70 Transformer (our implementation) 29.36 Hard Smart-Start (our method) 29.45 Soft Smart-Start (our method) 30.01 Table 3: Case-sensitive BLEU-4 scores (%) on WMT14 En De translation task.",
"Number of Sampled Start Positions To explore the effect of the number of sampled start positions S described as Equation 2, we conduct experiments on the IWSLT14 De En translation task.",
"Figure 3 shows that our hard and soft Smart-Start methods have gradually improved performance by increasing the value of S .",
"Soft Smart-Start method outperforms the hard method under different settings.",
"The soft method achieves a higher BLEU score when the number of sampled start positions equals 7.",
"The proper interval ( 4 S 12 ) is recommended to use in our method.",
"In conclusion, the 3986 soft Smart-Start method can bring a more positive influence on BLEU scores.",
"Distribution of Start Positions During the inference stage, our model generates intermediate translation Z k , where [m] is in the k th position.",
"We explore the distribution of the positions of symbol [m] .",
"We separately collect all translations, the length of which equals 10, 15, and 20 tokens.",
"For example, in the left picture of Figure 4, we count the positions of [m] in all sentences with a length of 10.",
"Also, the middle picture reports the positions of sentences with a length of 15 and the right picture reports these sentences with a length of 20.",
"Figure 4 shows that other positions in the sentence also occupy a certain proportion.",
"Therefore, the conventional left-to-right decoding order is not always the best decoding order, and starting from other positions is beneficial for translation quality, which verifies our motivation.",
"Linguistic Analysis Based on the Figure 4, we further try making linguistic analysis.",
"Three pictures show that the [m] tends to occur in the 1 th position, where the intermediate translation is Z 1 = ( y n , [m] , y 1 , . . . , y n 1 ) .",
"We observe that y n mostly is the punctuation such as period, question mark, and exclamation mark under this situation.",
"Conjunction and preposition words are also inclined to appear at the beginning of sentences such as or and but, which indicates clauses are easier to be placed at the beginning.",
"It is consistent with our intuition that punctuation marks are most easy to predict at first.",
"Training Time The Transformer baseline costs nearly 0.9 hours and our method costs nearly 1.8 hours (only 2 lower speed) on the IWSLT-2014 De En translation task, where both experiments are conducted on the 8-V100-GPU environment with 1024 max tokens.",
"Our method doesn't require many additional training steps to converge compared with the Transformer baseline.",
"Our method outperforms the Transformer baseline by +0.8 BLEU points.",
"Another factor affecting the training time is the number of sampled start positions.",
"We also investigate the proper value of the number of sampled start positions.",
"In practice, smaller value such as 4 or 6 can also bring significant improvements.",
"Therefore, we choose a smaller value of the sampled start positions and use multiple GPUs to keep the training time in a reasonable range.",
"Neural Machine Translation (NMT) has attracted a lot of attention recently.",
"The architecture of NMT models has evolved quickly so that there are many different models (Sutskever et al., 2014; Bahdanau et al., 2015; Luong et al., 2015; Kalchbrenner et al., 2016; Gehring et al., 2017; Vaswani et al., 2017; He et al., 2018).",
"Asynchronous and synchronous Bidirectional decoding Model (Zhang et al., 2018; Zhou et al., 2019b) exploits the contexts generated in the R2L manner to help the L2R translation.",
"Previous non-monotonic methods (Serdyuk et al., 2018; Zhang et al., 2018; Zhou et al., 2019a,b; Zhang et al., 2019; Welleck et al., 2019) jointly leverage L2R and R2L information.",
"Non-monotonic methods are also widely used in many tasks (Huang et al., 2018; Shu and Nakayama, 2018), such as parsing (Goldberg and Elhadad, 2010), image caption (Mehri and Sigal, 2018), and dependency parsing (Kiperwasser and Goldberg, 2016; Li et al., 2019).",
"Similarly, insertion-based method (Gu et al., 2019; Stern et al., 2019) predicts the next token and its position to be inserted.",
"In this work, we propose a novel method that breaks up the limitation of these decoding orders, called Smart-Start decoding.",
"Our method predicts a median word and then generates the words on the right part.",
"Finally, it generates words on the left.",
"Experimental results show that our Smart-Start method significantly improves the quality of translation.",
"This work is supported by the National Natural Science Foundation of China (Grand Nos. U1636211, 61672081, 61370126), and the Fund of the State Key Laboratory of Software Development Environment(No.SKLSDE-2021ZX)."
] | [
"abstain",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"abstain",
"result",
"method",
"result",
"abstain",
"result",
"result",
"objective",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"result",
"other"
] |
[
"Training data for text classification is often limited in practice, especially for applications with many output classes or involving many related classification problems.",
"This means classifiers must generalize from limited evidence, but the manner and extent of generalization is task dependent.",
"Current practice primarily relies on pre-trained word embeddings to map words unseen in training to similar seen ones.",
"Unfortunately, this squishes many components of meaning into highly restricted capacity.",
"Our alternative begins with sparse pre-trained representations derived from unlabeled parsed corpora; based on the available training data, we select features that offers the relevant generalizations.",
"This produces task-specific semantic vectors; here, we show that a feed-forward network over these vectors is especially effective in low-data scenarios, compared to existing state-of-the-art methods.",
"By further pairing this network with a convolutional neural network, we keep this edge in low data scenarios and remain competitive when using full training sets.",
"Modern neural networks are highly effective for text classification, with convolutional neural networks (CNNs) as the de facto standard for classifiers that represent both hierarchical and ordering information implicitly in a deep network (Kim, 2014).",
"Deep models pre-trained on language model objectives and fine-tuned to available training data have recently smashed benchmark scores on a wide range of text classification problems (Peters et al., 2018; Howard and Ruder, 2018; Devlin et al., 2018).",
"Despite the strong performance of these approaches for large text classification datasets, challenges still arise with small datasets with few, possibly imbalanced, training examples per class.",
"Labels can be obtained cheaply from crowd workers for some languages, but there are a nearly unlimited number of bespoke, challenging text classification problems that crop up in practical settings (Yu et al., 2018).",
"Obtaining representative labeled examples for classification problems with many labels, like taxonomies, is especially challenging.",
"Text classification is a broad but useful term and covers classification based on topic, on sentiment, and even social status.",
"As Systemic Functional Linguists such as Halliday (1985) point out, language carries many kinds of meanings.",
"For example, words such as ambrosial and delish inform us not just of the domain of the text ( food ) and sentiment, but perhaps also of the age of the speaker.",
"Text classification problems differ on the dimensions they distinguish along and thus in the words that help in identifying the class.",
"As Sachan et al. (2018) show, classifiers mostly focus on sub-lexicons; they memorize patterns instead of extending more general knowledge about language to a particular task.",
"When there is low lexical overlap between training and test data, accuracy drops as much as 23.7%.",
"When training data is limited, most meaning-carrying terms are never seen in training, and the sub-lexicons correspondingly poorer.",
"Classifiers must generalize from available training data, possibly exploiting external knowledge, including representations derived from raw texts.",
"For small training sizes, this requires moving beyond sub-lexicons.",
"Existing strategies for low data scenarios include treating labels as informative (Song and Roth, 2014; Chang et al., 2008) and using label-specific lexicons (Eisenstein, 2017), but neither is competitive when labeled data is plentiful.",
"Instead, we seek classifiers that adapt to both low and high data scenarios.",
"People exploit parallelism among examples for generalization (Hofstadter, 2001; Hofstadter and 1.1 Kampuchea says rice crop in 1986 increased . . . 2.1 Gamma ray Bursters.",
"Sander, 2013).",
"Consider Table 1, which displays five examples from a single class for two tasks.",
"Bolded terms for each task are clearly related, and to a person, suggest abstractions that help relate other terms to the task.",
"This helps with disambiguation: that the word Pluto is the planet and not Disney's character is inferred not just by within-example evidence (e.g. mission ) but also by cross-example presence of Mars and astronauts .",
"Cross-example analysis also reveals the amount of generalization warranted.",
"For a word associated with a label, word embeddings give us neighbors, which often are associated with that label.",
"What they do not tell us is the extent this associated-with-same-label phenomenon holds; that depends on the granularity of the classes.",
"Cross-example analysis is required to determine how neighbors at various distances are distributed among labels in the training data.",
"This should allow us to include barley and peaches as evidence for a class like Agriculture but only barley for Grains .",
"Most existing systems ignore cross-example parallelism and thus miss out on a strong classification signal.",
"We introduce a flexible method for controlled generalization that selects syntacto-semantic features from sparse representations constructed by Category Builder (Mahabal et al., 2018).",
"Starting with sparse representations of words and their contexts, a tuning algorithm selects features with the relevant kinds and appropriate amounts of generalization, making use of parallelism among examples.",
"This produces task-specific dense embeddings for new texts that can be easily incorporated into classifiers.",
"Our simplest model, CBC (Category Builder Classifier), is a feed-forward network that uses only CB embeddings to represent a document.",
"For small amounts of training data, this simple model dramatically outperforms both CNNs and BERT (Devlin et al., 2018).",
"When more data is available, both CNNs and BERT exploit their greater capacity and broad pre-training to beat CBC.",
"We thus create CBCNN, a simple combination of CBC and dataset k train/test/dev size range 20NG 20 15076/1885/1885 513/810 reuters 8 6888/862/864 128/3128 spam 2 3344/1115/1115 436/2908 attack 2 10000/2000/2000 1126/8874 Table 2: Data sizes, and the disparity between the smallest and the largest class in training data.",
"the CNN that concatenates their pre-prediction layers and adds an additional layer.",
"By training this model with a scheduled block dropout (Zhang et al., 2018) that gradually introduces the CBC sub-network, we obtain the benefits of CBC in low data scenarios while obtaining parity with CNNs when plentiful data is available.",
"BERT still dominates when all data is available, suggesting that further combinations or ensembles are likely to improve matters further.",
"Our primary goal is to study classifier performance with limited data.",
"To that end, we obtain learning curves on four standard text classification datasets (Table 2) based on evaluating predictions on the full test sets.",
"At each sample size, we produce multiple samples and run several text classification methods multiple times, measuring the following: Macro-F1 score .",
"Macro-F1 measures support for all classes better than accuracy, especially with imbalanced class distributions.",
"Recall for the rarest class .",
"Many measures like F1 and accuracy often mask performance on infrequent but high impact classes, such as detecting toxicity (Waseem and Hovy, 2016)) Degenerate solutions .",
"Complex classifiers with millions of parameters sometimes produce degenerate classifiers when provided very few training examples; as a result, they can skip some output classes entirely.",
"The datasets we chose for evaluation, while all multi-class, form a diverse set in terms of the number of classes and kinds of cohesion among examples in a single class.",
"The former clearly affects training data needs, while the latter informs appropriate generalization.",
"20 Newsgroups 20Newsgroups (20NG) contains documents from 20 different newsgroups with about 1000 messages from each.",
"We randomly split the documents into an 80-10-10 train-dev-test split.",
"The classes are evenly balanced.",
"Reuters R8.",
"The Reuters21578 dataset contains Reuters Newswire articles.",
"Following several authors (Pinto and Rosso, 2007; Zhao et al., 2018, for example), we use only the eight most frequent labels.",
"We begin with a given 80/10/10 split.",
"Given that we focused on single-label classification, we removed items associated with two or more of the top eight labels (about 3% of exam-ples).",
"Classes are highly imbalanced.",
"Of the 6888 training examples, 3128 are labeled earn , while only 228 examples are of class interest and only 128 are ship .",
"Wiki Comments Personal Attack.",
"The Wikipedia Detox project collected over 100k discussion comments from English Wikipedia and annotated them for presence of personal attack (Wulczyn et al., 2017).",
"We randomly select 10k, 2k, and 2k items as train/dev/test.",
"11% are attacks.",
"Spam The SMS Spam Collection v.1 has SMS labeled messages that were collected for mobile phone spam research (Hidalgo et al., 2012).",
"Each of the 5574 messages is labeled as spam or ham .",
"In this section, we explicate the source of features, discuss the properties relevant to generalization by focusing on one feature in isolation, and present the overall feature selection method.",
"The overview in Figure 1 displays the order of operations: identify generalizing features based on the training data (done once), and for each document to be classified, convert it to a vector, where each entry corresponds to a generalizing feature.",
"Feature Prototypical Supports allergen as X pollen, dander, dust mites, soy, perfumes, milk, smoke, mildew liter of X water, petrol, milk, fluid, beer serve with X rice, sauce, salad, fries, milk flour mixture butter mixture, rubber spat-ula, dredged, creamed, medium speed, sifted, milk replacer colostrum, calves, whole milk, inulin, pasteurized, weaning Table 3: A few features (among hundreds) evoked by milk , with top n-grams in their support.",
"Our source of generalizing features is Category Builder (CB) (Mahabal et al., 2018), which constructs a sparse vector space derived from parsed corpora (Erk, 2012).",
"CB constructs features for n-grams (not just unigrams) that are the union of syntactic context features FS and co-occurrence features FC .",
"Consider milk : an FS feature is gallon prep of pobj X and FC features include goat , cow , drink , spill , etc.",
"Table 3 provides other examples of features evoked by milk , along with other n-grams which evoke them.",
"For present purposes, we can treat CB as a matrix with n-grams as rows and features in FC and FS as columns.",
"The entries of CB are weights that give the association strength between an n-gram and a feature; these weights are an asymmetric variant of pointwise-mutual information (Mahabal et al., 2018).",
"Which features generalize well depends on the granularity of classes in a task.",
"Useful features for generalization strike a balance between breadth and specificity .",
"A feature that is evoked by many words provides generalization potential because the feature's overall support is likely to be distributed across both the training data and test data.",
"However, this risks over-generalization, so a feature should also be sufficiently specific to be a precise indicator of a particular class.",
"A key aspect of choosing good features based on a limited training set is to resolve referential ambiguity (Quine, 1960; Wittgenstein, 1953) to the extent supported by the observed uses of the words.",
"To illustrate, consider the grains class in the Reuters Newswire dataset.",
"The word wheat can evoke the features at different levels of the taxonimical hierarchy: triticum (the wheat genus), poaceae (grass family), spermatophyta (seeded plants), plantae (plant kingdom), and living thing .",
"The first among these has low breadth and is evoked only by wheat .",
"The second is far more useful: specific and yet with a large support, including maize and sorghum .",
"The final feature is too broad.",
"In general, the most useful features for generalization are the intermediate features, also known as Basic Level Categories (Rosch et al., 1976).",
"Another important aspect of generalization comprises the facets of meaning.",
"For example, the word milk has facets relating it to other liquids (e.g., oil , kerosene ), foods ( cheese , pasta ), white things ( ivory ), animal products ( honey , eggs ), and allergens ( pollen , ragweed ).",
"Along these axes, generalization can be more or less conservative; e.g., both cheese and tears of a phoenix are animal products, but the former is semantically closer to milk .",
"Looking back at Table 3, the utility of individual features evoked by milk for tasks involving related topics varies; e.g., does the classification problem pertain to food or animal husbandary ?",
"A single generalizing feature is associated with many n-grams, each of which evokes it (with different strengths).",
"Table 4 displays n-grams that evoke the feature co-occurrence with Saturn V , as discovered by unsupervised analysis of a large corpus of web pages.",
"The table further displays the interaction of this unsupervised feature with super-Training Testing n-gram wt C C C C apollo 8.93 1 1 5 1 launch pad 8.52 0 0 1 0 rocket 7.32 3 1 8 0 rockets 7.27 2 0 4 1 liftoff 6.92 1 0 1 0 space shuttle 6.27 0 0 4 0 space station 6.19 0 0 4 3 payload 4.23 0 0 5 0 shuttle 2.57 2 0 15 3 kennedy 2.30 1 0 1 4 capacity 1.95 0 1 0 4 Table 4: Some evoking n-grams associated with the CB feature co-occurrence with Saturn V and pivoting on the class sci.space .",
"vised data, specifically, with the label sci.space in 20NG, when using a size 320 training sample that contain only 18 sci.space documents.",
"Counts for some evoking terms are shown within and outside this class, for both training and test data.",
"Notation.",
"We introduce some notation and explicate with Table 4.",
"We have a labelled collection of training documents T .",
"T l is the training examples with label l .",
"The positive support set l ( f, t ) is the set of n-grams in T l evoking feature f with weight greater than t , here, { apollo, rocket, . . . , shuttle } for t =2 .",
"3 .",
"The positive support size l ( f, 2 . 3)= | l ( f, 2 . 3) | =5 and the positive support weight l ( f, 2 . 3) is the sum of counts of supports of f in l with weight greater than 2 .",
"3 , here 1+3+2+1+2=9 .",
"Analogously the negative support weight l ( f, 2 . 3) is the sum of counts from outside T l ; here, 1+1=2 since { apollo, rocket } were seen outside sci.space once each.",
"What makes this feature ( words that have co-occurred with Saturn V ) well suited for sci.space is that many evoking words here are associated with the label sci.space .",
"What confirms the bene-fit is the limited amount of negative support.",
"Crucially, the bolded terms do not occur in the training data, but do occur in the test data.",
"(We stress that we include these counts here only for this example; our methods do not access the test data for feature selection in our experiments.)",
"That said, we must limit potential noise from such features, so we seek thresholded features (cid:104) f, t (cid:105) , as suggested by the dashed line in Table 4.",
"Items below this line are prevented from evoking f .",
"We choose the highest threshold such that dropped negative support exceeds dropped positive support.",
"This is determined simply by go-ing through all the supports of a feature, sorted by ascending weight, and checking the positive and negative support of all features with smaller versus greater weight given the class.",
"The weight of the feature at this cusp is used as the threshold of the feature for this particular class.",
"This (cid:104) f, t (cid:105) pair then forms one element of the CB-vector used as a feature for classification.",
"Given the labeled subsets of T and this feature thresholding algorithm, we produce a vectorizer that embeds documents.",
"The values of a docu-ment's embedding are not directly associated with any class.",
"Such association happens during training.",
"Although sci.space accounts for just 6% of the documents, 75% of documents that contain an n-gram evoking the Saturn V feature are in that class.",
"A classifier trained with such an embedding should learn to associate this feature with that class, and an unseen document containing the unseen-in-training term space shuttle stands a good chance to be classified as sci.space .",
"The feature displayed in Table 4 is useful for the 20NG problem because it contains a class related to space travel.",
"This feature has no utility in spam classification or in sentiment classification, since, for those problems, seeing rocket in one class does not make it more likely that a document containing space station belongs to that same class.",
"This example illustrates why a generalization strategy must incorporate both what we can learn from unsupervised data as well as (limited) labeled training data.",
"We now describe how we use the training data T to produce a set of features-and-threshold pairs; each chosen feature-with-threshold (cid:104) f, t (cid:105) will be one component in the CB-vectors provide to classifiers.",
"Calculation of features for a single class is a three step process:",
"(i) for each feature f , choose a threshold t (as discussed above)",
"(ii) score the resultant (cid:104) f, t (cid:105)",
"(iii) filter useless or redundant (cid:104) f, t (cid:105) .",
"Given a label l and a feature f , we implicitly produce a table of supporting n-grams and their distribution within and outside l (e.g. as in Table 4).",
"This involves computing the precision of a feature at a given threshold value, comparing it to the class probability and deciding whether to keep it.",
"Recall the positive support l ( f, t ) and negative support f ( f, t ) defined previously.",
"The precision of f at threshold t is l ( f, t ) = l ( f,t ) l ( f,t )+ l ( f,t ) , (this is 911 in the example of Table 4, with t =2 . 3 ).",
"However, since we are often dealing with low counts, we smooth the precision toward the empirical class probability of l , p ( l ) = | T l | | T l | + | T l | .",
"l ( f, t ) = l ( f, t ) + p ( l ) l ( f, t ) + l ( f, t ) + The score S l ( f, t ) is reduction in error rate of the smoothed precision relative to the base rate: S l ( f, t ) = l ( f, t ) p ( l ) 1 p ( l ) We retain a thresholded feature if it is generalizing ( l ( (cid:104) f, t (cid:105) ) > 1 ), has better-than-chance precision (we use S l ( (cid:104) f, t (cid:105) ) > 0 .",
"01 ), and is not redundant (i.e., its positive support has one or more terms not present in positive supports of higher scoring fea-tures).",
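As a sketch of the scoring step under the notation above (the smoothing constant kappa and its default value are our assumptions):

```python
def feature_score(pos_weight, neg_weight, class_prior, kappa=1.0):
    """Reduction in error rate of the smoothed precision over the class
    prior. kappa is our stand-in for the unspecified smoothing constant."""
    precision = (pos_weight + kappa * class_prior) / (
        pos_weight + neg_weight + kappa)
    return (precision - class_prior) / (1.0 - class_prior)

# Saturn V feature at t = 2.3: positive weight 9, negative weight 2, and
# a sci.space prior of roughly 0.06 in the size-320 sample.
print(feature_score(9, 2, 0.06))  # ~0.74, far above the 0.01 cutoff
```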
"Each vector dimension corresponds to some (cid:104) f, t (cid:105) .",
"The evocation level of f is the sum of its evocation for the n-grams in the document d , e d ( f ) = (cid:80) w d CB ( w, f ) .",
"The vector entry is e d ( f ) t when e d ( f ) > = t , and is clipped to 0 otherwise.",
"As benchmarks, we use a standard CNN with pretrained embeddings (Kim, 2014) and BERT (De-vlin et al., 2018).",
"1 For CNN, we used 300 filters each of sizes 2, 3, 4, 5, and 6, fed to a hidden layer of 200 nodes after max pooling.",
"Pretrained vectors provided by Google were used.",
"2 For BERT, we used the run classifier script from GitHub and used the BERT-large-uncased model.",
"We use the pre-computed vocab-to-context association matrix provided as part of the open 1 https://github.com/google-research/bert 2 https://code.google.com/archive/p/word2vec source Categorial Builder repository.",
"3 This contains 194,051 co-occurrence features ( FC ) and 954,276 syntactic features ( FS ).",
"CBC model .",
"The CB-vector containing the derived features from the training dataset and Category Builder can be exploited in various ways with existing techniques.",
"The simplest of these is to use a feed-forward network over the CB-vector .",
"This model does not encode the tokens or any word order informationinformation which is highly informative in many classification tasks.",
"CBCNN model .",
"Inspired by the combination of standard features and deep networks in Wide-and-Deep models (Cheng et al., 2016), we pair the CBC model with a standard CNN, concatenating their pre-prediction layers, and add an additional layer before the softmax prediction.",
"In early experiments, this combined model performed worse than the CNN on larger data sizes, as the network above the CB-vector effectively stole useful signal from the CNN.",
"To ensure that the more complex CNN side of the network had a chance to train, we employed a block dropout strategy (Zhang et al., 2018) with a schedule.",
"During training, with some probability, all weights in the CB-vector are set to 0.5.",
"The probability of hiding decreases from 1 to 0 using a parameterized hyperbolic tangent function p k = 2 e Cx +1 .",
"Lower values of C lead to slower convergence to zero.",
"The effect is that the CBC sub-network is introduced gradually, allowing the CNN to train while eventually taking advantage of the additional information.",
"The natural strategy of replacing with 0s (in-stead of 0.5 as above) was tried and also works, but less well, since the network has no way to distinguish between genuine absence of feature and hiding.",
"In CB-vector, non-zero values are at least 1, and thus 0.5 does not suffer from this problem.",
"Our primary goal is to improve generalization for low-data scenarios, but we also want our methods to remain competitive on full data.",
"We compare different models across learning curves of increasing the training set sizes.",
"We use training data sizes of 40 , 80 , . . . , 5120 as well as the entire available training data.",
"For each training size, we produce three independent samples 3 https://github.com/google/categorybuilder by uniformly sampling the training data and training each model three times at each size.",
"The final value reported is the average of all nine runs.",
"All models are implemented in Tensorflow.",
"Batch sizes are between 5 and 64 depending on training size.",
"Training stops after there is no macro-F1 improvement on development data for 1000 steps.",
"For evaluation, we focus primarily on macro-F1 and recall of the rarest class.",
"The recall on the rarest class is especially important for imbalanced classification problems.",
"For such problems, a model can obtain high accuracy by strongly preferring the majority class, but we seek models that effectively identify minority class labels.",
"(This is especially important for active learning scenarios, where we expect the CB-vectors to help with in future.) 5.2 Results: low data scenarios Figure 2 shows learning curves giving macro-F1 scores and rarest class recall for all four datasets.",
"When very limited training data is available, the simple CBC model generally outperforms the CNN and BERT, except for the Spam dataset.",
"The more powerful models eventually surpass CBC; however, the CBCNN model provides consistent strong performance at all dataset sizes by combining the generalization of CBC with the general ef-ficacy of CNNs.",
"Importantly, CBCNN provides massive error reductions with low data for 20NG and R8 (tasks with many labels).",
"Table 5's left half gives results for all models when using only 320 training examples.",
"For 20NG, CNN's macro-F1 is just 43.9, whereas CBC and CBCNN achieve 61.7 and 62.4the same as CNN performance with four times as much data.",
"These models outperform CNN on R8 as well, reaching 83.7 vs CNN's 74.1, and also on the Wiki-attack dataset, achieving 80.6 vs CNNs 74.0.",
"BERT fails to produce a solution for the two datasets with > 2 labels, but does produce the best result for Spamindicating an opportunity to more fully explore BERT's parameter settings for low data scenarios and to fruitfully combine CBC with BERT.",
"Rarest class recall is generally much better with less data when exploiting CB-features.",
"For example, with 320 training examples for R8, CNNs reach 36.2 whereas CBCNN scores 76.2.",
"Prediction quality with few training examples (especially getting good balance across all labels) also inter-40 80 160 320 640 1280 2560 5120 15076 0 20 40 60 80 100 M a c r o -F 1 20NG CNN BERT CBC CBCNN 40 80 160 320 640 1280 2560 5120 15076 0 20 40 60 80 100 R a r e s t R e c a ll 20NG 40 80 160 320 640 1280 2560 6888 0 20 40 60 80 100 M a c r o -F 1 R8 40 80 160 320 640 1280 2560 6888 0 20 40 60 80 100 R a r e s t R e c a ll R8 40 80 160 320 640 1280 3344 0 20 40 60 80 100 M a c r o -F 1 Spam 40 80 160 320 640 1280 3344 0 20 40 60 80 100 R a r e s t R e c a ll Spam 40 80 160 320 640 1280 2560 5120 10000 Training Size 0 20 40 60 80 100 M a c r o -F 1 Attack 40 80 160 320 640 1280 2560 5120 10000 Training Size 0 20 40 60 80 100 R a r e s t R e c a ll Attack Figure 2: Left: F1 score by training size for 20NG, Reuters, SMS Spam, and Wiki-attack.",
"acts with other strategies for dealing with limited resources, such as active learning.",
"For example, Baldridge and Osborne (2008) obtained stronger data utilization ratios with better base models and uncertainty sampling for Reuters text classification: better models pick better examples for annotation and thus require fewer new labeled data points to achieve a given level of performance.",
"Importantly, the CBC and CBCNN models take far less data to produce non-degenerate models (defined as a model which produces all output classes as predictions).",
"CNN and BERT have a large number of parameters, and using these powerful tools with small training sets produces unstable results.",
"Table 6 gives the minimum training set sizes at which each model produces at least one non-degenerate model.",
"While it might be possible to ameliorate the instability of CNN and BERT with a wider parameter search and other strategies, nothing special needs to be done for CBC.",
"It is likely that an approach which adaptively selects CBC or CBCNN and BERT would obtain the strongest result across all training set sizes.",
"For each dataset, among the 100 best features chosen (for training size 640), the breakdown of domain features ( FC ) versus type features ( FS ) is revealing.",
"As expected, domain features are more important in a topical task such as 20NG (71% are FC features), while the opposite is true for Spam (19%) and a toxicity dataset like Wiki Attack (23%).",
"Reuters shows a fairly even balance between the two types of features (41%): it is useful for R8 to be topically coherent and also to hone in on fairly narrow groups of words that collectively cover a Basic Level Category.",
"Table 5 provides macro-F1 scores for all models when given all available training data.",
"The CBC model performs well, but its (intentional) ignorance of the actual tokens in a document takes a toll when more labeled documents are available.",
"The CNN benchmark, which exploits both word order and the tokens themselves, is a strong performer.",
"The CBCNN model effectively keeps pace with the CNNimproving on 20NG and R8, though slipping on Wiki-Attack.",
"BERT simply crushes all other models when there is suffi-cient training data, showing the impact of structured pre-training and consistent with performance across a wide range of tasks in Devlin et al. (2018).",
"We demonstrate an effective method for exploiting syntactically derived features from large external corpora and selecting the most useful of those given a small labeled text classification corpus.",
"We show how to do this with the map provided by Category Builder n-grams to features, but other sources of well generalizing features have been exploited for text classification.",
"These include topic models (Blei et al., 2003), ontologies such as WordNet (Bloehdorn and Hotho, 2004) and Wikipedia Category structure (Gabrilovich and Markovitch, 2009).",
"It may be possible to use these other sources exactly as we use CB.",
"Some of these sources have been manually curated, which makes them high quality but limits the size and facets.",
"We have not yet explored their use because CB features seem to cover many of these sources' strengthsfor example, FC features are like topics, and FS features like nodes in ontologies.",
"Nonetheless, a combination may add value.",
"Our focus is on data scarce scenarios.",
"However, it would be ideal to derive utility at both the small and large labeled data sizes.",
"This will likely require models that can generalize with contextual features while also exploiting implicit hierarchical organization and order of the texts, e.g. as done by CNNs and BERT.",
"The CBCNN model is one effective way to do this and we expect there could be similar benefits from combining CBC with BERT.",
"Furthermore, approaches like AutoML (Zoph and Le, 2017) would likely be effective for exploring the design space of network architectures for representing and exploiting the information inherent in both signals.",
"Finally, although we focus on multi-class problems hereeach example belongs to a single classthe general approach of selecting features should work for multi-label problems.",
"Our confi-dence in this (unevaluated) claim stems from the observation that we select features one class at a time, treating that class and its complement as a binary classification problem.",
"We would like to thank our anonymous reviewers and the Google AI Language team, especially Rahul Gupta, Tania Bedrax-Weiss and Emily Pitler, for the insightful comments that contributed to this paper."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other"
] |
[
"Yao-Hung Hubert Tsai, Ruslan Salakhutdinov, Louis-Philippe Morency School of Computer Science, Carnegie Mellon University { pliang,yaochonl,yaohungt,rsalakhu,morency } @cs.cmu.edu",
"Abstract Human language is a rich multimodal signal consisting of spoken words, facial expressions, body gestures, and vocal intonations.",
"Learning representations for these spoken utterances is a complex research problem due to the presence of multiple heterogeneous sources of information.",
"Recent advances in multimodal learning have followed the general trend of building more complex models that utilize various attention, memory and recurrent components.",
"In this paper, we propose two simple but strong baselines to learn embeddings of multimodal utterances.",
"The first baseline assumes a conditional factorization of the utterance into unimodal factors.",
"Each unimodal factor is modeled using the simple form of a likelihood function obtained via a linear transformation of the embedding.",
"We show that the optimal embedding can be derived in closed form by taking a weighted average of the unimodal features.",
"In order to capture richer representations, our second baseline extends the first by factorizing into unimodal, bimodal, and trimodal factors, while retaining simplicity and efficiency during learning and inference.",
"From a set of experiments across two tasks, we show strong performance on both supervised and semi-supervised multimodal prediction, as well as significant (10 times) speedups over neural models during inference.",
"Overall, we believe that our strong baseline models offer new benchmarking options for future research in multimodal learning.",
"Human language is a rich multimodal signal consisting of spoken words, facial expressions, body gestures, and vocal intonations (Streeck and Knapp, 1992).",
"At the heart of many multimodal modeling tasks lies the challenge of learning rich representations of spoken utterances from multiple modalities (Papo et al., 2014).",
"However, learning repre-* authors contributed equally sentations for these spoken utterances is a complex research problem due to the presence of multiple heterogeneous sources of information (Baltrusaitis et al., 2017).",
"This challenging yet crucial research area has real-world applications in robotics (Mon-talvo et al., 2017; Noda et al., 2014), dialogue systems (Johnston et al., 2002; Rudnicky, 2005), intelligent tutoring systems (Mao and Li, 2012; Banda and Robinson, 2011; Pham and Wang, 2018), and healthcare diagnosis (Wentzel and van der Geest, 2016; Lisetti et al., 2003; Sonntag, 2017).",
"Recent progress on multimodal representation learning has investigated various neural models that utilize one or more of attention, memory and recurrent components (Yang et al., 2017; Liang et al., 2018).",
"There has also been a general trend of building more complicated models for improved performance.",
"In this paper, we propose two simple but strong baselines to learn embeddings of multimodal utterances.",
"The first baseline assumes a factorization of the utterance into unimodal factors conditioned on the joint embedding.",
"Each unimodal factor is modeled using the simple form of a likelihood function obtained via a linear transformation of the utterance embedding.",
"We derive a coordinate-ascent style algorithm (Wright, 2015) to learn the optimal multimodal embeddings under our model.",
"We show that, under some assumptions, maximum likelihood estimation for the utterance embedding can be derived in closed form and is equivalent to computing a weighted average of the language, visual and acoustic features.",
"Only a few linear transformation parameters need to be learned.",
"In order to capture bimodal and trimodal representations, our second baseline extends the first one by assuming a factorization into unimodal, bimodal, and trimodal factors (Zadeh et al., 2017).",
"To summarize, our simple baselines 1) consist primarily of linear functions, 2) have few parameters, and 3) can be approximately solved in a closed form solution.",
"As a result, they demonstrate simplicity and efficiency during learning and inference.",
"We perform a set of experiments across two tasks and datasets spanning multimodal personality traits recognition (Park et al., 2014) and multimodal sentiment analysis (Zadeh et al., 2016).",
"Our proposed baseline models 1) achieve competitive performance on supervised multimodal learning, 2) improve upon classical deep autoencoders for semi-supervised multimodal learning, and 3) are up to 10 times faster during inference.",
"Overall, we believe that our baseline models offer new benchmarks for future multimodal research.",
"We provide a review of sentence embeddings , multimodal utterance embeddings , and strong baselines",
"Sentence embeddings are crucial for down-stream tasks such as document classification, opinion analysis, and machine translation.",
"With the advent of deep neural networks, multiple network designs such as Recurrent Neural Networks (RNNs) (Rumelhart et al., 1986), Long-Short Term Memory networks (LSTMs) (Hochre-iter and Schmidhuber, 1997), Temporal Convolutional Networks (Bai et al., 2018), and the Transformer (Vaswani et al., 2017) have been proposed and achieve superior performance.",
"However, more training data is required for larger models (Pe-ters et al., 2018).",
"In light of this challenge, researchers have started to leverage unsupervised training objectives to learn sentence embedding which showed state-of-the-art performance across multiple tasks (Devlin et al., 2018).",
"In our paper, we go beyond unimodal language-based sentence embeddings and consider multimodal spoken utterances where additional information from the nonverbal behaviors is crucial to infer speaker intent.",
"Learning multimodal utterance embeddings brings a new level of complexity as it requires modeling both intra-modal and inter-modal interactions (Liang et al., 2018).",
"Previous approaches have explored variants of graphical models and neural networks for multimodal data.",
"RNNs (Elman, 1990; Jain and Medsker, 1999), LSTMs (Hochreiter and Schmidhuber, 1997), and convolutional neural networks (Krizhevsky et al., 2012) have been extended for multimodal settings (Rajagopalan et al., 2016; Lee et al., 2018).",
"Experiments on more advanced networks suggested that encouraging correlation between modalities (Yang et al., 2017), enforcing disentanglement on multimodal representations (Tsai et al., 2018), and using attention to weight modalities (Gulrajani et al., 2017) led to better performing multimodal representations.",
"In our paper, we present a new perspective on learning multimodal utterance embeddings by assuming a conditional factorization over the language, visual and acoustic features.",
"Our simple but strong baseline models offer an alternative approach that is extremely fast and competitive on both supervised and semi-supervised prediction tasks.",
"A recent trend in NLP research has been geared towards building simple but strong baselines (Arora et al., 2017; Shen et al., 2018; Wieting and Kiela, 2019; Denkowski and Neubig, 2017).",
"The effectiveness of these baselines indicate that complicated network components are not always required.",
"For example, Arora et al. (2017) constructed sentence embeddings from weighted combinations of word embeddings which requires no trainable parameters yet generalizes well to down-stream tasks.",
"Shen et al. (2018) proposed parameter-free pooling operations on word embeddings for document classification, text sequence matching, and text tagging.",
"Wieting and Kiela (2019) discovered that random sentence encoders achieve competitive performance as compared to larger models that involve expensive training and tuning.",
"Denkowski and Neubig (2017) emphasized the importance of choosing a basic neural machine translation model and carefully reporting the relative gains achieved by the proposed techniques.",
"Authors in other domains have also highlighted the importance of developing strong baselines (Lakshminarayanan et al., 2017; Sharif Razavian et al., 2014).",
"To the best of our knowledge, our paper is the first to propose and evaluate strong, non-neural baselines for multimodal utterance embeddings.",
"Suppose we are given video data where each utterance segment is denoted as s .",
"Each segment contains individual words w in a sequence w , visual features v in a sequence v , and acoustic features a in a sequence a .",
"We aim to learn a representation It doesn't give any insight or help Language: Visual: Acoustic: Gaussian likelihood Gaussian likelihood Log-linear likelihood = = = MultimodalUtterance MultimodalUtteranceEmbedding PositionalEncodings PositionalEncodings Figure 1: Our baseline model assumes a factorization of the multimodal utterance into unimodal factors conditioned on the joint utterance embedding.",
"Our model is related to the work done by Arora et al. (2016) and Arora et al. (2017).",
"In the following, we first provide a brief review of their method.",
"Given a sentence, Arora et al. (2016) aims to learn a sentence embedding c s .",
"They do so by assuming that the probability of observing a word w t at time t is given by a log-linear word production model (Mnih and Hinton, 2007) with respect to c s : P [ w t c s ] = exp ( v w t , c s ) Z c s , (1) where c s is the sentence embedding (context), v w t represents the word vector associated with word w t and Z c s = w V exp ( v w , c s ) is a normalizing constant over all words in the vocabulary.",
"Given this posterior probability, the desired sentence embedding c s can be obtained by maximizing Equation (1) with respect to c s .",
"Under some assumptions on c s , this maximization yields a closed-form solution which provides an efficient learning algorithm for sentence embeddings.",
"Arora et al. (2017) further extends this model by introducing a smoothing term to account for the production of frequent stop words or out of context words independent of the discourse vector.",
"Given estimated unigram probabilities p ( w ) , the probability of a word at time t is given by P [ w t c s ] = p ( w t ) + ( 1 ) exp ( v w t , c s ) Z c s .",
"(2) Under this model with the additional hyperparameter , we can still obtain a closed-form solution for the optimal c s .",
"In this subsection, we outline our method for learning representations of multimodal utterances.",
"An overview of our proposed baseline model is shown in Figure",
"1. Our method begins by assuming a factorization of the multimodal utterance into unimodal factors conditioned on the joint utterance embedding.",
"Next, each unimodal factor is modeled using the simple form of a likelihood function obtained via a linear transformation of the utterance embedding.",
"Finally, we incorporate positional encodings to represent temporal information in the features.",
"We first present the details of our proposed baseline before deriving a coordinate ascent style optimization algorithm to learn utterance embeddings in our model.",
"Unimodal Factorization: We use m s to represent the multimodal utterance embedding.",
"To begin, we simplify the composition of m s by assuming that the segment s can be conditionally factorized into words ( w ), visual features ( v ), and acoustic features ( a ).",
"Each factor is also associated with a temperature hyperparameter ( w , v , a ) that represents the contribution of each factor towards the multimodal utterance.",
"The likelihood of a segment s given the embedding m s is therefore P [ s m s ] = P [ w m s ] w ( P [ v m s ]) v P [ a m s ] a = w w P [ w m s ] w v v P [ v m s ] v a a P [ a m s ] a .",
"Choice of Likelihood Functions: As suggested by Arora et al. (2017), given m s , we model the probability of a word w using Equation (2).",
"In order to analytically solve for m s , a lemma is introduced by Arora et al. (2016, 2017) which states that the partition function Z m s is concentrated around some constant Z (for all m s ).",
"This lemma is also known as the self-normalizing phenomenon of log-linear models (Andreas and Klein, 2015; Andreas et al., 2015).",
"We use the same assumption and treat Z m s t Z for all m s .",
"Unlike discrete text tokens, the visual features are continuous.",
"We assume that the visual features are generated from an isotropic Gaussian distribution.",
"In section 5.1, we visually analyze the distribution of the features for real world datasets and show that these likelihood modeling assumptions are indeed justified.",
"The Gaussian distribution is parametrized by simple linear transformations W v , W v R v m s and b v , b v R v : v m s N ( v , 2 v ) , (4) v = W v m s + b v , (5) v = diag ( exp ( W v m s + b v )) .",
"(6) Similarly, we also assume that the continuous acoustic features are generated from a different isotropic Gaussian distribution parametrized as: a m s N ( a , 2 a ) , (7) a = W a m s + b a , (8) a = diag ( exp ( W a m s + b a )) .",
"(9) Positional Encodings: Finally, we incorporate positional encodings (Vaswani et al., 2017) into the features to represent temporal information.",
"We use d -dimensional positional encodings with entries: P E pos, 2 i = sin ( pos / 10000 2 i / d ) , (10) P E pos, 2 i + 1 = cos ( pos / 10000 2 i / d ) .",
"where pos is the position (time step) and i [ 1 , d ] indexes the dimension of the positional encodings.",
"We call this resulting model Multimodal Baseline 1 ( MMB1 ).",
"We define our objective function by the log-likelihood of the observed multimodal utterance s .",
"The maximum likelihood estimator of the utterance embedding m s and the linear transformation parameters W and b can then be obtained by maximizing this objective L( m s , W , b ; s ) = log P [ s m s ; W , b ] , (12) where we use W and b to denote all linear transformation parameters.",
"Coordinate Ascent Style Algorithm: Since the objective (12) is not jointly convex in m s , W and b , we optimize by alternating between: 1) solving for m s given the parameters W and b at the current iterate, and 2) given m s , updating W and b using a gradient-based algorithm.",
"This resembles the coordinate ascent optimization algorithm which maximizes the objective according to each coordinate separately (Tseng, 2001; Wright, 2015).",
"Algorithm 1 presents our method for learning utterance embeddings.",
"In the following sections, we describe how to solve for m s and update W and b .",
"Solving for m s : We first derive an algorithm to solve for the optimal m s given the log likelihood objective in (12), and parameters W and b .",
"m s = w s w w + v s ( W v v ( 1 ) ( 1 ) v + W v v ( 2 ) ( 2 ) v ) + a s ( W a a ( 1 ) ( 1 ) a + W a a ( 2 ) ( 2 ) a ) .",
"(13) where the shifted visual and acoustic features are: v ( 1 ) = v b v , v ( 2 ) = ( v b v ) ( v b v ) , (14) a ( 1 ) = a b a , a ( 2 ) = ( a b a ) ( a b a ) , (15) where denotes Hadamard (element-wise) product and the weights 's are given as follows: w = w ( 1 )/( Z ) p ( w ) + ( 1 )/( Z ) , (16) ( 1 ) v = diag ( v exp ( 2 b v )) , (17) ( 2 ) v = diag ( v exp ( 2 b v ) v ) , (18) ( 1 ) a = diag ( a exp ( 2 b a )) , (19) ( 2 ) a = diag ( a exp ( 2 b a ) a ) .",
"(20)",
"Proof.",
"The proof is adapted from Arora et al. (2017) and involves computing the gradients m s log P [ m s ] .",
"We express log P [ m s ] via a Taylor expansion approximation and we observe that log P [ m s ] c + m s , g for a constant c and a vector g .",
"Then, we can obtain m s by computing arg max m s L( m s , W , b ; s ) which yields a closed-form solution.",
"Please refer to the supplementary material for proof details.",
"Observe that the optimal embedding m s is a weighted average of the word features w and the (shifted and transformed) visual and acoustic features, v and a .",
"Our choice of a Gaussian likelihood for the visual and acoustic features introduces a squared term ( v b v )( v b v ) to account for the (cid:96) 2 distance present in the pdf.",
"The transformation matrix W transforms the visual and acoustic features into the multimodal embedding space.",
"Regarding the weights , note that: 1) the weights are proportional to the global temperatures assigned to that modality, 2) the weights w are inversely proportional to p ( w ) (rare words carry more weight), and 3) the weights v and a scale each feature dimension inversely by their magnitude.",
"Updating W and b : To find the optimal linear transformation parameters W and b to maximize the objective in (12), we perform gradient-based optimization on W and b (in Algorithm 1 line 5-8).",
"Proposition",
"1. [Updating W and b ] The gradients W L( m s , W , b ) and b L( m s , W , b ) , in each dimension, are: W vij L( m s ,W ,b ) = v tr [( 2 v ( v v )) m s j ] , (21) W vij L( m s ,W ,b ) = v 2 tr [( 2 v 2 v ( v v )( v v ) 2 v ) vii m s j ] , (22) b vi L( m s ,W ,b ) = v tr [( 2 v ( v v )) ] , (23) b vi L( m s ,W ,b ) = v 2 tr [( 2 v 2 v ( v v )( v v ) 2 v ) vii ] .",
"(24)",
"Proof.",
"The proof involves differentiating the log likelihood of a multivariate Gaussian with respect to and before applying the chain rule to = W m s + b and = diag ( exp ( W m s + b )) .",
"So far, we have assumed the utterance segment s can be independently factorized into unimodal features.",
"In this subsection, we extend the setting to take account for bimodal and trimodal interactions.",
"We adopt the idea of early-fusion (Srivas-tava and Salakhutdinov, 2012), which means the bimodal and trimodal interactions are captured by the concatenated features from different modalities.",
"Specifically, we define our factorized model as: P [ s m s ] = P [ w m s ] w P [ v m s ] v P [ a m s ] a P [( w v ) m s ] wv P [( w a ) m s ] wa P [( v a ) m s ] va P [( w v a ) m s ] wva , (25) where denotes vector concatenation for bimodal and trimodal features.",
"Each of the individual probabilities factorize in the same way as Equation (3) (i.e. P [ a m s ] a = a a P [ a m s ] a ).",
"Similar to baseline 1, we assume a log-linear likelihood (2) for P [ w m s ] and a Gaussian likelihood (4) for all remaining terms.",
"We call this Multimodal Baseline 2 ( MMB2 ).",
"The optimization algorithm derived in section 3.4 can be easily extended to learn m s , W and b in Baseline",
"2. We again alternate between the 2 steps of 1) solving for m s given the parameters W and b at the current iterate, and 2) given m s , updating W and b using a gradient-based algorithm.",
"Solving for m s : We state a result that derives the closed-form of m s given W and b : Corollary",
"1. [Solving for m s ] Assume that the optimal m s lies on the unit sphere (i.e. m s 22 = 1 ).",
"The closed-form (in Algorithm 1 line 4) for m s is: m s = w w w w + v v ( W v v ( 1 ) ( 1 ) v + W v v ( 2 ) ( 2 ) v ) + a a ( W a a ( 1 ) ( 1 ) a + W a a ( 2 ) ( 2 ) a ) + f { w v , w a , v a , w v a } f f ( W f f ( 1 ) ( 1 ) f + W f f ( 2 ) ( 2 ) f ) (26) where the shifted (and squared) visual features are: v ( 1 ) = v b v , v ( 2 ) = ( v b v ) ( v b v ) , (27) (and analogously for f ( 1 ) , f ( 2 ) , f { a, w v, w a, v a, w v a } ).",
"The weights 's are: w = w ( 1 )/( Z ) p ( w ) + ( 1 )/( Z ) , (28) ( 1 ) v = diag ( v exp ( 2 b v )) , (29) ( 2 ) v = diag ( v exp ( 2 b v ) v ) .",
"(30) (and analogously for ( 1 ) f , ( 2 ) f , f { a, w v, w a, v a, w v a } ).",
"Updating W and b : The gradient equations for updating W and b are identical to those derived in Proposition 1, Equations (21-24).",
"Given the optimal embeddings m s , we can now train a classifier from m s to labels y for multimodal prediction.",
"m s can also be fine-tuned on labeled data (i.e. taking gradient descent steps to update m s with respect to the task-specific loss functions) to learn task-specific multimodal utterance representations.",
"In our experiments, we use a fully connected neural network for our classifier.",
"speaker traits recognition and multimodal sentiment analysis.",
"The code for our experiments is released at https://github.com/ yaochie/multimodal-baselines , and all datasets for our experiments can be downloaded at https://github.com/A2Zadeh/ CMU-MultimodalSDK .",
"All datasets consist of monologue videos where the speaker's intentions are conveyed through the language,",
"language, visual and acoustic modalities.",
"The multimodal features are described in the next subsection.",
"Multimodal Speaker Traits Recognition involves recognizing speaker traits based on multimodal utterances.",
"POM (Park et al., 2014) contains 903 videos each annotated for speaker traits: confi-dent (con), voice pleasant (voi), dominance (dom), vivid (viv), reserved (res), trusting (tru), relaxed (rel), outgoing (out), thorough (tho), nervous (ner), and humorous (hum).",
"The abbreviations (inside parentheses) are used in the tables.",
"Multimodal Sentiment Analysis involves analyzing speaker sentiment based on video content.",
"Multimodal sentiment analysis extends conventional language-based sentiment analysis to a multimodal setup where both verbal and non-verbal signals contribute to the expression of sentiment.",
"We use CMU-MOSI (Zadeh et al., 2016) which consists of 2199 opinion segments from online videos each annotated with sentiment from strongly negative ( 3 ) to strongly positive (+ 3 ) .",
"GloVe word embeddings (Pennington et al., 2014), Facet (iMotions, 2017) and COVAREP (Degottex et al., 2014) are extracted for the language, visual",
"and acoustic modalities respectively 1 .",
"Forced alignment is performed using P2FA (Yuan and Liber-man, 2008) to obtain the exact utterance times of each word.",
"The video and audio features are aligned by computing the expectation of their features over each word interval (Liang et al., 2018).",
"For classification, we report multiclass classification accuracy A ( c ) where c denotes the number of classes and F1 score.",
"For regression, we report Mean Absolute Error (MAE) and Pearson's correlation ( r ).",
"For MAE lower values indicate better performance.",
"For all remaining metrics, higher values indicate better performance.",
"Before proceeding to the experimental results, we perform some sanity checks on our modeling assumptions.",
"We plotted histograms of the visual and acoustic features in CMU-MOSI utterances to visually determine if they resemble a Gaussian distribution.",
"From the plots in Figure 2, we observe that many of the features indeed converge approximately to a Gaussian distribution across the time 1 Details on feature extraction are in supplementary.",
"steps in the utterance, justifying the parametrization for the visual and acoustic likelihood functions in our model.",
"Our first set of experiments evaluates the performance of our baselines on two multimodal prediction tasks: multimodal sentiment analysis on CMU-MOSI and multimodal speaker traits recognition on POM.",
"On CMU-MOSI (right side of Table 1), our model MMB2 performs competitively against many neural models including early fusion deep neural networks (Nojavanasghari et al., 2016), several variants of LSTMs (stacked, bidirectional etc.) (Hochreiter and Schmidhuber, 1997; Schuster and Paliwal, 1997), Multi-view LSTMs (Ra-jagopalan et al., 2016), and tensor product recurrent models (TFN) (Zadeh et al., 2017).",
"For multimodal personality traits recognition on POM (left side of Table 1), our baseline is able to additionally outperform more complicated memory-based recurrent models such as MFN (Zadeh et al., 2018) on several metrics.",
"We view this as an impressive achievement considering the simplicity of our model and the significantly fewer parameters that our model contains.",
"As we will later show, our model's strong performance comes with the additional benefit of being significantly faster than the existing models.",
"Our next set of experiments evaluates the performance of our proposed baseline models when there is limited labeled data.",
"Intuitively, we expect our model to have a lower sample complexity since training our model involves learning fewer parameters.",
"As a result, we hypothesize that our model will generalize better when there is limited amounts of labeled data as compared to larger neural models with a greater number of parameters.",
"We test this hypothesis by evaluating the performance of our model on the CMU-MOSI dataset with only 40%, 60%, 80%, and 100% of the training labels.",
"The remainder of the train set now consists of unlabeled data which is also used during training but in a semi-supervised fashion.",
"We use the entire train set (both labeled and unlabeled data) for unsupervised learning of our multimodal embeddings before the embeddings are fine-tuned to predict the label using limited labeled data.",
"A comparison is performed with two models that also learn embeddings from unlabeled multimodal utterances: 1) deep averaging autoencoder ( AE ) (Iyyer et al., 2015; Hinton and Salakhutdinov, 2006) which averages the temporal dimension before using a fully connected autoencoder to learn a latent embedding, and 2) sequence to sequence autoencoder ( seq2seq ) (Sutskever et al., 2014) which captures temporal information using a recurrent neural network encoder and decoder.",
"For each of these models, an autoencoding model is used to learn embeddings on the entire training set (both labeled and unlabeled data) before the embeddings are fine-tuned to predict the label using limited la-Method Average Time (s) Inferences Per Second (IPS) DF 0.305 1850 EF-LSTM 0.022 31200 MV-LSTM 0.490 1400 BC-LSTM 0.210 3270 TFN 2.058 333 MFN 0.144 4760 MMB1 0.00163 421000 MMB2 0.00219 313000 Table 3: Average time taken for inference on the CMU-MOSI test set and Inferences Per Second (IPS) on a single Nvidia GeForce GTX 1080 Ti GPU, averaged over 5 trials.",
"beled data.",
"The validation and test sets remains unchanged for fair comparison.",
"Under this semi-supervised setting, we show prediction results on the CMU-MOSI test set in Table",
"2. Empirically, we find that our model is able to outperform deep autoencoders and their recurrent variant.",
"Our model remains strong and only suffers a drop in performance of about 3% (75.1% 72.9% binary accuracy) despite having access to only 40% of the labeled training data.",
"To demonstrate another strength of our model, we compare the inference times of our model with existing baselines in Table",
"3. Our model achieves an inference per second (IPS) of more than 10 times the closest neural model (EF-LSTM).",
"We attribute this speedup to our (approximate) closed form solution for m s as derived in Theorem 1 and Corollary 1, the small size of our model, as well as the fewer number of parameters (linear transformation parameters and classifier parameters) involved.",
"To further motivate our design decisions, we test some ablations of our model: 1) we remove the modeling capabilities of the visual and acoustic",
"modalities, instead modeling only the language modality, 2) we remove the positional encodings, and 3) we remove the fine tuning step.",
"We provide these results in Table 4 and observe that each component is indeed important for our model.",
"Although the text only model performs decently, incorporating visual and acoustic features under our modeling assumption improves performance.",
"Our results also demonstrate the effectiveness of positional encodings and fine tuning without having to incorporate any additional learnable parameters.",
"This paper proposed two simple but strong baselines to learn embeddings of multimodal utterances.",
"The first baseline assumes a factorization of the utterance into unimodal factors conditioned on the joint embedding while the second baseline extends the first by assuming a factorization into unimodal, bimodal, and trimodal factors.",
"Both proposed models retain simplicity and efficiency during both learning and inference.",
"From experiments across multimodal tasks and datasets, we show that our proposed baseline models: 1) display competitive performance on supervised multimodal prediction, 2) outperform classical deep autoencoders for semi-supervised multimodal prediction and 3) attain significant (10 times) speedup during inference.",
"Overall, we believe that our strong baseline models provide new benchmarks for future research in multimodal learning.",
"PPL and LM were partially supported by Sam-sung and NSF (Award 1750439).",
"Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of Sam-sung and NSF, and no official endorsement should be inferred.",
"YHT and RS were supported in part by the NSF IIS1763562, Office of Naval Research N000141812861, and Google focused award.",
"We would also like to acknowledge NVIDIA's GPU support and the anonymous reviewers for their constructive comments on this paper."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"other",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"other"
] |
[
"Previous literatures show that pre-trained masked language models (MLMs) such as BERT can achieve competitive factual knowledge extraction performance on some datasets, indicating that MLMs can potentially be a reliable knowledge source.",
"In this paper, we conduct a rigorous study to explore the underlying predicting mechanisms of MLMs over different extraction paradigms.",
"By investigating the behaviors of MLMs, we find that previous decent performance mainly owes to the biased prompts which overfit dataset artifacts.",
"Furthermore, incorporating illustrative cases and external contexts improve knowledge prediction mainly due to entity type guidance and golden answer leakage.",
"Our findings shed light on the underlying predicting mechanisms of MLMs, and strongly question the previous conclusion that current MLMs can potentially serve as reliable factual knowledge bases 1 .",
"Recently, pre-trained language models (Peters et al., 2018; Devlin et al., 2019; Brown et al., 2020) have achieved promising performance on many NLP tasks.",
"Apart from utilizing the universal representations from pre-trained models in downstream tasks, some literatures have shown the potential of pretrained masked language models (e.g., BERT (De-vlin et al., 2019) and RoBERTa (Liu et al., 2019b)) to be factual knowledge bases (Petroni et al., 2019; Bouraoui et al., 2020; Jiang et al., 2020b; Shin et al., 2020; Jiang et al., 2020a; Wang et al., 2020; Kassner and Schutze, 2020a; Kassner et al., 2020).",
"For example, to extract the birthplace of Steve Jobs , we can query MLMs like BERT with Steve Jobs was born in [MASK] , where Steve Jobs is the subject Corresponding Authors 1 We openly release the source code and data at https: //github.com/c-box/LANKA Prompt Bias wasbornin without X predicts",
"of the fact, was born in is a prompt string for the relation place-of-birth and [MASK] is a placeholder for the object to predict.",
"Then MLMs are expected to predict the correct answer California at the [MASK] position based on its internal knowledge.",
"To help MLMs better extract knowledge, the query may also be enriched with external information like illustrative cases ( e.g., (Obama, Hawaii) ) (Brown et al., 2020) or external context ( e.g., Jobs lives in California ) (Petroni et al., 2020).",
"Some literatures have shown that such paradigms can achieve decent performance on some benchmarks like LAMA (Petroni et al., 2019).",
"Despite some reported success, currently there is no rigorous study looking deeply into the underlying mechanisms behind these achievements.",
"Besides, it is also unclear whether such achievements depend on certain conditions (e.g., datasets, domains, relations).",
"The absence of such kind of studies undermines our trust in the predictions of MLMs.",
"We could neither determine whether the predictions are reliable nor explain why MLMs make a specific prediction, and therefore significantly limits MLMs' further applications and improvements.",
"To this end, this paper conducts a thorough study on whether MLMs could be reliable factual knowledge bases.",
"Throughout our investigations, we analyze the behaviors of MLMs, figure out the critical factors for MLMs to achieve decent performance, and demonstrate how different kinds of external information influence MLMs' predictions.",
"Specifically, we investigate factual knowledge extraction from MLMs 2 over three representative factual knowledge extraction paradigms, as shown in Figure 1: Prompt-based retrieval (Petroni et al., 2019; Jiang et al., 2020b; Shin et al., 2020), which queries MLM for object answer only given the subject and the corresponding relation prompt as input, e.g., Jobs was born in [MASK]. Case-based analogy (Brown et al., 2020; Madotto et al., 2020; Gao et al., 2020), which enhances the prompt-based retrieval with several illustrative cases, e.g., Obama was born in Hawaii. [SEP] Jobs was born in [MASK]. Context-based inference (Petroni et al., 2020; Bian et al., 2021), which augments the prompt-based retrieval with external relevant contexts, e.g., Jobs lives in California. [SEP] Jobs was born in [MASK].",
"Surprisingly, the main conclusions of this paper somewhat diverge from previous findings in published literatures, which are summarized in Figure 1. For prompt-based paradigm ( 3), we find that the prediction distribution of MLMs is significantly prompt-biased.",
"Specifically, we find that prompt-based retrieval generates similar predictions on totally different datasets.",
"And predictions are spuriously correlated with the applied prompts, rather than the facts we want to extract.",
"Therefore, previous decent performance mainly stems from the prompt over-fitting the dataset answer distribution, rather than MLMs' knowledge extraction ability.",
"Our findings strongly question the conclusions of previous literatures, and demonstrate that current MLMs can not serve as reliable knowledge bases when using prompt-based retrieval paradigm.",
"2 This paper shows the experimental results on BERT-large because previous work has shown that it can achieve the best performance on factual knowledge extraction among all MLMs.",
"In the Appendix, we also report the experimental results on RoBERTa-large, which also reach the main conclusions reported in the paper.",
"For case-based paradigm ( 4), we find that the illustrative cases mainly provide a type guidance for MLMs.",
"To show this, we propose a novel algorithm to induce the object type of each relation based on Wikidata 3 taxonomy.",
"According to the induced types, we find that the performance gain brought by illustrative cases mainly owes to the improvement on recognizing object type.",
"By contrast, it cannot help MLMs select the correct answer from the entities with the same type: the rank of answer within its entity type is changed randomly after introducing illustrative cases.",
"That is to say, under the case-based paradigm, although MLMs can effectively analogize between entities with the same type, they still cannot well identify the exact target object based on their internal knowledge and the provided illustrative cases.",
"For context-based paradigm ( 5), we find that context can help the factual knowledge extraction mainly because it explicitly or implicitly leaks the correct answer.",
"Specifically, the knowledge extraction performance improvement mainly happens when the introduced context contains the answer.",
"Furthermore, when we mask the answer in the context, the performance still significantly improves as long as MLMs can correctly reconstruct the masked answer in the remaining context.",
"In other words, in these instances, the context itself servers as a delegator of the masked answer, and therefore MLMs can still obtain sufficient implicit answer evidence even the answer doesn't explicitly appear.",
"All the above findings demonstrate that current MLMs are not reliable in factual knowledge extraction.",
"Furthermore, this paper sheds some light on the underlying predicting mechanisms of MLMs, which can potentially benefit many future studies.",
"The great success of Pre-trained Language Models (PLMs) raises the question of whether PLMs can be directly used as reliable knowledge bases.",
"Petroni et al. (2019) propose the LAMA benchmark, which probes knowledge in PLMs using prompt-based retrieval.",
"Jiang et al. (2020a) build a multilingual knowledge probing benchmark based on LAMA.",
"There are many studies focus on probing specific knowledge in PLMs, such as linguistic knowledge (Lin et al., 2019; Tenney et al., 2019; Liu et al., 2019a; Htut et al., 2019; Hewitt and Manning, 2019; Goldberg, 2019; Warstadt et al., 2019), 3 www.wikidata.org semantic knowledge (Tenney et al., 2019; Wallace et al., 2019; Ettinger, 2020) and world knowledge (Davison et al., 2019; Bouraoui et al., 2020; Forbes et al., 2019; Zhou et al., 2019; Roberts et al., 2020; Lin et al., 2020; Tamborrino et al., 2020).",
"Recently, some studies doubt the reliability of PLMs as knowledge base by discovering the the spurious correlation to surface forms (McCoy et al., 2019; Poerner et al., 2020; Shwartz et al., 2020), and their sensitivity to negation and mispriming (Kass-ner and Schutze, 2020b).",
"Currently, there are three main paradigms for knowledge extraction from PLMs: prompt-based retrieval (Schick and Schutze, 2021; Li and Liang, 2021), case-based analogy (Schick and Schutze, 2020a,b), and context-based inference.",
"For prompt-based retrieval, current studies focus on seeking better prompts by either mining from corpus (Jiang et al., 2020b) or learning using labeled data (Shin et al., 2020).",
"For case-based analogy, current studies mostly focus on whether good cases will lead to good few-shot abilities, and many tasks are tried (Brown et al., 2020; Madotto et al., 2020; Gao et al., 2020).",
"For context-based inference, current studies focus on enhancing the prediction by seeking more informative contexts, e.g., for knowledge extraction (Petroni et al., 2020) and CommonsenseQA (Bian et al., 2021).",
"However, there is no previous work which focuses on systematically study the underlying predicting mechanisms of MLMs on these paradigms.",
"The prompt-based retrieval extracts factual knowledge by querying MLMs with (subject, prompt, [MASK]).",
"For example, to extract the place-of-birth of Steve Jobs , we could query BERT with Steve Jobs was born in [MASK]. and the predicted California would be regarded as the answer.",
"We consider three kinds of prompts: the manually prompts T man created by Petroni et al. (2019), the mining-based prompts T mine by Jiang et al. (2020b) and the automatically searched prompts T auto from Shin et al. (2020).",
"Conclusion 1. Prompt-based retrieval is prompt-biased.",
"As a result, previous decent performance actually measures how well the applied prompts fit the dataset answer distribution, rather than the factual knowledge extraction ability from MLMs.",
"(a) The true answer distributions are very different between LAMA and WIKI-UNI.",
"(b) However, the prediction distribution made by MLMs on them are still very similar.",
"Specifically, we conduct studies and find that 1) Prompt-based retrieval will generate similar responses given quite different datasets.",
"To show this, we construct a new dataset from Wikidata WIKI-UNI, which have a totally different answer distribution from the widely-used LAMA 4 dataset (Petroni et al., 2019).",
"However, we find that the prediction distributions on WIKI-UNI and LAMA are highly correlated, and this spurious correlation holds across different prompts.",
"Such results reveal that there is just a weak correlation between the predictions of MLMs and the factual answer distribution of the dataset.",
"2) The prediction distribution is dominated by the prompt, i.e., the prediction distribution using only (prompt, [MASK]) is highly correlated to the prediction distribution using (subject, prompt, [MASK]).",
"This indicates that it is the applied prompts, rather than the actual facts, determine the predictions of MLMs.",
"3) The performance of the prompt can be predicted by the divergence between the prompt-only distribution and the answer distribution of the dataset.",
"All these findings reveal that previous decent performance in this field actually measures the degree of prompt-dataset fitness, rather than the universal factual knowledge extraction ability.",
"Finding 1. Prompt-based retrieval will generate similar responses to quite different datasets.",
"A reliable knowledge extractor should generate 4 Since we focus on factual knowledge, we use the T-REx (Elsahar et al., 2018) subset of the LAMA benchmark.",
"different responses to different knowledge queries.",
"To verify whether MLMs meet this standard, we manually construct a new dataset WIKI-UNI, which has a comparable size but totally different answer distribution to LAMA, and then compare the prediction distributions on them.",
"For a fair comparison, we follow the construction criteria of LAMA: we use the same 41 relations, filter out the queries whose objects are not in the MLMs' vocabulary.",
"Compared with LAMA, the major difference is that WIKI-UNI has a uniform answer distribution, i.e., for each relation, we keep the same number of instances for each object.",
"Please refer to Appendix for more construction details.",
"Figure 2a shows the answer distributions of LAMA and WIKI-UNI on relation place-of-birth .",
"We can see that the answers in LAMA are highly concentrated on the head object entities, while the answers in WIKI-UNI follow a uniform distribution.",
"Given LAMA and WIKI-UNI, we investigate the predicting behaviors of MLMs.",
"Surprisingly, the prediction distributions on these two totally different datasets are highly correlated.",
"Figure 2b shows an example.",
"We can see that the prediction distribution on WIKI-UNI is very similar to that on LAMA.",
"And these two distributions are both close to the answer distribution of LAMA but far away from the answer distribution of WIKI-UNI.",
"To investigate whether this spurious correlation 0.0 0.2 0.4 0.6 0.8 1.0 T man T mine T auto Figure 4: Correlations between the prompt-only distribution and prediction distribution on WIKI-UNI.",
"is a common phenomenon, we analyze the Pearson correlation coefficient between prediction distributions on LAMA and WIKI-UNI across different relations and three kinds of prompts.",
"The boxplot in Figure 3 shows the very significant correlation between the prediction distributions on LAMA and WIKI-UNI: on all three kinds of prompts, the correlation coefficients exceed 0.8 in more than half of relations.",
"These results demonstrate that prompt-based retrieval will lead to very similar prediction distributions even when test sets have vastly different answer distributions.",
"Furthermore, we find that the prediction distribution obviously doesn't correspond to the answer distribution of WIKI-UNI.",
"From Table 1, we can see that on average, the top-5 answers of each relation in WIKI-UNI cover only 7.78% instances.",
"By contrast, the top-5 predictions of each relation in WIKI-UNI cover more than 52% instances, which is close to the answer distribution and prediction distribution on LAMA.",
"As a result, the performance on WIKI-UNI (mean P@1: 16.47) is significantly worse than that on LAMA (mean P@1: 30.36).",
"In conclusion, the facts of a dataset cannot explain the predictions of MLMs, and therefore previous evaluations of the MLMs' ability on factual knowledge extraction are unreliable.",
"Finding 2. The prediction distribution is severely prompt-biased.",
"To investigate the underlying factors of the predicting behavior of MLMs, we compare the prompt-only prediction distribution using only (prompt, [MASK]) and the full prediction distribution using (subject, prompt, [MASK]).",
"To obtain the prompt-only distribution, we mask the subject and then use ([MASK], prompt, [MASK]) to query MLMs ( e.g., [MASK] was born in [MASK] ).",
"Because there is no subject information in the input, MLMs can only depend on applied prompt's information to make the prediction at the second [MASK].",
"Therefore, we regard the probability distribution at the second [MASK] symbol as the prompt-only distribution.",
"After that, we analyze the correlations between the prompt-only distribution and the prediction distribution on WIKI-UNI dataset.",
"Figure 4 shows the boxplot.",
"On all three kinds of prompts, correlation coefficients between the prompt-only distribution and the prediction distribution on WIKI-UNI exceed 0.6 in more than half of relations.",
"This demonstrates that in these relations, the prompt-only distribution dominates the prediction distribution.",
"Combining with the findings in Section 3.2, we can summarize that the prompt-based retrieval is mainly based on guided guessing , i.e., the predictions are generated by sampling from the prompt-biased distribution guided by the moderate impact of subjects.",
"Note that among a minor part of relations, the correlations between the prompt-only distribution and the prediction distribution are relatively low.",
"We find that the main reason is the type selectional preference provided by the subject entities, and Section 4 will further discuss the impact of this type-guidance mechanism for MLMs.",
"Finding 3. Better prompts are the prompts fitting the answer distribution better, rather than the prompts with better retrieval ability.",
"Some previous literatures attempt to find better prompts for factual knowledge extraction from MLMs.",
"However, as we mentioned above, the prompt itself will lead to a biased prediction distribution.",
"This raises our concern that whether the found better prompts are really with better knowledge extraction ability, or the better performance just come from the over-fitting between the prompt-only distribution and the answer distribution of the test set.",
"To answer this question, we evaluate the KL divergence between the prompt-only distribution and the answer distribution of LAMA on different kinds of prompts.",
"The results are shown in Table 2. We find that the KL divergence is a strong indicator of the performance of a prompt, i.e., the smaller the KL divergence between the prompt-only distribution and the answer distribution of the test set is, the better performance the prompt achieve.",
"Furthermore, Table 3 shows several comparisons between different kinds of prompts and Prompt Precision KL divergence T man 30.36 12.27 T mine 39.49 10.40 T auto 40.36 10.27 Table 2: The smaller KL divergence between the prompt-only distribution and golden answer distribution of LAMA, the better performance of the prompt.",
"their performance on LAMA.",
"We can easily observe that the better-performed prompts are actually over-fitting the dataset, rather than better capturing the underlying semantic of the relation.",
"As a result, previous prompt searching studies are actually optimized on the spurious prompt-dataset compatibility, rather than the universal factual knowledge extraction ability.",
"The case-based analogy enhances the prompt-based paradigm with several illustrative cases.",
"For example, if we want to know the place-of-birth of Steve Jobs , we would first sample cases such as ( Obama , place-of-birth , Hawaii ), and combine them with the original query.",
"In this way, we will use Obama was born in Hawaii. [SEP] Steve Jobs was born in [MASK]. to query MLMs.",
"Conclusion 2. Illustrative cases guide MLMs to better recognizing object type, rather than better predicting facts.",
"To show this, we first design an effective algorithm to induce the type of an entity set based on Wikidata taxonomy, which can identify the object type of a relation.",
"According to the induced types, we find that the benefits of illustrative cases mainly stem from the promotion of object type recognition.",
"In other words, case-based analogy guides MLMs with better type prediction ability but contributes London Chicago Capital 1 Milan Big City 2 City 3 Area 3 Area 1.0 City 1.0 Entity Set Entity Type Sequence Entity Type Graph Big City 0.6 Capital 0.3 Figure 5: Illustration of our type induction algorithm.",
"little to the entity prediction ability.",
"In the following, we first illustrate our type inducing algorithm, and then explain how we reach the conclusion.",
"To induce the object type of a relation, we first collect all its objects in LAMA and form an entity set.",
"Then we induce the type of an entity set by designing a simple but effective algorithm.",
"The main intuition behind our algorithm is that a representative type should be the finest grained type that can cover a sufficient number of the instances in the entity set.",
"Figure 5 shows an example of our algorithm.",
"Given a set of entities in Wikidata, we first construct an entity type graph (ETG) by recursively introducing all ancestor entity types according to the instance-of and subclass-of relations.",
"For the example in Figure 5, Chicago is in the entity set and is an instance-of Big City .",
"Big City is a subclass-of City .",
"As a result, Chicago , Big City and City will all be introduced into ETG.",
"Then we apply topological sorting (Cook, 1985) to ETG to obtain a Fine-to-Coarse entity type sequence .",
"Finally, based on the sequence, we select the first type which covers more than 80% of entities in the entity set (e.g., City in Figure 5).",
"Table 4 illustrates several induced types, and please refer to the Appendix for details.",
"Finding 4. Illustrative cases help MLMs to better recognize the type of objects, and therefore improve factual knowledge extraction.",
"For case-based analogy, the first thing we want to know is whether illustrative cases can improve the knowledge extraction performance.",
"To this end, for each (subject, relation) query in LAMA, we 25% 30% 35% 40% 45% In-typeRank OverallRank Raised Unchanged Dropped Figure 6: Percentages on the change of overall rank (among all candidates) and the in-type rank (among candidates with the same type) of golden answer.",
"randomly sample 10 illustrative cases.",
"To avoid answer leakage, we ensure the objects of these cases don't contain the golden answer of the query.",
"Then we use (cases, subject, prompt, [MASK]) as the analogous query to MLMs.",
"Results show that case-based analogy can significantly improve performance.",
"After introducing illustrative cases, the mean precision increases from 30.36% to 36.23%.",
"Besides, we find that 11.81% instances can benefit from the introduced cases and only 5.94% instances are undermined.",
"This shows that case-based analogy really helps the MLMs to make better predictions.",
"By analyzing the predicting behaviors, we observe that the main benefit of introducing illustrative cases comes from the better type recognition.",
"To verify this observation, we investigate how the types of predictions changed after introducing the illustrative cases.",
"Table 4 shows the results on relations whose precision improvement is more than 10% after introducing illustrative cases.",
"From the table, it is very obvious that illustrative cases enhance the factual knowledge extraction by improving type prediction: 1) For queries whose predictions are correctly reversed (from wrong to right), the vast majority of them stems from the revised type prediction; 2) Even for queries whose predictions are mistakenly reversed (from right to wrong), the type of the majority of predictions still remains correct.",
"In conclusion, introducing illustrative cases can significantly improve the knowledge extraction ability by recognizing the object type more accurately.",
"That is, adding illustrative cases will provide more guidance for object type.",
"Finding 5. Illustrative cases are of limited help for selecting the answer from entities of the same type.",
"To show this, we introduce a new metric referred as in-type rank , which is the rank of the correct answer within the entities of the same type for a query.",
"By comparing the in-type rank in prompt-based and case-based paradigm, we can evaluate whether the illustrative cases can actually help better entity prediction apart from better type recognition.",
"Figure 6 shows the percentages on the change of overall rank (among all candidates) and the in-type rank (among candidates with the same type) of golden answer.",
"Unfortunately, we find that illustrative cases are of limited help for entity prediction: the change of in-type rank is nearly random.",
"The percentages of queries with Raised/Unchanged/Dropped in-type rank are nearly the same: 33.05% VS 35.47% VS 31.47%.",
"Furthermore, we find that the MRR with the type only changed from 0.491 to 0.494, which shows little improvement after introducing illustrative cases.",
"These results show that the raises of overall rank of golden answer are not because of the better prediction inside the same type.",
"In conclusion, illustrative cases cannot well guide the entity prediction, and they mainly benefit the factual knowledge extraction by providing guidance for object type recognition.",
"The context-based inference augments the prompt-based paradigm with external contexts.",
"For example, if we want to know the place-of-birth of Steve Jobs , we can use the external context Jobs was from California. , and form a context-enriched Answer in context Prompt-based Context-based Present (45.30%) 34.83 64.13 +29.30 Absent (54.70 %) 25.37 23.26 -2.11 Table 5: Comparison between prompt-based and context-based paradigms grouped by whether the answer presents or absents in the context.",
"query Jobs was from California. [SEP] Steve Jobs was born in [MASK]. to query MLMs.",
"Specifically, we use the same context retrieval method as Petroni et al. (2020): for each instance, given the subject and relation as query, we use the first paragraph of DRQA's (Chen et al., 2017) retrieved document as external contexts.",
"Conclusion 3. Additional context helps MLMs predict the answer because they contain the answer, explicitly or implicitly.",
"Several studies (Petroni et al., 2020; Bian et al., 2021) show that external context can help knowledge extraction from MLMs.",
"To investigate the underlying mechanism, we evaluate which kinds of information in contexts contribute to the fact prediction, and find that the improvement mainly comes from the answer leakage in context.",
"Furthermore, we find the answers can not only be leaked explicitly, but can also be leaked implicitly if the context provides sufficient information.",
"To show this, we split LAMA into two parts ac-Prompt-based Context-based Masked Context-based 30.36 41.44 35.66 Table 6: Overall performance when introducing different kinds of contexts.",
"cording to whether the additional context contains the answer.",
"Table 5 shows the results on these two parts respectively.",
"We can see that the improvements on these two parts diverge significantly.",
"For context containing the answer, context-based inference significantly improves the factual knowledge extraction performance.",
"However, there is even a little performance drop for those instances whose context does not contain the answer.",
"This indicates that the improvement of factual knowledge extraction is mainly due to the explicit existence of the answer in the context.",
"Finding 7. Implicit answer leakage can also",
"significantly improve the prediction performance.",
"As we mentioned above, explicit answer leakage significantly helps the answer prediction.",
"The answer-leaked context may explicitly provide the answer or implicitly guide the prediction by providing answer-specific information.",
"To understanding the underlying mechanism, we mask the answer in the context and verify whether it can still achieve the performance gain.",
"Table 6 shows the results.",
"We find that the performance gain is still very significant after masking the answer.",
"This indicates that the contexts previously containing the answer are still very effective even the answer doesn't explicitly present.",
"To further investigate the reason behind, we split the masked version of answer-leaked instances into two groups by whether MLMs can or cannot correctly reconstruct the masked answer from the remaining context.",
"The results are shown in Table 7. We can see that the performance gain significantly diverges in these two groups: the improvements mainly come from the instances whose answer in context can be reconstructed we refer to this as implicit answer leakage .",
"That is to say, for these instances, the context serves as a sufficient delegator of its answer, and therefore MLMs can obtain sufficient answer evidence even the answer does not explicitly appear.",
"However, for contexts that cannot reconstruct the masked answer, the improvements are relatively minor.",
"In conclusion, the real efficacy of context-based inference comes from the sufficient answer evidence provided by the context, either explicitly or implicitly.",
"In this paper, we thoroughly study the underlying mechanisms of MLMs on three representative factual knowledge extraction paradigms.",
"We find that the prompt-based retrieval is severely prompt-biased, illustrative cases enhance MLMs mainly via type guidance, and external contexts help knowledge prediction mostly because they contain the correct answer, explicitly or implicitly.",
"These findings strongly question previous conclusions that current MLMs could serve as reliable factual knowledge bases.",
"The findings of this paper can benefit the community in many directions.",
"By explaining the underlying predicting mechanisms of MLMs, we provide reliable explanations for many previous knowledge-intensive techniques.",
"For example, our method can explain why and how incorporating external contexts will help knowledge extraction and CommonsenseQA (Talmor et al., 2019).",
"Our findings also reveal why PLM probing datasets may not be reliable and how the evaluation can be promoted by designing de-biased evaluation datasets.",
"This paper also sheds light on future research directions.",
"For instance, knowing the main benefit of illustrative cases comes from type-guidance, we can enhance many type-centric prediction tasks such as NER (Lample et al., 2016) and factoid QA (Iyyer et al., 2014).",
"Moreover, based on the mechanism of incorporating external contexts, we can better evaluate, seek, and denoise external contexts for different tasks using MLMs.",
"For example, we can assess and select appropriate facts for CommonsenseQA based on whether they can reconstruct the candidate answers.",
"This paper focuses on masked language models, which have been shown very effective and are widely used.",
"We also want to investigate another representative category of language models the generative pre-trained models (e.g., GPT2/3 (Rad-ford et al., 2019; Brown et al., 2020)), which have been shown to have quite different mechanisms and we leave it for future work due to page limitation.",
"We sincerely thank all anonymous reviewers for their insightful comments and valuable suggestions.",
"This work is supported by the National Key Research and Development Program of China (No. 2020AAA0106400), the National Natural Science Foundation of China under Grants no.",
"U1936207, and in part by the Youth Innovation Promotion Association CAS(2018141)."
] | [
"abstain",
"objective",
"objective",
"abstain",
"method",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"other",
"objective",
"result",
"method",
"abstain",
"objective",
"result",
"result",
"result",
"objective",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"result",
"objective",
"abstain",
"objective",
"objective",
"method",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"other",
"other",
"other"
] |
[
"rgangad@amazon.com",
"Balakrishnan Narayanaswamy AWS AI, Amazon",
"Abstract",
"Neural network models have recently gained traction for sentence-level intent classification and token-based slot-label identification.",
"In many real-world scenarios, users have multiple intents in the same utterance, and a token-level slot label can belong to more than one intent.",
"We investigate an attention-based neural network model that performs multi-label classification for identifying multiple intents and produces labels for both intents and slot-labels at the token-level.",
"We show state-of-the-art performance for both intent detection and slot-label identification by comparing against strong, recently proposed models.",
"Our model provides a small but statistically significant improvement of 0.2% on the predominantly single-intent ATIS public data set, and 55% intent accuracy improvement on an internal multi-intent dataset.",
"In dialog systems, the natural language understanding component (NLU) is responsible for identifying the user's request and creating a semantic frame that succinctly summarizes the user's needs.",
"These semantic frames are typically constructed using intents and slot-labels (T ur et al., 2010).",
"As the names imply, an intent captures the intention of the user and slot-labels capture any additional information or constraints the user provides.",
"These constraints must be satisfied in order to fulfill the user's request.",
"The example below shows a user's request, how is the weather in Dallas ? .",
"We need to identify the intent ( GetWeatherInfo ) as well as the values for the slot-labels (SL), here, City (value = Dallas ).",
"It is crucial that intents and slot-labels are identi-fied with high accuracy as an error made by the NLU component propagates through downstream components such as the dialog state tracker, the dialog policy and the natural language generator components, leading to a substantial degradation of the performance of the entire dialog system.",
"Intent detection has been modeled as a sentence classification task where an intent ( y I ) is assigned to the user's utterance.",
"Slot labeling is typically modeled as a sequential labeling problem, where a user's sentence, x 1 , x 2 , ...x N , is labeled with y S 1 , y S 2 ,",
"..y SN , and y Si is the slot label assigned to the token at position i ( x i ).",
"In the example above, the sequence of slot labels would be, O O O O O City O , where, O stands for Other .",
"Sequential models such as Maximum Entropy Markov models (Toutanova and Manning, 2000; McCallum et al., 2000; Berger et al., 1996) and Conditional Random Fields, CRFs (Lafferty et al., 2001; Jeong and Geunbae Lee, 2008) are popular approaches for slot-labeling while intent prediction is often performed using standard classification approaches such as Support Vector Machines (Cortes and Vapnik, 1995) or logistic regression (Bishop, 2006).",
"More recently, neural network-based models (Mesnil et al., 2015; Kurata et al., 2016; Goo et al., 2018; Liu and Lane, 2016) have been shown to significantly outperform previous approaches.",
"These models are also appealing since a single model is trained end-to-end to perform both intent detection and slot label identification.",
"Jointly modeling intent and slot label identification (Liu and Lane, 2016; Goo et al., 2018) has been shown to significantly outperform other neural network-based approaches.",
"This is intuitive since slot labels could depend on the intent.",
"Most neural network-based approaches (Mesnil et al., 2015; Kurata et al., 2016; Goo et al., 2018; Liu and Lane, 2016), with the exception of (Xu and Sarikaya, 2013a), predict a single intent for a user's utterance.",
"In real-world scenarios, users indicate multiple intents in the same utterance.",
"For example, a user's utterance such as, show me flights from Dallas to New York and the cost , clearly has two intents, one for obtaining the price of the flights ( GetFlightCost ) and another for the flight information.",
"It is critical to understand and model such scenarios to allow more natural interaction with users.",
"In this paper, we treat the intent detection task as a multi-label classification problem and suggest various neural network models to obtain multiple intents.",
"Our work is related to Xu et al.,(2013b) and Kim et al.,(2017), where multiple intents are assigned to a user's utterance.",
"Xu et al., (2013b) use log-linear models to achieve this, while we use neural network models.",
"Both Xu et al., (2013b) and Kim et al., (2017) only consider intents and do not handle slot labels.",
"In this paper, we jointly perform multi-label intent classification and slot-label identification.",
"In contrast with all prior work, we investigate and study the problem of assigning slot labels (or constraints) provided by a user to multiple intents.",
"Consider the example in Figure 1 with two intents in the same domain, BookCab and BookHotel .",
"Suppose BookCab has three possible slot labels, City , Time and pick up location ( Loc ), and suppose that BookHotel has slot labels City , CheckinDate , and Dura-tion .",
"Consider a user's utterance, book a cab from the airport in Seattle and find me a hotel to stay .",
"Here, the user wants to book a cab ( BookCab intent) as well book a hotel ( BookHotel ).",
"The slot label Seattle' ' should be assigned to both intents to accurately capture the user's request. Hence, we study a model that predicts multiple intents both at the token level as well as at the sentence-level. We model token-level multi-intent classification using Long Short Term Memory (LSTMs) units to capture dependencies that may exist between intents. For example, a user who wants to book a cab is also likely to make a request for a hotel in the same utterance but probably not order food i.e., intents such as BookCab and BookHotel are more likely to occur together when compared to BookCab and OrderFood .",
"To summarize, the contributions of this paper are: We investigate approaches to the problem of multi-intent classification.",
"We perform joint multi-intent classification both at sentence-level and at token-level.",
"We see that, the token-level multi-intents help assign user constraints to the intents.",
"sentence-level multi-intent classification captures dependencies between intents.",
"We compare the performance of the approach with recently proposed state-of-the-art approaches and show significant improvement.",
"The paper is organized as follows.",
"Section 2 describes the proposed approach.",
"Section 3 describes the experimental setup, including, data sets and metrics used to evaluate the approaches followed by the results in Section 3.2.",
"Finally, we conclude and suggest possible future directions and extensions in Section 4.",
"LSTM-based RNN models have become popular for sequential labeling, especially in natural language processing tasks, due to their ability to model long-term dependencies.",
"We extend encoder-decoder architectures from Liu et al., (2016) and Gangadharaiah et al., (2018), which showed superior performance when compared to Convolutional neural network based CRFs (Xu and Sarikaya, 2013a) and other RNN-based architectures (Mesnil et al., 2015; Kurata et al., 2016) for intent detect and slot label identification.",
"We use a bidirectional LSTM encoder to encode the input word sequence.",
"The encoder hidden state, h enci , at each word position is a concatenation of the forward state ( fh i ) and backward state ( bh i ), h enci = [ fh i , bh i ] .",
"For intent detection at the sentence-level, a context vector c I is computed using the encoder's final hidden state.",
"The vectors, c I and the final encoder's hidden state vector are sent to a dense layer of sigmoid units to predict the probabilities for every intent.",
"This produces multiple intents ( (cid:126)y I ) as opposed to previous approaches that produce a single intent.",
"For slot labeling, the decoder also uses LSTMs.",
"At each decoding step i , the decoder state ( h S,deci ) is a function of the previous decoder state ( h S,deci 1 ), the previously emitted label ( y S i 1 ), the encoder's state ( h S,enci ), the context vectors, ( c Si ) and c I , as shown in Figure",
"2. The context vector c Si is a weighted combination of the encoder's states ( h enc 1 , h enc 2 , ...h encN ) with weights, Si,j , as shown in eqn.",
"1. g is a feed forward network.",
"The output of the LSTM layer is then sent to a softmax layer to predict the slot labels.",
"We also experimented with a CRF layer as the decoder.",
"In our preliminary experiments, the LSTM decoder was faster to train and also showed better performance when compared to the CRF layer and hence we use LSTMs in the experiments below.",
"We also apply a slot-gated mechanism similar to Goo et al., (2018).",
"The idea is to leverage the intent's context vector for modeling slot-intent relationships, thereby improving the performance of slot labeling.",
"The slot gate is computed as a function of both the slot context vector ( c Si ) and the intent context vector ( c I ), where, v and W are both trainable.",
"In Goo et al., (2018), a similar model showed at-par or better performance over Liu et al. (2016) and Tur et.",
"al. (2016).",
"The slot gate gS is defined as, gS = (cid:88) v tanh ( c Si + W c I ) (2) where, gS is used to weight h enci and c Si to obtain y Si , i.e., h enci + c Si gS is sent to the feed forward network to compute y Si .",
"Since a slot label can belong to multiple intents, we also perform multi-label intent detection at the token level.",
"We again use an LSTM decoder, where each decoder state, h MI,deci , is a function of c I , previous decoder state ( h MI,deci 1 ), the encoder's state ( h enci ) and the context vector ( c MIi ), as shown in Figure",
"2. c MIi is computed in the same manner as c Si .",
"The output of the decoder is sent to a dense layer with sigmoid units.",
"Thus at each word position, we predict multiple intents.",
"In all our experiments, we set the hidden vectors to a dimension of 64 and use the adam optimizer with an early stopping strategy.",
"We use a drop-out rate of 0.5 for regularization and the maximum norm for gradient clipping is set to 5.",
"The results are obtained by averaging the performance of the models over 10 runs.",
"To do a fair comparison against existing models, we do not pre-train our word em-beddings (Devlin et al., 2018; Pennington et al., 2014; Mikolov et al., 2013), instead we use an embedding layer in the model which is trained along with the rest of the model's parameters.",
"As done in the NLU community, we report F1 scores for slot labeling.",
"We use F1 scores for intent detection at the token-level and accuracy for sentence-level intent detection.",
"We use two widely used public datasets, ATIS (Airline Travel Information System) (Tur et al., 2010) and SNIPS 1 .",
"The ATIS dataset contains audio recordings of people requesting flight reservations, with 21 intent types and 120 slot labels.",
"There are 4,478 utterances in the training set, 893 in the test set and 500 utterances in the development set.",
"The SNIPS data was collected from the SNIPS personal voice assistant, with 7 intent types and 72 slot labels.",
"The training set contains 13,084 utterances, the test set contains 700 utterances and the development set also contains 700 utterances.",
"The ATIS dataset contains utterances with multi intents, while the SNIPS is only single intent.",
"In order to demonstrate that our approach does not degrade performance on single intent datasets, we also perform evaluations on the SNIPS dataset.",
"We also test the performance of the models on an internal dataset.",
"In this dataset, about 1 https://github.com/snipsco/nlu-benchmark/tree/master/2017-06-custom-intent-engines 52% of examples are multi-intent compared to ATIS which has 2% of test examples with multi-intents.",
"The average number of intents per utterance in the internal dataset is 1.6.",
"We compare our approach against two of the state of the art approaches that have shown the best performance in previous work.",
"We will use Model 1 to refer to the model proposed by Liu et al., (2016).",
"Model 2 refers to the more recent model proposed by Goo et al., (2018).",
"Table ??",
"shows results obtained by the model investigated in this paper when compared with Model 1 and Model",
"2. As mentioned earlier, both Models 1 and 2 only handle single intents per user utterance.",
"For these two models, we insert a # between the multiple intents and treat it as one single intent, i.e., when an example such as, please give me a list of all the flights between dallas and baltimore and their cost , contains multiple intents, atis flight and atis airfare , we use atis flight#atis airfare instead.",
"When evaluating the baselines, the ordering of intents does not matter, and so we replace the # with spaces once we have the predictions.",
"To allow comparison across approaches, both ATIS and SNIPS were modified to include token-level intents as follows.",
"For utterances that had only a single intent, we assigned this intent to all tokens that had a slot label (i.e., to slot labels that do not correspond to O ).",
"For utterances that had more than one intent, we assigned all intents to all tokens that had slot labels.",
"After this process, if an utterance had two intents, intent 1 and intent 2 , and if a token i had a slot label, the token would end up with targets of the form, ( slot i , intent i 1 , intent i 2 ) The proposed model shows a statistically significant improvement in sentence-level intent prediction (S-level) on ATIS when compared to the two baselines.",
"Any improvement in slot labeling is unclear, since this could be attributed to the architecture changes which involved additional penalty terms on the intent (since we use both token-level and sentence-level intent loss).",
"We also notice that the performance on SNIPS (a single intent dataset) does not degrade.",
"We see a larger performance boost in both token-level (T-level) and sentence-level (S-level) intent detection on the internal dataset due to the large percentage of examples with multi-intents.",
"Wilcoxon signed-rank test Model ATIS SNIPS Internal Dataset Slot Intent (Acc) Intent (F1) Slot Intent (Acc) Intent (F1) Slot Intent (Acc) Intent (F1) (F1) S-level T-level (F1) S-level T-level F1 S-level T-level Model 1 90.16 93.84 N/A 87.24 97.14 N/A 89.28 57.27 N/A Model 2 93.37 95.18 N/A 88.23 96.85 N/A 89.64 57.47 N/A Proposed approach 94.22 95.39 95.82 88.03 97.23 97.89 90.94 89.41 94.54 Table 1: Performance of the model against Model 1 and Model",
"(Wilcoxon, 1945) was used to find statistical sig-nificance.",
"The paper investigated an approach for multi-intent classification.",
"We perform multi-intent classification both at sentence-level and at token-level.",
"The token-level multi-label classification helped assign common constraints (or slot labels) to multiple intents, improving accuracy.",
"The sentence-level multi-intent classification captured dependencies between intents.",
"We compared the performance of our approach with previously proposed state-of-the-art approaches for single intent classification and showed significant improvements in performance on all the datasets.",
"As future work, we would like to explore other architectures to directly model dependencies between slot labels and intents.",
"This is useful since only a subset of slot labels occur with certain intents.",
"We will also test the proposed approaches against real-world scenarios to understand their generality across various domains."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"result",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective"
] |
[
"Neural Chat Translation (NCT) aims to translate conversational text into different languages.",
"Existing methods mainly focus on modeling the bilingual dialogue characteristics ( e.g. , coherence) to improve chat translation via multi-task learning on small-scale chat translation data.",
"Although the NCT models have achieved impressive success, it is still far from satisfactory due to insufficient chat translation data and simple joint training manners.",
"To address the above issues, we propose a scheduled multi-task learning framework for NCT.",
"Specifically, we devise a three-stage training framework to incorporate the large-scale in-domain chat translation data into training by adding a second pre-training stage between the original pre-training and fine-tuning stages.",
"Further, we investigate where and how to schedule the dialogue-related auxiliary tasks in multiple training stages to effectively enhance the main chat translation task.",
"Extensive experiments on four language directions (English Chinese and English German) verify the effectiveness and superiority of the proposed approach.",
"Additionally, we will make the large-scale in-domain paired bilingual dialogue dataset publicly available for the research community.",
"1 1 Introduction A cross-lingual conversation involves speakers in different languages ( e.g. , one speaking in Chinese and another in English), where a chat translator can be applied to help them communicate in their native languages.",
"The chat translator bilaterally converts the language of bilingual conversational text, e.g. from Chinese to English and vice versa (Wang et al., 2016a; Farajian et al., 2020; Liang et al., 2021a, 2022).",
"Generally, since the bilingual dialogue corpus is scarce, researchers (Bao et al., 2020; Wang et al., 2020; Liang et al., 2021a,d) resort to making use of the large-scale general-domain data through the pre-training-then-fine-tuning paradigm as done in many context-aware neural machine translation models (Tiedemann and Scherrer, 2017; Maruf and Haf-fari, 2018; Miculicich et al., 2018; Tu et al., 2018; Voita et al., 2018, 2019a,b; Yang et al., 2019; Wang et al., 2019; Maruf et al., 2019; Ma et al., 2020, etc), having made significant progress.",
"However, conventional pre-training on large-scale general-domain data usually learns general language patterns, which is also aimless for capturing the useful dialogue context to chat translation, and fine-tuning usually suffers from insufficient supervised data (about 10k bilingual dialogues).",
"Some studies (Gu et al., 2020; Gururangan et al., 2020; Liu et al., 2021; Moghe et al., 2020; Wang et al., 2020; Ruder, 2021) have shown that learning domain-specific patterns by additional pre-training is beneficial to the models.",
"To this end, we firstly construct the large-scale in-domain chat translation data 2 .",
"And to 2 Firstly, to build the data, for English Chinese (En Zh), we crawl two consecutive English and Chinese movie subtitles (not aligned).",
"For English German (En De), we download two consecutive English and German movie subtitles (not aligned).",
"Then, we use several advanced technologies to align En Zh and En De subtitles.",
"Finally, we obtain the paired bilingual dialogue dataset.",
"Please refer to 3.1 for details.",
"incorporate it for learning domain-specific patterns, we then propose a three-stage training framework via adding a second pre-training stage between general pre-training and fine-tuning, as shown in Fig. 1.",
"To further improve the chat translation performance through modeling dialogue characteristics ( e.g. , coherence), inspired by previous studies (Phang et al., 2020; Liang et al., 2021d; Pruk-sachatkun et al., 2020), we incorporate several dialogue-related auxiliary tasks to our three-stage training framework.",
"Unfortunately, we find that simply introducing all auxiliary tasks in the conventional multi-task learning manner does not obtain significant cumulative benefits as we expect.",
"It indicates that the simple joint training manner may limit the potential of these auxiliary tasks, which inspires us to investigate where and how to make these auxiliary tasks work better for the main NCT task.",
"To address the above issues, we present a S cheduled M ulti-task L earning framework (SML) for NCT, as shown in Fig. 1. Firstly, we propose a three-stage training framework to introduce our constructed in-domain chat translation data for learning domain-specific patterns.",
"Secondly, to make the most of auxiliary tasks for the main NCT task, where : we analyze in which stage these auxiliary tasks work well and find that they are different strokes for different folks .",
"Therefore, to fully exert their advantages for enhancing the main NCT task, how : we design a gradient-based strategy to dynamically schedule them at each training step in the last two training stages, which can be seen as a fine-grained joint training manner.",
"In this way, the NCT model is effectively enhanced to capture both domain-specific patterns and dialogue-related characteristics ( e.g. , coherence) in conversation, which thus can generate better translation results.",
"We validate our SML framework on two datasets: BMELD (Liang et al., 2021a) (En Zh) and BConTrasT (Farajian et al., 2020) (En De).",
"Experimental results show that our model gains consistent improvements on four translation tasks in terms of both BLEU (Papineni et al., 2002) and TER (Snover et al., 2006) scores, demonstrating its effectiveness and generalizability.",
"Human evaluation further suggests that our model can produce more coherent and fluent translations compared to the previous related methods.",
"Our contributions are summarized as follows: We propose a scheduled multi-task learning framework with three training stages, where a gradient-based scheduling strategy is designed to fully exert the auxiliary tasks' advantages for the main NCT task, for higher translation quality.",
"Extensive experiments on four chat translation tasks show that our model achieves new state-of-the-art performance and outperforms the existing NCT models by a significant margin.",
"We contribute two large-scale in-domain paired bilingual dialogue corpora (28M for En Zh and 18M for En De) to the research community.",
"We introduce the conventional multi-task learning framework (Liang et al., 2021d) for NCT, which includes four parts: problem formalization ( 2.1), the NCT model ( 2.2), existing three auxiliary tasks ( 2.3), and training objective ( 2.4).",
"In a bilingual conversation, we assume the two speakers have alternately given utterances in different languages for u turns, resulting in X 1 , X 2 , X 3 , ..., X u and Y 1 , Y 2 , Y 3 , ..., Y u on the source and target sides, respectively.",
"Among these utterances, X 1 , X 3 , X 5 , ..., X u are originally spoken and Y 1 , Y 3 , Y 5 , ..., Y u are the corresponding translations in the target language.",
"Similarly, Y 2 , Y 4 , Y 6 , ..., Y u 1 are originally spoken and X 2 , X 4 , X 6 , ..., X u 1 are the translated utterances in the source language.",
"According to languages, we define the dialogue history context of X u on the source side as CX u ={ X 1 , X 2 , X 3 , ..., X u 1 } and that of Y u on the target side as CY u ={ Y 1 , Y 2 , Y 3 , ..., Y u 1 } .",
"3 The goal of an NCT model is to translate X u to Y u with dialogue history context CX u and CY u .",
"The NCT model (Ma et al., 2020; Liang et al., 2021d) utilizes the standard transformer (Vaswani et al., 2017) architecture with an encoder and a decoder 4 .",
"3 For each of { CX u , CY u }, we add the special token [CLS]' tag at the head of it and use another token [SEP]' to delimit its included utterances, as in Devlin et al. (2019).",
"4 Here, we just describe some adaptions to the NCT model, and please refer to Vaswani et al. (2017) for more details.",
"In the encoder, it takes [ CX u ; X u ] as input, where [; ] denotes the concatenation.",
"The input embedding consists of word embedding WE , position embedding PE , and turn embedding TE : B ( x i ) = WE ( x i ) + PE ( x i ) + TE ( x i ) , where WE R | V | d and TE R | T | d .",
"5 When computation in the encoder, words in CX u can only be attended by those in X u at the first encoder layer while CX u is masked at the other layers, which is the same implementation as in Ma et al. (2020).",
"In the decoder, at each decoding time step t , the top-layer ( L -th) decoder hidden state h Ld,t is fed into a softmax layer to predict the probability distribution of the next target token: p ( Y u,t | Y u,<t , X u , CX u ) = Softmax( W o h Ld,t + b o ) , where Y u,<t denotes the preceding tokens before the t -th time step in the utterance Y u , W o R | V | d and b o R | V | are trainable parameters.",
"To generate coherent translation, Liang et al. (2021d) present Monolingual Response Generation (MRG) task, Cross-lingual Response Generation (XRG) task, and Next Utterance Discrimination (NUD) task during the NCT model training.",
"MRG.",
"Given the dialogue context CY u in the target language, it forces the NCT model to generate the corresponding utterance Y u coherent to CY u .",
"Particularly, the encoder of the NCT model is used to encode CY u , and the NCT decoder predicts Y u .",
"The training objective of this task is formulated as: LMRG = | Y u | (cid:88) t =1 log( p ( Y u,t |C Y u , Y u,<t )) , p ( Y u,t |C Y u , Y u,<t ) = Softmax( W m h Ld,t + b m ) , where h Ld,t is the L -th decoder hidden state at the t -th decoding step, W m and b m are trainable parameters.",
"XRG.",
"Similar to MRG, the NCT model is also jointly trained to generate the corresponding utterance Y u which is coherent to the given dialogue 5 | V | , | T | and d denote the size of shared vocabulary, maximum dialogue turns, and the hidden size, respectively.",
"history context CX u in the source language: LXRG = | Y u | (cid:88) t =1 log( p ( Y u,t |C X u , Y u,<t )) , p ( Y u,t |C X u , Y u,<t ) = Softmax( W c h Ld,t + b c ) , where W c and b c are trainable parameters.",
"NUD.",
"The NUD task aims to distinguish whether the translated text is coherent to be the next utterance of the given dialogue history context.",
"Specifically, the positive and negative samples are firstly constructed: (1) the positive sample ( CY u , Y u + ) with the label = 1 consists of the target utterance Y u and its dialogue history context CY u ; (2) the negative sample ( CY u , Y u ) with the label = 0 consists of the identical CY u and a randomly selected utterance Y u from the preceding context of Y u .",
"Formally, the training objective of NUD is defined as follows: LNUD = log( p ( = 1 |C Y u , Y u + )) log( p ( = 0 |C Y u , Y u )) , p ( =1 |C Y u , Y u )=Softmax( W n [ HY u ; HC Yu ]) , where HY u and HC Yu denote the representations of the target utterance Y u and CY u , respectively.",
"Concretely, HY u is calculated as 1 | Y u | (cid:80) | Y u | t =1 h Le,t while HC Yu is defined as the encoder hidden state h Le, 0 of the prepended special token [CLS]' of CY u .",
"W n is the trainable parameter of the NUD classifier and the bias term is omitted for simplicity.",
"With the main chat translation task and three auxiliary tasks, the total training objective of the conventional multi-task learning is formulated as:",
"L = LNCT + ( LMRG + LXRG + LNUD ) , (2)",
"In this section, we introduce the proposed S cheduled M ulti-task L earning (SML) framework, including three stages: general pre-training, in-domain pre-training, and in-domain fine-tuning, as shown in Fig. 1. Specifically, we firstly describe the process of in-domain pre-training ( 3.1) and then present some findings of conventional multi-task learning ( 3.2), which inspire us to investigate the scheduled multi-task learning ( 3.3).",
"Finally, we 4377 elaborate on the process of training and inference ( 3.4).",
"For the second in-domain pre-training, we firstly build an in-domain paired bilingual dialogue data and then conduct pre-training on it.",
"To construct the paired bilingual dialogue data, we firstly crawl the in-domain consecutive movie subtitles of En Zh and download the consecutive movie subtitles of En De on related websites 6 .",
"Since both bilingual movie subtitles are not strictly aligned, we utilize the Vecalign tool (Thompson and Koehn, 2019), an accurate sentence alignment algorithm, to align them.",
"Meanwhile, we leverage the LASER toolkit 7 to obtain the multilingual embedding for better alignment performance.",
"Consequently, we obtain two relatively clean paired movie subtitles.",
"According to the setting of dialogue context length in Liang et al. (2021a), we take four consecutive utterances as one dialogue, and then filter out duplicate dialogues.",
"Finally, we attain two in-domain paired bilingual dialogue dataset, the statistics of which are shown in Tab.",
"1. Datasets # Dialogues # Utterances # Sentences En Zh 28,214,769 28,238,877 22,244,006 En De 18,041,125 18,048,573 45,541,367 Table 1: Statistics of our constructed chat translation data.",
"Based on the constructed in-domain bilingual corpus, we continue to pre-train the NCT model after the general pre-training stage, and then go to the in-domain fine-tuning stage, as shown in the In-domain Pre-training&Fine-tuning parts of Fig. 1. 3.2 Findings of Conventional Multi-task Learning According to the finding that multi-task learning can enhance the NCT model (Liang et al., 2021d), in the last two training processes ( i.e. , the In-domain Pre-training and In-domain Fine-tuning parts of Fig. 1), we conduct extensive multi-task learning experiments, aiming to achieve a better NCT model.",
"Firstly, we present one additional auxiliary task, i.e. Cross-lingual NUD (XNUD), given the intuition that more dialogue-related tasks may 6 En Zh: https://www.kexiaoguo.com/ and En De: https://opus.nlpl.eu/OpenSubtitles.php 7 https://github.com/facebookresearch/LASER \u00000\u00005\u0000* \u0000;\u00005\u0000* \u00001\u00008\u0000' \u0000;\u00001\u00008\u0000' \u0000$\u0000O\u0000O \u0000(\u0000Q\u0000\u0010\u0000=\u0000K\u0000\u0003\u00005\u0000H\u0000V\u0000X\u0000O\u0000W\u0000V\u0000\u0003\u0000L\u0000Q\u0000\u0003\u0000'\u0000L\u0000I\u0000I\u0000H\u0000U\u0000H\u0000Q\u0000W\u0000\u0003\u00006\u0000W\u0000D\u0000J\u0000H\u0000V \u0000\u0016\u0000\u0016\u0000\u0011\u0000\u0014 \u0000\u0016\u0000\u0016\u0000\u0011\u0000\u0015 \u0000\u0016\u0000\u0016\u0000\u0011\u0000\u0016 \u0000\u0016\u0000\u0016\u0000\u0011\u0000\u0017 \u0000\u0016\u0000\u0016\u0000\u0011\u0000\u0018 \u0000\u0016\u0000\u0016\u0000\u0011\u0000\u0019 \u0000\u0016\u0000\u0016\u0000\u0011\u0000\u001a \u0000\u0016\u0000\u0016\u0000\u0011\u0000\u001b \u0000% \u0000/\u0000( \u00008 \u00006\u0000H\u0000F\u0000R\u0000Q\u0000G\u0000\u0003\u00006\u0000W\u0000D\u0000J\u0000H\u0000)\u0000L\u0000Q\u0000H\u0000\u0010\u0000W\u0000X\u0000Q\u0000L\u0000Q\u0000J\u0000\u0003\u00006\u0000W\u0000D\u0000J\u0000H\u0000%\u0000R\u0000W\u0000K\u0000\u0003\u00006\u0000W\u0000D\u0000J\u0000H\u0000V\u00001\u0000&\u00007\u0000\u0003\u0000P\u0000R\u0000G\u0000H\u0000O\u0000\u0003\u0000Z\u0000\u0012\u0000R\u0000\u0003\u0000W\u0000D\u0000V\u0000N \u00000\u00005\u0000* \u0000;\u00005\u0000* \u00001\u00008\u0000' \u0000;\u00001\u00008\u0000' \u0000$\u0000O\u0000O \u0000=\u0000K\u0000\u0010\u0000(\u0000Q\u0000\u0003\u00005\u0000H\u0000V\u0000X\u0000O\u0000W\u0000V\u0000\u0003\u0000L\u0000Q\u0000\u0003\u0000'\u0000L\u0000I\u0000I\u0000H\u0000U\u0000H\u0000Q\u0000W\u0000\u0003\u00006\u0000W\u0000D\u0000J\u0000H\u0000V \u0000\u0015\u0000\u001c\u0000\u0011\u0000\u0013 \u0000\u0015\u0000\u001c\u0000\u0011\u0000\u0015 \u0000\u0015\u0000\u001c\u0000\u0011\u0000\u0017 \u0000\u0015\u0000\u001c\u0000\u0011\u0000\u0019 \u0000\u0015\u0000\u001c\u0000\u0011\u0000\u001b \u0000\u0016\u0000\u0013\u0000\u0011\u0000\u0013 \u0000% \u0000/\u0000( \u00008 \u00006\u0000H\u0000F\u0000R\u0000Q\u0000G\u0000\u0003\u00006\u0000W\u0000D\u0000J\u0000H\u0000)\u0000L\u0000Q\u0000H\u0000\u0010\u0000W\u0000X\u0000Q\u0000L\u0000Q\u0000J\u0000\u0003\u00006\u0000W\u0000D\u0000J\u0000H\u0000%\u0000R\u0000W\u0000K\u0000\u0003\u00006\u0000W\u0000D\u0000J\u0000H\u0000V\u00001\u0000&\u00007\u0000\u0003\u0000P\u0000R\u0000G\u0000H\u0000O\u0000\u0003\u0000Z\u0000\u0012\u0000R\u0000\u0003\u0000W\u0000D\u0000V\u0000N Figure 2: The effect of each task on validation sets in different training stages, under transformer Base setting, where All denotes all four auxiliary tasks.",
"yield better performance.",
"Then, we conclude some multi-task learning findings that could motivate us to investigate how to use these auxiliary tasks well.",
"XNUD.",
"Similar to the NUD task described in 2.3, the XNUD aims to distinguish whether the translated text is coherent to be the next utterance of the given cross-lingual dialogue history context.",
"Compared to the NUD task, the different point lies in the cross-lingual dialogue context history, i.e. , a positive sample ( CX u , Y u + ) with the label = 1 and a negative sample ( CX u , Y u ) with the label = 0 .",
"Formally, the training objective of XNUD is defined as follows: LXNUD = log( p ( = 1 |C X u , Y u + )) log( p ( = 0 |C X u , Y u )) , p ( =1 |C X u , Y u )=Softmax( W x [ HY u ; HC Xu ]) , where HC Xu denotes the representation of CY u , which is calculated as same as HC Yu in NUD.",
"W x is the trainable parameter of the XNUD classifier and the bias term is omitted for simplicity.",
"Findings.",
"Based on four auxiliary tasks (MRG, XRG, NUD, and XNUD), we investigate in which stage in Fig. 1 the auxiliary tasks work well in a conventional multi-task learning manner 8 and the following is what we find from Fig. 2: Each auxiliary task can always bring improvement compared with the NCT model w/o task; 8 Note that, in the last two in-domain stages, we use the conventional multi-task learning to pre-train and fine-tune models rather than the scheduled multi-task learning.",
"By contrast, XRG and XNUD tasks perform relatively poorly in the final fine-tuning stage than MRG and NUD tasks; Some tasks used only in one stage ( e.g. , XRG and XNUD in the second stage) perform better than being used in both stages, revealing that different auxiliary tasks may prefer different stages to exert their advantages; (one best setting seems that all tasks are used in the second stage while only MRG and NUD tasks are used in the final fine-tuning stage.) Using all auxiliary tasks in a conventional multi-task learning manner does not obtain significant cumulative benefits.",
"Inspired by Yu et al. (2020), we design a gradient-based scheduled multi-task learning algorithm to dynamically schedule all auxiliary tasks at each training step, as shown in Algorithm 1. Specifically, at each training step ( line 1 ), for each task we firstly compute its gradient to model parameters ( lines 2 4 , and we denote the gradient of the main NCT task as g nct ).",
"Then, we obtain the projection of the gradient g k of each auxiliary task k onto g nct ( line 5 ), as shown in Fig. 3. Finally, we utilize the sum of g nct and all projection ( i.e. , the blue arrows part, as shown in Fig. 3) of auxiliary tasks to update model parameters.",
"The core ideas behind the gradient-based SML algorithm are: (1) when the cosine similarity between g k and g nct is positive, i.e. , the gradient projection g k is in the same gradient descent direction with the main NCT task, i.e. , Fig. 3",
"(a), which could help the NCT model achieve optimal solution; (2) when the cosine similarity between g k and g nct is negative, i.e. , Fig. 3",
"(b), which can avoid the model being optimized too fast and overfitted.",
"Therefore, we also keep the inverse gradient to prevent the NCT model from overfitting as a regularizer.",
"In this way, such auxiliary task joins in training at each step with the NCT task when its gradient projection is in line with g nct , which acted as a fine-grained joint training manner.",
"Our training process includes three stages: the first pre-training stage on the general-domain sentence",
"the second in-domain pre-training stage, and the final in-domain fine-tuning stage on the chat translation data: J = LNCT + T (cid:88) k L k , (4)",
"where T is the auxiliary tasks set and we keep the balancing hyper-parameter .",
"Although the form of L k is the same with Eq.",
"2, the gradient that participates in updating model parameters is different where it depends on the gradient descent direction of the NCT task in Eq.",
"4. At inference, all auxiliary tasks are not involved and only the NCT model after scheduled multi-task fine-tuning is applied to chat translation.",
"Datasets.",
"The training of our SML framework consists of three stages: (1) pre-train the model on a large-scale sentence-level NMT corpus (WMT20 9 ); 9 http://www.statmt.org/wmt20/translation-task.html 4379 Models En Zh Zh En En De De En BLEU TER BLEU TER BLEU TER BLEU TER Base Trans.",
"(2) further pre-train the model on our constructed in-domain chat translation corpus; (3) fine-tune on the target chat translation corpus: BMELD (Liang et al., 2021a) and BConTrasT (Farajian et al., 2020).",
"The target dataset details ( e.g. , splits of training, validation or test sets) are described in Appendix A. Metrics.",
"Following Liang et al. (2021d), we use SacreBLEU 10 (Post, 2018) and TER (Snover et al., 2006) with the statistical significance test (Koehn, 2004) for fair comparison.",
"Specifically, we report character-level BLEU for En Zh, case-insensitive BLEU score for Zh En, and case-sensitive BLEU score likewise for En De.",
"In this paper, we adopt the settings of standard Transformer-Base and Transformer-Big in Vaswani et al. (2017).",
"Generally, we utilize the settings in Liang et al. (2021d) for fair comparison.",
"For more details, please refer to Appendix B. We investigate the effect of the XNUD task in 5.4, where the new XNUD performs well based on existing auxiliary tasks.",
"10 BLEU+case.mixed+numrefs.1+smooth.exp+tok.13a+ version.1.4.13 4.3 Comparison Models Sentence-level NMT Systems.",
"Trans.",
"w/o FT and Trans.",
"(Vaswani et al., 2017): both are the de-facto transformer-based NMT models, and the difference is that the Trans. model is fine-tuned on the chat translation data after being pre-trained on sentence-level NMT corpus.",
"Context-aware NMT Systems.",
"Dia-Trans.",
"(Maruf et al., 2018): A Transformer-based model where an additional encoder is used to introduce the mixed-language dialogue history, re-implement by Liang et al. (2021a).",
"Gate-Trans.",
"(Zhang et al., 2018) and NCT (Ma et al., 2020): Both are document-level NMT Transformer models where they introduce the dialogue history by a gate and by sharing the first encoder layer, respectively.",
"CPCC (Liang et al., 2021a): A variational model that focuses on incorporating dialogue characteristics into a translator for better performance.",
"In Tab.",
"2, We report the main results on En Zh and En De under Base and Big settings.",
"In Tab.",
"3, we present additional results on En Zh.",
"Results on En Zh.",
"Under the Base setting, our model significantly outperforms the sentence-level/context-aware baselines by a large margin ( e.g. , the previous best CSA-NCT), 4.58 on En Zh and 4.06 on Zh En, showing the effectiveness of the large-scale in-domain data and our scheduled multi-task learning.",
"In terms of TER, SML also performs best on the two directions, 5.0 and 4.3 than CPCC (the lower the better), respectively.",
"Under the Big setting, our model consistently surpasses all existing systems once again.",
"Results on En De.",
"On both En De and De En, our model presents notable improvements over all comparison models by up to 2.50 and 2.69 BLEU gains under the Base setting, and by 2.55 and 2.53 BLEU gains under the Big setting, respectively.",
"These results demonstrate the superiority of our three-stage training framework and also show the generalizability of our model across different language pairs.",
"Since the baselines of En De are very strong, the results of En De are not so significant than En Zh.",
"Additional Results.",
"Tab.",
"2 presents our overall model performance, though, strictly speaking, it is unfair to directly compare our approaches with previous ones.",
"Therefore, we conduct additional experiments in Tab.",
"3 under two settings: ( i ) using the original pre-training-then-fine-tuning framework without introducing the large-scale in-domain data ( i.e. , Two-stage w/o data group); ( ii ) using the proposed three-stage method with the large-scale in-domain data ( i.e. , Three-stage w/ data group).",
"And we conclude that (1) the same model ( e.g. , SML) can be significantly enhanced by the second in-domain pre-training stage, demonstrating the effectiveness of the second pre-training on the in-domain data; (2) our SML model always exceeds the conventional multi-task learning model M-NCT in both settings, indicating the superiority of the scheduled multi-task learning strategy.",
"We conduct ablation studies in Tab.",
"4 and Tab.",
"5 to answer the following two questions.",
"Q1 : why a three-stage training framework?",
"and Q2 : why the scheduled multi-task learning strategy?",
"To answer Q1 , in Tab.",
"4, we firstly investigate the effect of the large-scale in-domain chat translation data and further explore where to use it.",
"Firstly, the results of rows 1 3 substantially outperform those in row 0, proving the availability of incorporating the in-domain data.",
"Secondly, the results of 4381 # Training Manners?",
"row 3 significantly surpass rows 1 2, indicating that the in-domain data used in the proposed second stage of our three-stage training framework is very successful rather than used in the stage of pre-training-then-fine-tuning paradigm.",
"That is, the experiments show the effectiveness and necessity of our three-stage training framework.",
"To answer Q2 , we investigate multiple multitask learning strategies in Tab.",
"5.",
"Firstly, the results of row 3 are notably higher than those of rows 0 2 in both language directions, obtaining significant cumulative benefits of auxiliary tasks than rows 0 2, demonstrating the validity of the proposed SML strategy.",
"Secondly, the results of row 3 vs row 4 show that the inverse gradient projection of auxiliary tasks also has a positive impact on the model performance, which may prevent the model from overfitting, working as a regularizer.",
"All experiments show the superiority of our scheduled multi-task learning strategy.",
"Inspired by Bao et al. (2020) and Liang et al. (2021a), we use two criteria for human evaluation to judge whether the translation is:",
"1. semantically coherent with the dialogue history?",
"2. fluent and grammatically correct?",
"Firstly, we randomly sample 200 conversations from the test set of BMELD in En Zh.",
"Then, we use 6 models in Tab.",
"6 to generate translated utterances of these sampled conversations.",
"Finally, we assign the translated utterances and their corre-Models ( Base ) Coherence Fluency Trans.",
"sponding dialogue history utterances in the target language to three postgraduate human annotators, and then ask them to make evaluations (0/1 score) according to the above two criteria, and average the scores as the final result.",
"Tab.",
"6 shows that our model generates more coherent and fluent translations when compared with other models (significance test, p < 0.05), which shows the superiority of our model.",
"The inter-annotator agreements calculated by the Fleiss' kappa (Fleiss and Cohen, 1973) are 0.558 and 0.583 for coherence and fluency, respectively.",
"It indicates Moderate Agreement for both criteria.",
"s i is the sentence.",
"We use Word2Vec 11 (Mikolov et al., 2013) trained on a dialogue dataset 12 to obtain the distributed word vectors whose dimension is set to 100.",
"Tab.",
"7 shows the measured coherence of different models on validation set of BMELD in En Zh direction.",
"It shows that our SML produces more coherent translations compared to all existing models (significance test, p < 0.01).",
"We investigate the effect of the XNUD task.",
"As shown in Tab.",
"8, the M-NCT denotes the multitask learning model jointly trained with four auxiliary tasks in conventional manner.",
"After removing the XNUD task, the performance drops to some extend, indicating that the new XNUD task achieves further performance improvement based on three existing auxiliary tasks (Liang et al., 2021d).",
"Then, based on the strong M-NCT model, we further investigate where and how to make the most of them for the main NCT task.",
"Neural Chat Translation.",
"The goal of NCT is to train a dialogue-aware translation model using the bilingual dialogue history, which is different from document-level/sentence-level machine translation (Maruf et al., 2019; Ma et al., 2020; Yan et al., 2020; Meng and Zhang, 2019; Zhang et al., 2019).",
"Previous work can be roughly divided into two categories.",
"One (Wang et al., 2016b; Maruf et al., 2018; Zhang and Zhou, 2019; Rikters et al., 2020) mainly pays attention to automatically constructing the bilingual corpus since no publicly available human-annotated data (Farajian et al., 2020).",
"The other (Wang et al., 2021; Liang et al., 2021a,d) aims to incorporate the bilingual dialogue characteristics 11 https://code.google.com/archive/p/word2vec/ 12 We choose our constructed dialogue corpus to learn the word embedding.",
"into the NCT model via multi-task learning.",
"Different from the above studies, we focus on introducing the in-domain chat translation data to learn domain-specific patterns and scheduling the auxiliary tasks to exert their potential for high translation quality.",
"Multi-task Learning.",
"Conventional multi-task learning (MTL) (Caruana, 1997), which trains the model on multiple related tasks to promote the representation learning and generalization performance, has been successfully used in many NLP tasks (Collobert and Weston, 2008; Ruder, 2017; Deng et al., 2013; Liang et al., 2021c,b).",
"In the NCT, conventional MTL has been explored to inject the dialogue characteristics into models with dialogue-related tasks such as response generation (Liang et al., 2021a,d).",
"In this work, we instead focus on how to schedule the auxiliary tasks at training to make the most of them for better translations.",
"This paper proposes a scheduled multi-task learning framework armed with an additional in-domain pre-training stage and a gradient-based scheduled multi-task learning strategy.",
"Experiments on En Zh and En De demonstrate that our framework significantly improves translation quality on both BLEU and TER metrics, showing its effectiveness and generalizability.",
"Human evaluation further verifies that our model yields better translations in terms of coherence and fluency.",
"Furthermore, we contribute two large-scale in-domain paired bilingual dialogue datasets to the research community.",
"The research work descried in this paper has been supported by the National Key R&D Program of China (2020AAA0108001) and the National Nature Science Foundation of China (No. 61976015, 61976016, 61876198 and 61370130).",
"Liang is supported by 2021 Tencent Rhino-Bird Research Elite Training Program.",
"The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve this paper."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"result",
"abstain",
"objective",
"result",
"result",
"objective",
"objective",
"result",
"result",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"objective",
"objective",
"abstain",
"objective",
"other",
"other",
"other"
] |
[
"We present OPINIONDIGEST , an abstractive opinion summarization framework, which does not rely on gold-standard summaries for training.",
"The framework uses an Aspect-based Sentiment Analysis model to extract opinion phrases from reviews, and trains a Transformer model to reconstruct the original reviews from these extractions.",
"At summarization time, we merge extractions from multiple reviews and select the most popular ones.",
"The selected opinions are used as input to the trained Transformer model, which verbalizes them into an opinion summary.",
"OPINIONDIGEST can also generate customized summaries, tailored to specific user needs, by filtering the selected opinions according to their aspect and/or sentiment.",
"Automatic evaluation on YELP data shows that our framework outperforms competitive baselines.",
"Human studies on two corpora verify that OPINIONDIGEST produces informative summaries and shows promising customization capabilities 1 .",
"The summarization of opinions in customer reviews has received significant attention in the Data Mining and Natural Language Processing communities.",
"Early efforts (Hu and Liu, 2004a) focused on producing structured summaries which numerically aggregate the customers' satisfaction about an item across multiple aspects, and often included representative review sentences as evidence.",
"Considerable research has recently shifted towards textual opinion summaries, fueled by the increasing success of neural summarization methods (Cheng and Lapata, 2016; Paulus et al., 2018; See et al., 2017; Liu and Lapata, 2019; Isonuma et al., 2019).",
"Opinion summaries can be extractive, i.e., created by selecting a subset of salient sentences from the input reviews, or abstractive, where summaries are generated from scratch.",
"Extractive approaches produce well-formed text, but selecting the sentences which approximate the most popular opinions in the input is challenging.",
"Angelidis and Lapata (2018) used sentiment and aspect predictions as a proxy for identifying opinion-rich segments.",
"Abstractive methods (Chu and Liu, 2019; Brazinskas et al., 2019), like the one presented in this paper, attempt to model the prevalent opinions in the input and generate text that articulates them.",
"Opinion summarization can rarely rely on gold-standard summaries for training (see Amplayo and Lapata (2019) for a supervised approach).",
"Recent work has utilized end-to-end unsupervised architectures, based on auto-encoders (Chu and Liu, 2019; Brazinskas et al., 2019), where an aggregated representation of the input reviews is fed to a decoder, trained via reconstruction loss to produce review-like summaries.",
"Similarly to their work, we assume that review-like generation is appropriate for opinion summarization.",
"However, we explicitly deal with opinion popularity , which we believe is crucial for multi-review opinion summarization.",
"Additionally, our work is novel in its ability to explicitly control the sentiment and aspects of selected opinions.",
"The aggregation of input reviews is no longer treated as a black box, thus allowing for controllable summarization.",
"Specifically, we take a step towards more interpretable and controllable opinion aggregation, as we replace the end-to-end architectures of previous work with a pipeline framework.",
"Our method has three components:",
"a) a pre-trained opinion extractor, which identifies opinion phrases in reviews;",
"b) a simple and controllable opinion selector, which merges, ranks, and optionally filters the extracted opinions; and",
"c) a generator model, which is trained Good location close to the wharf, aquatic park and the many other attraction.",
"to reconstruct reviews from their extracted opinion phrases and can then generate opinion summaries based on the selected opinions.",
"We describe our framework in Section 2 and present two types of experiments in Section 3: A quantitative comparison against established summarization techniques on the YELP summarization corpus (Chu and Liu, 2019); and two user studies, validating the automatic results and our method's ability for controllable summarization.",
"Let D denote a dataset of customer reviews on individual entities { e 1 , e 2 , . . . , e | D | } from a single do-main, e.g., restaurants or hotels.",
"For every entity e , we define a review set R e = { r i } | R e | i =1 , where each review is a sequence of words r = ( w 1 , . . . , w n ) .",
"Within a review, we define a single opinion phrase, o = ( w o 1 , . . . w om ) , as a subsequence of tokens that expresses the attitude of the reviewer towards a specific aspect of the entity 2 .",
"Formally, we define the opinion set of r as O r = { ( o i , pol i , a i ) } | O r | i =1 , where pol i is the sentiment polarity of the i -th phrase ( positive , neutral , or negative ) and a i is the aspect category it discusses (e.g., a hotel's service , or cleanliness ).",
"For each entity e , our task is to abstractively generate a summary s e of the most salient opinions expressed in reviews R e .",
"Contrary to previous abstractive methods (Chu and Liu, 2019; Brazinskas et al., 2019), which never explicitly deal with opinion phrases, we put the opinion sets of reviews at the core of our framework, as described in the following sections and illustrated in Figure 1. 2 Words that form an opinion may not be contiguous in the review.",
"Extracting opinion phrases from reviews has been studied for years under the Aspect-based Sentiment Analysis (ABSA) task (Hu and Liu, 2004b; Luo et al., 2019; Dai and Song, 2019; Li et al., 2019).",
"We follow existing approaches to obtain an opinion set O r for every review in our corpus 3 .",
"Specifically, we used a pre-trained tagging model (Miao et al., 2020) to extract opinion phrases, their polarity, and aspect categories.",
"Step 1 (top-left) of Figure 1 shows a set of opinions extracted from a full review.",
"Given the set or reviews R e = { r 1 , r 2 , . . . } for an entity e , we define the entity 's opinion set as O e = { O r 1 O r 2 . . . } .",
"Summarizing the opinions about entity e relies on selecting the most salient opinions S e O e .",
"As a departure from previous work, we explicitly select the opinion phrases that will form the basis for summarization, in the following steps.",
"Opinion Merging: To avoid selecting redundant opinions in S e , we apply a greedy algorithm to merge similar opinions into clusters C = { C 1 , C 2 , ... } : given an opinion set O e , we start with an empty C , and iterate through every opinion in O e .",
"For each opinion, ( o i , pol i , a i ) , we further iterate through every existing cluster in random order.",
"The opinion is added to the first cluster C which satisfies the following criterion, or to a newly created cluster otherwise: ( o j , pol j , a j ) C, cos (v i , v j ) , 3 Our framework is flexible with respect to the choice of opinion extraction models.",
"where v i and v j are the average word embedding of opinion phrase o i and o j respectively, cos ( , ) is the cosine similarity, and (0 , 1] is a hyper-parameter.",
"For each opinion cluster { C 1 , C 2 , . . . } , we define its representative opinion Repr ( C i ) , which is the opinion phrase closest to its centroid.",
"Opinion Ranking: We assume that larger clusters contain opinions which are popular among reviews and, therefore, should have higher priority to be included in S e .",
"We use the representative opinions of the topk largest clusters, as selected opinions S e .",
"The Opinion Merging and Ranking steps are demonstrated in Step 2 (bottom-left) of Figure 1, where the top-3 opinion clusters are shown and their representative opinions are selected.",
"Opinion Filtering (optional): We can further control the selection by filtering opinions based on their predicted aspect category or sentiment polarity.",
"For example, we may only allow opinions where a i = cleanliness .",
"Our goal is to generate a natural language summary which articulates S e , the set of selected opinions.",
"To achieve this, we need a natural language generation (NLG) model which takes a set of opinion phrases as input and produces a fluent, review-like summary as output.",
"Because we cannot rely on gold-standard summaries for training, we train an NLG model that encodes the extracted opinion phrases of a single review and then attempts to reconstruct the review's full text.",
"Then, the trained model can be used to generate summaries.",
"Training via Review Reconstruction: Having extracted O r for every review r in a corpus, we construct training examples { T ( O r ) , r } , where T ( O r ) is a textualization of the review's opinion set, where all opinion phrases are concatenated in their original order, using a special token [SEP] .",
"For example: O r = { very comfy bed , clean bath } T ( O r ) = very comfy bed [SEP] clean bath The { T ( O r ) , r } pairs are used to train a Transformer model (Vaswani et al., 2017) 4 to reconstruct review text from extracted opinions, as shown in Step 3a (top-right) of Figure 1. 4 Our framework is flexible w.r.t. the choice of the model.",
"Summarization: At summarization time, we use the textualization of the selected opinions, T ( S e ) , as input to the trained Transformer, which generates a natural language summary s e as output (Figure 1, Step 3b).",
"We order the selected opinions by frequency (i.e., their respective cluster's size), but any desired ordering may be used.",
"We used two review datasets for evaluation.",
"The public YELP corpus of restaurant reviews, previously used by Chu and Liu (2019).",
"We used a different snapshot of the data, filtered to the same specifications as the original paper, resulting in 624K training reviews.",
"We used the same gold-standard summaries for 200 restaurants as used in Chu and Liu (2019).",
"We also used HOTEL , a private hotel review dataset that consists of 688K reviews for 284 hotels collected from multiple hotel booking web-sites.",
"There are no gold-standard summaries for this dataset, so systems were evaluated by humans.",
"LexRank (Erkan and Radev, 2004): A popular unsupervised extractive summarization method.",
"It selects sentences based on centrality scores calculated on a graph-based sentence similarity.",
"MeanSum (Chu and Liu, 2019): An unsupervised multi-document abstractive summarizer that minimizes a combination of reconstruction and vector similarity losses.",
"We only applied MeanSum to YELP , due to its requirement for a pre-trained language model, which was not available for HOTEL .",
"For opinion extraction, the ABSA models are trained with 1.3K labeled review sentences for YELP and 2.4K for HOTEL .",
"For opinion merging, we used pre-trained word embeddings Method I-score C-score R-score LexRank -35.4 -32.1 -13.5 MeanSum 14.2 4.9 9.0 OPINIONDIGEST 21.2 27.2 4.4",
"( glove.6B.300d ), = 0 .",
"8 , and selected the topk ( k = 15 ) most popular opinion clusters.",
"We trained a Transformer with the original architecture (Vaswani et al., 2017).",
"We used SGD with an initial learning rate of 0.1, a momentum of = 0 .",
"1 , and a decay of = 0 .",
"1 for 5 epochs with a batch size of 8.",
"For decoding, we used Beam Search with a beam size of 5, a length penalty of 0.6, 3-gram blocking (Paulus et al., 2018), and a maximum generation length of 60.",
"We tuned hyper-parameters on the dev set, and our system appears robust to their setting (see Appendix A).",
"We performed automatic evaluation on the YELP dataset with ROUGE-1 (R1), ROUGE-2 (R2), and ROUGE-L (RL) (Lin, 2004) scores based on the 200 reference summaries (Chu and Liu, 2019).",
"We also conducted user studies on both YELP and HOTEL datasets to further understand the performance of different models.",
"Automatic Evaluation: Table 1 shows the automatic evaluation scores for our model and the baselines on YELP dataset.",
"As shown, our framework outperforms all baseline approaches.",
"Although OPINIONDIGEST is not a fully unsupervised framework, labeled data is only required by the opinion extractor and is easier to acquire than gold-standard summaries: on YELP dataset, the opinion extraction models are trained on a publicly available ABSA dataset (Wang et al., 2017).",
"Human Evaluation: We conducted three user studies to evaluate the quality of the generated summaries (more details in Appendix B).",
"First, we generated summaries from 3 systems (ours, LexRank and MeanSum/Best Review) for every entity in YELP 's summarization test set and 200 Does the summary discuss the specified aspect: Exclusively Partially Not HOTEL 46.63 % 43.09 % 10.28 % Table 4: User study on aspect-specific summaries.",
"random entities in the HOTEL dataset, and asked judges to indicate the best and worst summary according to three criteria: informativeness (I), coherence (C), and non-redundancy (R).",
"The systems' scores were computed using Best-Worst Scaling (Louviere et al., 2015), with values ranging from -100 (unanimously worst) to +100 (unanimously best.) We aggregated users' responses and present the results in Table",
"2(a).",
"As shown, summaries generated by OPINIONDIGEST achieve the best informativeness and coherence scores compared to the baselines.",
"However, OPINIONDIGEST may still generate redundant phrases in the summary.",
"Second, we performed a summary content support study.",
"Judges were given 8 input reviews from YELP , and a corresponding summary produced either by MeanSum or by our system.",
"For each summary sentence, they were asked to evaluate the extent to which its content was supported by the input reviews.",
"Table 3 shows the proportion of summary sentences that were fully, partially, or not supported for each system.",
"OPINIONDIGEST produced sig-nificantly more sentences with full or partial support, and fewer sentences without any support.",
"Finally, we evaluated our framework's ability to generate controllable output.",
"We produced aspect-specific summaries using our HOTEL dataset, and asked participants to judge if the summaries discussed the specified aspect exclusively, partially, or not at all.",
"Table 4 shows that in 46.6% of the summaries exclusively summarized a specified aspect, while only 10.3% of the summaries failed to contain the aspect completely.",
"Example Output: Example summaries in Table 5 further demonstrate that",
"a) OPINIONDIGEST is able to generate abstractive summaries from more than a hundred of reviews and",
"b) produce controllable summaries by enabling opinion filtering.",
"The first two examples in Table 5 show summaries that are generated from 8 and 128 reviews of the same hotel.",
"OPINIONDIGEST performs robustly even for a large number of reviews.",
"Since our framework is not based on aggregating review representations, the quality of generated text is not affected by the number of inputs and may result in better-informed summaries.",
"This is a significant difference to previous work (Chu and Liu, 2019; Asp/Pol/N Input opinions Summary All/All/8 central location [SEP] lovely hotel [SEP] recommend room [SEP] good breakfast [SEP] very nice location [SEP] very dedicated staff [SEP] walking distance to coffee shops [SEP] perfect hotel [SEP] small bathroom [SEP] unkind personnel This hotel is in a perfect location , walking distance to a lot of shops and restaurants .",
"Brazinskas et al., 2019), where averaging vectors of many reviews may hinder performance.",
"Finally, we provide qualitative analysis of the controllable summarization abilities of OPINIONDIGEST , which are enabled by input opinion filtering.",
"As discussed in Section 2.2, we filtered input opinions based on predicted aspect categories and sentiment polarity.",
"The examples of controlled summaries (last 4 rows of Table 5) show that OPINIONDIGEST can generate aspect/sentiment-specific summaries.",
"These examples have redundant opinions and incorrect extractions in the input, but OPINIONDIGEST is able to convert the input opinions into natural summaries.",
"Based on OPINIONDIGEST , we have built an online demo (Wang et al., 2020) 5 that allows users to customize the generated summary by specifying search terms.",
"5 http://extremereader.megagon.info/ 4 Conclusion We described OPINIONDIGEST , a simple yet powerful framework for abstractive opinion summarization.",
"OPINIONDIGEST is a combination of existing ABSA and seq2seq models and does not require any gold-standard summaries for training.",
"Our experiments on the YELP dataset showed that OPINIONDIGEST outperforms baseline methods, including a state-of-the-art unsupervised abstractive summarization technique.",
"Our user study and qualitative analysis confirmed that our method can generate controllable high-quality summaries, and can summarize large numbers of input reviews.",
"We thank Hayate Iso for helping debug the code.",
"We also thank Prof. Mirella Lapata for helpful comments as well as the anonymous reviewers for their constructive feedback."
] | [
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"other",
"other"
] |
[
"This paper provides a new way to improve the efficiency of the REINFORCE training process.",
"We apply it to the task of instance selection in distant supervision.",
"Modeling the instance selection in one bag as a sequential decision process, a reinforcement learning agent is trained to determine whether an instance is valuable or not and construct a new bag with less noisy instances.",
"However unbiased methods, such as REINFORCE, could usually take much time to train.",
"This paper adopts posterior regularization (PR) to integrate some domain-specific rules in instance selection using REINFORCE.",
"As the experiment results show, this method remarkably improves the performance of the relation classifier trained on cleaned distant supervision dataset as well as the efficiency of the REINFORCE training.",
"Relation extraction is a fundamental work in natural language processing.",
"Detecting and classifying the relation between entity pairs from the unstructured document, it can support many other tasks such as question answering.",
"While relation extraction requires lots of labeled data and make methods labor intensive, (Mintz et al., 2009) proposes distant supervision (DS), a widely used automatic annotating way.",
"In distant supervision, knowledge base (KB) , such as Freebase, is aligned with nature documents.",
"In this way, the sentences which contain an entity pair in KB all express the exact relation that the entity pair has in KB.",
"We usually call the set of instances that contain the same entity pair a bag.",
"In this way, the training instances can be divided into N bags B = { B 1 , B 2 , ..., BN } .",
"Each bag B k are corresponding to an unique entity pair Corresponding author E k = ( e k 1 , e k 2 ) and contains a sequence of instances { x k 1 , x k 2 , ..., x k | B k | } .",
"However, distant supervision may suffer a wrong label problem.",
"In other words, the instances in one bag may not actually have the relation.",
"To resolve the wrong label problem, just like Fig.2 shows, (Feng et al., 2018) model the instance selection task in one bag B k as a sequential decision process and train an agent ( a | s, ) denoting the probability P ( A t = a, | S t = s, t = ) that action a is taken at time t given that the agent is in state s with parameter vector by REINFORCE algorithm (Sutton and Barto, 1998).",
"The action a can only be 0 or 1 indicating whether an instance x ki is truly expressing the relation and whether it should be selected and added to the new bag B k .",
"The state s is determined by the entity pair corresponding to the bag, the candidate instance to be selected and the instances that have already been selected.",
"Accomplishing this task, the agent gets a new bag B k at the terminal of the trajectory with less wrong labeled instances.",
"With the newly constructed dataset B = { B 1 , B 2 , ..., BN } with less wrong labeling instances, we can train bag level relation predicting models with better performance.",
"Meanwhile, the relation predicting model gives reward to the instance selection agent.",
"Therefore, the agent and the relation classifier can be trained jointly.",
"However, REINFORCE is a Monte Carlo algorithm and need stochastic gradient method to optimize.",
"It is unbiased and has good convergence properties but also may be of high variance and slow to train (Sutton and Barto, 1998).",
"Therefore, we train a REINFORCE based agent by integrating some other domain-specific rules to accelerate the training process and guide the agent to explore more effectively and learn a better policy.",
"Here we use a rule pattern as the Fig.1 shows ( ? ).",
"The instances that return true (match the pattern and label in any one of the rules) are denoted as x MI and we adopt posterior regularization method (Ganchev, 2010) to regularize the posterior distribution of ( a | s, ) on x MI .",
"In this way, we can construct a rule-based agent r .",
"r tends to regard the instances in x MI valuable and select them without wasting time in trial-and-error exploring.",
"The number of such rules is 134 altogether and can match nearly four percents of instances in the training data.",
"Our contributions include: We propose PR REINFORCE by integrating domain-specific rules to improve the performance of the original REINFORCE.",
"We apply the PR REINFORCE to the instance selection task for DS dataset to alleviate the wrong label problem in DS.",
"Among the previous studies in relation extraction, most of them are supervised methods that need a large amount of annotated data (Bach and Badaskar, 2007).",
"Distant supervision is proposed to alleviate this problem by aligning plain text with Freebase.",
"However, distant supervision inevitably suffers from the wrong label problem.",
"Some previous research has been done in handling noisy data in distant supervision.",
"An expressed-at-least-once assumption is employed in (Mintz et al., 2009): if two entities participated in a relation, at least one instance in the bag might express that relation.",
"Many follow-up studies adopt this assumption and choose a most credible instance to represent the bag.",
"(Lin et al., 2016; Ji et al., 2017) employs the attention mechanism to put different attention weight on each sentence in one bag and assume each sentence is related to the relation but have a different correlation.",
"neural network including CNN and RNN, these methods perform better than conventional feature-based methods.",
"Reinforcement learning has been widely used in data selection and natural language processing.",
"(Feng et al., 2018) adopts REINFORCE in instance selection for distant supervision which is the basis of our work.",
"Posterior regularization (Ganchev, 2010) is a framework to handle the problem that a variety of tasks and domains require the creation of large problem-specific annotated data.",
"This framework incorporates external problem-specific information and put a constraint on the posterior of the model.",
"In this paper, we propose a rule-based REINFORCE based on this framework.",
"In this section, we focus on the model details.",
"Besides the interacting process of the relation classifier and the instance selector, we will introduce how to model the state, action, reward of the agent and how we add rules for the agent in training process.",
"We need a pretrained basic relation classifier to define the reward and state.",
"In this paper, we adopt the BGRU with attention bag level relation classifier f b (Zhou et al., 2016).",
"With o denoting the output of f b corresponding to the scores associated to each relation, the conditional probability can be written as follows: P f b ( r | B k , b ) = exp ( o r ) (cid:80) n r k =1 exp ( o k ) (1) where r is relation type, n r is the number of relation types, b is the parameter vector of the basic relation classifier f b and B k denotes the input bag of the classifier.",
"In the basic classifier, the sentence representation is calculated by the sentence encoder network BGRU, the BGRU takes the instance x ki as input and output the sentence representation BGRU( x ki ).",
"And then the sentence level(ATT) attention will take { BGRU ( x k 1 ) , BGRU ( x k 2 ) , ..., BGRU ( x k | B k | ) } as input and output o which is the final output of f b corresponding to the scores associated to each relation.",
"Original REINFORCE agent training process is quite similar to (Feng et al., 2018).",
"The instance selection process for one bag is completed in one trajectory.",
"Agent ( a | s, ) is trained as an instance selector.",
"The key of the model is how to represent the state in every step and the reward at the terminal of the trajectory.",
"We use the pretrained f b to address this key problem.",
"The reward defined by the basic relation classifier is as follows: R = log P f b ( r k | B k , b ) (2) In which r k denotes the corresponding relation of B k .",
"The state s mainly contained three parts: the representation of the candidate instance, the representation of the relation and the representation of the instances that have been selected.",
"The representation of the candidate instance are also defined by the basic relation classifier f b .",
"At time step t, we use BGRU( x kt ) to represent the candidate instance x kt and the same for the selected instances.",
"As for the embedding of relation, we use the entity embedding method introduced in TransE model (Bordes et al., 2013) which is trained on the Freebase triples that have been mentioned in the training and testing dataset, and the relation embedding re k will be computed by the difference of the entity embedding element-wise.",
"REINFORCE uses the complete return, which includes all future rewards up until the end of the trajectory.",
"In this sense, all updates are made after the trajectory is completed (Sutton and Barto, 1998).",
"These stochastic properties could make the training slow.",
"Fortunately, we have some domain-specific rules that could help to train the agent and adopt posterior regularization framework to integrate these rules.",
"The goal of this framework is to restrict the posterior of .",
"It can guide the agent towards desired behavior instead of wasting too much time in meaninglessly exploring.",
"Since we assume that the domain-specific rules have high credibility, we designed a rule-based policy agent r to emphasize their influences on .",
"The posterior constrains for is that the policy posterior for x MI is expected to be 1 which indicates that agent should select the x MI .",
"This expectation can be written as follows: EP [ l ( A t = 1)] = 1 (4) where l here is the indicator function.",
"In order to transfer the rules into a new policy r , the KL divergence between the posterior of and r should be minimized, this can be formally defined as minKL ( P ( A t | S t , ) || P r ( A t | S t , )) (5) Optimizing the constrained convex problem defined by",
"Eq.(4) and",
"Eq.(5), we get a new policy r : P r ( A t | S t , ) = P ( A t | S t , ) exp ( l ( A t = 1) 1) Z (6) where Z is a normalization term.",
"Z = 1 (cid:88) A t =0 P r ( A t | X, ) exp ( l ( A t = 1) 1) Algorithm 1 formally define the overall framework of the rule-based data selection process.",
"Our experiment is designed to demonstrate that our proposed methodologies can train an instance selector more efficiently.",
"Data: Original DS Dataset: B = { B 1 , B 2 , ..., BN } , Max Episode:M, Basic Relation Classifier: f b , Step Size: Result: An Instance Selector initialization policy weight (cid:48) = ; initialization classifier weight (cid:48) b = b ; for episode m=1 to M do for B k in B do B k = { x k 1 , x k 2 , ..., x k | B k | } , B k = {} ; for step i in | B k | do construct s i by B k , x ki , re k ; if x ki x MI then construct r ; sample action A i follow r ( a | s i , (cid:48) ) ; else sample action A i follow ( a | s i , (cid:48) ) ; end if A i =1 then Add x ki in B k ; end end Get terminal reward: R = log P f b ( r k | B k , (cid:48) b ) ; Get step delayed reward: R i =R; Update agent: + (cid:80) | B k | i =1 R i log end (cid:48) = + (1 ) (cid:48) ; Update the classifier f b ; end Algorithm 1: PR REINFORCE We tuned our model using three-fold cross validation on the training set.",
"For the parameters of the instance selector, we set the dimension of entity embedding as 50, the learning rate as 0.01.",
"The delay coefficient is 0.005.",
"For the parameters of the relation classifier, we follow the settings that are described in (Zhou et al., 2016).",
"The comparison is done in rule-based reinforcement learning method, original reinforcement learning and method with no reinforcement learning which is the basic relation classifier trained on original DS dataset.",
"We use the last as the baseline.",
"A widely used DS dataset, which is developed by (Riedel et al., 2010), is used as the original dataset to be selected.",
"The dataset is generated by aligning Freebase with New York Times corpus.",
"We compare the data selection model performance by the final performance of the basic model trained on newly constructed dataset selected by different models.",
"We use the precision/recall curves as the main metric.",
"Fig.3 presents this comparison.",
"PR REINFORCE constructs cleaned DS dataset with less noisy data compared with the original REINFORCE so that the BGRU+2ATT classifier can reach better performance.",
"In this paper, we develop a posterior regularized REINFORCE methodology to alleviate the wrong label problem in distant supervision.",
"Our model makes full use of the hand-crafted domain-specific rules in the trial and error search during the training process of REINFORCE method for DS dataset selection.",
"The experiment results show that PR REINFORCE outperforms the original REINFORCE.",
"Moreover, PR REINFORCE greatly improves the efficiency of the REINFORCE training.",
"This work has been supported in part by NSFC (No.61751209, U1611461), 973 program (No. 2015CB352302), Hikvision-Zhejiang University",
"Joint Research Center, Chinese Knowledge Center of Engineering Science and Technology (CK-CEST), Engineering Research Center of Digital Library, Ministry of Education.",
"Xiang Ren's research has been supported in part by National Science Foundation SMA 18-29268."
] | [
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"method",
"method",
"abstain",
"abstain",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"objective",
"method",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Pretrained masked language models (MLMs) require finetuning for most NLP tasks.",
"Instead, we evaluate MLMs out of the box via their pseudo-log-likelihood scores (PLLs), which are computed by masking tokens one by one.",
"We show that PLLs outperform scores from autoregressive language models like GPT-2 in a variety of tasks.",
"By rescoring ASR and NMT hypotheses, RoBERTa reduces an end-to-end LibriSpeech model's WER by 30% relative and adds up to +1.7 BLEU on state-of-the-art baselines for low-resource translation pairs, with further gains from domain adaptation.",
"We attribute this success to PLL's unsupervised expression of linguistic acceptability without a left-to-right bias, greatly improving on scores from GPT-2 (+10 points on island effects, NPI licensing in BLiMP).",
"One can finetune MLMs to give scores without masking, enabling computation in a single inference pass.",
"In all, PLLs and their associated pseudo-perplexities (PP-PLs) enable plug-and-play use of the growing number of pretrained MLMs; e.g., we use a single cross-lingual model to rescore translations in multiple languages.",
"We release our library for language model scoring at https: //github.com/awslabs/mlm-scoring .",
"BERT (Devlin et al., 2019) and its improvements to natural language understanding have spurred a rapid succession of contextual language representations (Yang et al., 2019; Liu et al., 2019; inter alia ) which use larger datasets and more involved training schemes.",
"Their success is attributed to their use of bidirectional context, often via their masked language model (MLM) objectives.",
"Here, a token w t is replaced with [MASK] and predicted using all past and future tokens W \\ t := ( w 1 , . . . , w t 1 , w t +1 , . . . , w | W | ) .",
"In contrast, conventional language models (LMs) predict w t using only past tokens W <t := ( w 1 , . . . , w t 1 ) .",
"However, this allows LMs to estimate log probabilities for a sentence W via the chain rule ( log PLM ( W ) = (cid:80) | W | t =1 log PLM ( w t | W <t ) ), which can be used out of the box to rescore hypotheses in end-to-end speech recognition and machine translation (Chan et al., 2016; Gulcehre et al., 2015), and to evaluate sentences for linguistic acceptability (Lau et al., 2017).",
"Our work studies the corresponding pseudo-log-likelihood scores (PLLs) from MLMs (Wang and Cho, 2019), given by summing the conditional log probabilities log PMLM ( w t | W \\ t ) of each sentence token (Shin et al., 2019).",
"These are induced in BERT by replacing w t with [MASK] (Figure 1).",
"PLLs and their corresponding pseudo-perplexities (PPPLs) (Section 2.3) are intrinsic values one can assign to sentences and corpora, allowing us to use MLMs in applications previously restricted to conventional LM scores.",
"Furthermore, we show that one can finetune BERT to compute PLLs in a single, non-recurrent inference pass (Section 2.2).",
"Existing uses of pretrained MLMs in sequence-to-sequence models for automatic speech recognition (ASR) or neural machine translation (NMT) involve integrating their weights (Clinchant et al., 2019) or representations (Zhu et al., 2020) into the encoder and/or decoder during training.",
"In contrast, we train a sequence model independently, then rescore its n -best outputs with an existing MLM.",
"For acceptability judgments, one finetunes MLMs for classification using a training set (Warstadt et al., 2019; Devlin et al., 2019); instead, PLLs give unsupervised, relative judgements directly.",
"In Section 3, we show that scores from BERT compete with or even outperform GPT-2 (Radford et al., 2019), a conventional language model of similar size but trained on more data.",
"Gains scale with dataset and model size: RoBERTa large (Liu et al., 2019) improves an end-to-end ASR model with relative WER reductions of 30%, 18% on LibriSpeech test-clean , test-other respectively (with further gains from domain adaptation), and improves state-of-the-art NMT baselines by up to +1.7 BLEU on low-resource pairs from standard TED Talks corpora.",
"In the multilingual case, we find that the pretrained 15-language XLM (Conneau and Lample, 2019) can concurrently improve NMT systems in different target languages.",
"In Section 4, we analyze PLLs and propose them as a basis for other ranking/scoring schemes.",
"Unlike log probabilities, PLL's summands are more uniform across an utterance's length (no left-to-right bias), helping differentiate fluency from likeliness.",
"We use PLLs to perform unsupervised acceptability judgments on the BLiMP minimal pairs set (Warstadt et al., 2020); BERT and RoBERTa models improve the state of the art (GPT-2 probabilities) by up to 3.9% absolute, with +10% on island effects and NPI licensing phenomena.",
"Hence, PLLs can be used to assess the linguistic competence of MLMs in a supervision-free manner.",
"Bidirectional contextual representations like BERT come at the expense of being true language models PLM ( W ) , as there may appear no way to generate text (sampling) or produce sentence probabilities (density estimation) from these models.",
"This handicapped their use in generative tasks, where they at best served to bootstrap encoder-decoder models (Clinchant et al., 2019; Zhu et al., 2020) or unidirectional LMs (Wang et al., 2019).",
"However, BERT's MLM objective can be viewed as stochastic maximum pseudolikelihood estimation (MPLE) (Wang and Cho, 2019; Besag, 1975) on a training set W , where { w t } | W | t =1 are random variables in a fully-connected graph.",
"This approximates conventional MLE, with MLM training asymptotically maximizing the objective: JPL (; W ) = 1 | W | (cid:88) W WPLL ( W ; ) .",
"In this way, MLMs learn an underlying joint distribution whose conditional distributions w t | W \\ t are modeled by masking at position t .",
"We include a further discussion in Appendix B. This enabled text generation with BERT via Gibbs sampling, leading to the proposal (but not evaluation) of a related quantity, the sum of logits, for sentence ranking (Wang and Cho, 2019).",
"More recent work (Shin et al., 2019) extended past research on future-conditional LMs in ASR (Sec-tion 5) with deeply-bidirectional self-attentive language models (bi-SANLMs).",
"They trained shallow models from scratch with the [MASK] scoring method, but did not relate their work to pseudolikelihood and fluency, which provide a framework to explain their success and observed behaviors.",
"Experimentally, we extend both works by evaluating pretrained models, domain adaptation, and usage in NMT and multilingual settings (Section 3), along with acceptability judgements and PLL's intrinsic numerical properties (Section 4).",
"A practical point unaddressed in both works is that computing PLLs from an MLM requires a sentence copy for each position, making the number of inference passes dependent on length (though these can be parallelized).",
"The cost of a softmax is also incurred, which is dependent on vocabulary size V ; together this gives O ( | W | V ) .",
"We propose reducing this to O (1) by training a network q with parameters S to match BERT's PLLs without [MASK] tokens: | PLL ( W ) q ( W ; S ) | 2 .",
"We propose finetuning q from the pretrained MLM directly (i.e., initializing S with ), via regression over the [CLS] token (Figure 2): Figure 2: We learn a linear map after the [CLS] token, supervised by the PLLs from the pretrained MLM.",
"More generally, one could use any student model q , as in knowledge distillation (Hinton et al., 2014).",
"Here, the teacher gives individual token probabilities ( | W | inference passes) while the student approximates their sum (one inference pass).",
"This is reminiscent of distilling an autoregressive teacher to a parallel student, as in the case of WaveNet (Oord et al., 2018).",
"Other [MASK] less bidirectional models like XLNet (Yang et al., 2019) can also give PLLs; we leave this to future work.",
"Analogous to conventional LMs, we propose the pseudo-perplexity (PPPL) of an MLM as an intrinsic measure of how well it models a corpus of sentences W .",
"Let N denote the number of tokens in the corpus.",
"Then a model's PPPL on W is PPPL ( W ) := exp (cid:32) 1 N (cid:88) W WPLL ( W ) (cid:33) .",
"Past work (Chen et al., 2017) also computed this quantity with bi-RNNLMs for ASR, although such models are not deeply bidirectional like self-attentive MLMs (see Section 5).",
"These PPPLs can be used in lieu of perplexities.",
"can perform early stopping with respect to development PPPL.",
"This is in contrast to MLM accuracy, which is not a continuous loss and is often stochastic (e.g., when performing dynamic masking as in RoBERTa).",
"In Section 4.1, we see that PPPLs naturally separate out sets of acceptable and unacceptable sentences.",
"Unlike previous works (Chen et al., 2017; Shin et al., 2019) we use pretrained BERTs, which are open-vocabulary (subword) bidirectional LMs.",
"However, PPPLs are only comparable under the same subword vocabulary, which differs between e.g., BERT and RoBERTa.",
"Normalizing with N as the number of words mitigates this.",
"In Appendix C, we show that word-normalized PPPLs correlate with domain adaptation, and with downstream metrics like ASR and BLEU after rescoring.",
"Let X denote audio features or source text tokens, and let W = ( w 1 , . . . , w | W | ) denote target text tokens.",
"For non-end-to-end ASR and MT systems, having separate acoustic/translation models PAM / TM ( X | W ) and language models PLM ( W ) is motivated by the Bayes rule decomposition used to select the best hypothesis W (Jelinek et al., 1975; Brown et al., 1993): W = arg max W [ P ( W | X )] = arg max W [ PAM / TM ( X | W ) PLM ( W )] .",
"End-to-end ASR and NMT use encoder-decoder architectures that are trained discriminatively.",
"Though less principled, many still adopt a log-linear model W = arg max W [log P ( W | X )] arg max W [log f ( W , X ) + log g ( W )] with learned functions f, g and a hyperparameter , to good effect (Sutskever et al., 2014; Chan et al., 2016).",
"One often takes f = P S2S ( W | X ) as the sequence-to-sequence model and g = PLM ( W ) as the language model.",
"Since the sequence-level arg max is intractable, one can do fusion , which decomposes f = (cid:81) f t and g = (cid:81) g t over time (Gul-cehre et al., 2015), restricting to the top N intermediate candidates at each step (beam search).",
"Instead, our work considers N -best rescoring , which computes f ( W , X ) first, still using beam search to maintain the top N candidates and scores.",
"Then, g ( W ) is computed for the resulting hypotheses and interpolated with these scores, giving a new top-1 hypothesis.",
"The sequence model is now solely responsible for capturing the best hypothesis W in its beam.",
"However, there are two advantages to N -best rescoring, which motivate PLLs as well as our maskless finetuning approach, respectively: Decoupling of scale.",
"Fusion requires correspondence between f t and g t at every t .",
"This requires the sequence model and LM to be autoregressive and share tokenizations.",
"In rescoring, f = P S2S does not require g to decompose over time or to be a true probability at all, though g should scale with f so that remains valid for all lengths | W | ; e.g., taking g ( W ) to be a relevance score between 0 and 1 would not satisfy this property.",
"The choice of log-linear is relevant here (Appendix B).",
"Length-independent inference.",
"If g is nonrecurrent, then g ( W ) may be computed in a single inference pass.",
"This difference manifests with self-attentive LMs like SANLMs and Transformer-XL (Dai et al., 2019), as recently explored for N -best rescoring in ASR (Li et al., 2019; Shin et al., 2019).",
"Further implementation and experimental details can be found in Appendix A and our code release:",
"LMs.",
"We rescore sequence-to-sequence hypotheses as in Section 3.1.",
"Each hypothesis is assigned its log probability (uni-SANLM, GPT-2) or pseudo-log-likelihood score (bi-SANLM, BERT, M-BERT, RoBERTa, XLM).",
"We tune the LM weight on the development set to minimize word error rate (WER) for ASR or maximize tokenized BLEU for NMT.",
"We then evaluate on the test set.",
"ASR.",
"Our 100-best hypotheses are from an end-to-end, 5-layer BLSTMP model (Shin et al., 2019) from ESPnet (Watanabe et al., 2018) on the 960-hour LibriSpeech corpus (Panayotov et al., 2015).",
"Though this baseline is not state-of-the-art, we use their lists to enable direct comparison in Table 5.",
"NMT.",
"Our 100-best hypotheses are from strong Transformer baselines with BPE subwords.",
"One was pretrained for WMT 2014 English-German (Vaswani et al., 2017); the others are state-of-the-art low-resource models we trained for five pairs from the TED Talks corpus (Qi et al., 2018) and for IWSLT 2015 English-Vietnamese (Cettolo et al., 2015), which we also describe in a dedicated, concurrent work (Nguyen and Salazar, 2019).",
"For the low-resource models we scored tokenized hypotheses (though with HTML entities unescaped, e.g., " (cid:55) \" ).",
"Length normalization (Wu et al., 2016) is applied to NMT ( = 0 . 6 ) and LM ( = 1 . 0 ) scores (Section 4.3).",
"We consider BERT (Devlin et al., 2019), GPT-2 (Radford et al., 2019), and RoBERTa (Liu et al., 2019), which are trained on 17GB, 40GB, and 160GB of written text respectively.",
"Each model comes in similarly-sized 6-layer (117M / base) and 12-layer (345M / large) versions.",
"GPT-2 is autoregressive, while BERT and RoBERTa are MLMs.",
"We begin by rescoring ASR outputs in Table 2: Model dev test clean other clean other baseline (100-best) 7.17 19.79 7.26 20.37 GPT-2 (117M, cased) 5.39 16.81 5.64 17.60 BERT (base, cased) 5.17 16.44 5.41 17.41 RoBERTa (base, cased) 5.03 16.16 5.25 17.18 GPT-2 (345M, cased) 5.15 16.48 5.30 17.26 BERT (large, cased) 4.96 16.26 5.25 16.97 RoBERTa (large, cased) 4.75 15.81 5.05 16.79 oracle (100-best) 2.85 12.21 2.81 12.85 Table 2: WERs on LibriSpeech after rescoring.",
"As GPT-2 is trained on cased, punctuated data while the ASR model is not, we use cased MLMs and append . to hypotheses to compare out-of-the-box performance.",
"BERT outperforms its corresponding GPT-2 models despite being trained on less data.",
"RoBERTa reduces WERs by 30% relative on LibriSpeech test-clean and 18% on test-other .",
"We repeat the same on English-target NMT in Table",
"3. As 100-best can be worse than 4-best due to the beam search curse (Yang et al., 2018; Murray and Chiang, 2018), we first decode both beam sizes to ensure no systematic degradation in our models.",
"Hypothesis rescoring with BERT (base) gives up to +1.1 BLEU over our strong 100-best baselines, remaining competitive with GPT-2.",
"Using RoBERTa (large) gives up to +1.7 BLEU over the baseline.",
"Incidentally, we have demonstrated conclusive improvements on Transformers via LM rescoring for the first time, despite only using N -best lists; the most recent fusion work (Stahlberg et al., 2018) only used LSTM-based models.",
"We also consider a non-English, higher-resource target by rescoring a pre-existing WMT 2014 English-German system (trained on 4.5M sentence pairs) with German BERT (base) models 1 trained on 16GB of text, similar to English BERT.",
"From 27.3 BLEU we get +0.5, +0.3 from uncased, cased; a diminished but present effect that can be improved as in Table 3 with more pretraining, a larger model, or domain adaptation (Section 3.5).",
"To assess the limits of our modular approach, we ask whether a shared multilingual MLM can improve translation into different target languages.",
"We use the 100+ language M-BERT models, and the 15-language XLM models (Conneau and Lam-ple, 2019) optionally trained with a crosslingual translation LM objective (TLM).",
"Monolingual training was done on Wikipedia, which gives e.g., 6GB of German text; see Table",
"4. The 100-language M-BERT models gave no consistent improvement.",
"The 15-language XLMs fared better, giving +0.2-0.4 BLEU, perhaps from their use of language tokens and fewer languages.",
"Our 1 https://github.com/dbmdz/german-bert Model IWSLT'15 TED Talks en vi en de en ar Wang et al. (2018) 29.09 Aharoni et al. (2019) 23.31 12.95 our baseline (4-best) 31.94 30.50 13.95 our baseline (100-best) 31.84 30.44 13.94 M-BERT (base, uncased) 32.12 30.48 13.98 M-BERT (base, cased) 32.07 30.45 13.94 XLM (base*, uncased) 32.27 30.61 14.13 + TLM objective 32.26 30.62 14.10 de-BERT (base, uncased) 31.27 de-BERT (base, cased) 31.22 Table 4: Test BLEU scores for language pairs with non-English targets, after hypothesis rescoring.",
"German BERT results suggest an out-of-the-box upper bound of +0.8 BLEU, as we found with English BERT on similar resources.",
"We expect that increasing training data and model size will boost XLM performance, as in Section 3.3.",
"Out-of-the-box rescoring may be hindered by how closely our models match the downstream text.",
"For example, our uncased multilingual models strip accents, exacerbating their domain mismatch with the cased, accented gold translation.",
"We examine this effect in the setting of LibriSpeech, which has its own 4GB text corpus and is fully uncased and unpunctuated, unlike the cased MLMs in Section 3.3.",
"We rescore using in-domain models in Table 5: Model dev test clean other clean other baseline (100-best) 7.17 19.79 7.26 20.37 uni-SANLM 6.08 17.32 6.11 18.13 bi-SANLM 5.52 16.61 5.65 17.44 BERT (base, Libri. only) 4.63 15.56 4.79 16.50 BERT (base, cased) 5.17 16.44 5.41 17.41 BERT (base, uncased) 5.02 16.07 5.14 16.97 + adaptation, 380k steps 4.37 15.17 4.58 15.96 oracle (100-best) 2.85 12.21 2.81 12.85 Table 5: WERs on LibriSpeech after hypothesis rescoring.",
"Using a BERT model trained only on the text corpus outperforms RoBERTa (Table 2) which is trained on far more data, underscoring the tradeoff between in-domain modeling and out-of-the-box integration.",
"Even minor differences like casing gives +0.3-0.4 WER at test time.",
"In Section 4.3 we see that these domain shifts can be visibly observed from the positionwise scores log PMLM ( w t | W \\ t ) .",
"The best results (adaptation) still come from adapting a pretrained model to the target corpus.",
"We proceed as in BERT, i.e., performing MLM on sequences of concatenated sentences (more details in Appendix A).",
"In contrast, the 3-layer SANLMs (Shin et al., 2019) do per-utterance training, which is slower but may reduce mismatch even further.",
"Finally, we show in Appendix C that even before evaluating WER or BLEU, one can anticipate improvements in the downstream metric by looking at improvements in word-normalized PPPL on the target corpus.",
"The domain-adapted MLM has lower PPPLs than the pretrained models, and RoBERTa has lower PPPLs than BERT.",
"We finetune BERT to produce scores without [MASK] tokens.",
"For LibriSpeech we take the normalized text corpus and keep sentences with length | W | 384, score them with our adapted BERT (base), then do sentence-level regression (Section 2.2).",
"We train using Adam with a learning rate of 10 5 for 10 epochs (Table 6): Model dev clean other baseline (100-best) 7.17 19.79 GPT-2 (117M, cased) 5.39 16.81 BERT (base, uncased, adapted) 4.37 15.17 + no masking 5.79 18.07 + sentence-level finetuning 4.61 15.53 Table 6: WERs on LibriSpeech upon rescoring, showing the effects of single-copy, maskless scoring.",
"Sentence-level finetuning degrades performance by +0.2-0.4 WER, leaving room for future improvement.",
"This still outperforms GPT-2 (117M, cased), though this gap may be closed by adaptation.",
"For now, maskless finetuning could be reserved for cases where only a masked language model is available, or when latency is essential.",
"Remarkably, we found that out-of-the-box scoring without [MASK] still significantly improves the baseline.",
"This is likely from the 20% of the time BERT does not train on [MASK] , but instead inputs a random word or the same word (Devlin et al., 2019).",
"Future work could explore finetuning to positionwise distributions, as in word-level knowledge distillation (Kim and Rush, 2016), for which our results are a nave performance bound.",
"Although end-to-end models f = P S2S ( W | X ) predict W directly from X , interpolation with the unconditional g = PLM ( W ) remains helpful (Toshniwal et al., 2018).",
"One explanation comes from cold and simple fusion (Sriram et al., 2018; Stahlberg et al., 2018), which further improve on shallow fusion (Section 3.1) by learning g ( W ) first.",
"They argue g expresses fluency ; fixing g early allows f ( W , X ) to focus its capacity on adequacy in encoding the source, and thus specializing the two models.",
"With this perspective in mind, we compare log PLM and PLL as candidates for log g .",
"In this work we interpret fluency as linguistic acceptability (Chomsky, 1957); informally, the syntactic and semantic validity of a sentence according to human judgments (Schutze, 1996).",
"Its graded form is well-proxied by neural language model scores ( log PLM ) once length and lexical frequency are accounted for (Lau et al., 2017).",
"This can be seen in a controlled setting using minimal pairs and GPT-2 (345M) scores: Raymond is selling this sketch.",
"This example is from the Benchmark of Linguistic Minimal Pairs (BLiMP) (Warstadt et al., 2020), a challenge set of 67k pairs which isolate contrasts in syntax, morphology, and semantics (in this example, determiner-noun agreement).",
"While its predecessor, the Corpus of Linguistic Acceptability (CoLA), has a training set and asks to label sentences as acceptable or not in isolation (Warstadt et al., 2019), BLiMP provides an unsupervised setting: language models are evaluated on how often they give the acceptable sentence a higher (i.e., less negative) score.",
"This is equivalent to 2-best rescoring without sequence model scores ( log f = 0 ).",
"Since most minimal pairs only differ by a single word, the effect of length on log probabilities and PLLs (discussed in Section 4.3) is mitigated.",
"We compute PLLs on the sentences of each pair using cased BERT and RoBERTa, then choose the sentence with the highest score.",
"Our results are in Table",
"7. Despite using less than half the data and a Model (cased) O v e r a ll AN A .",
"third of the capacity, BERT (base) already outperforms the previous state of the art (GPT-2) by 1.6% absolute, increasing to 3.9% with RoBERTa (large).",
"There are 4 of 12 categories where all four PLLs outperform log probabilities by 1% absolute (val-ues marked by *), and 7 where three or more PLLs outperform by this margin.",
"Interestingly, PLLs do consistently worse on quantifiers, though all are relatively bad against the human baseline.",
"The ratio of token-level PPPLs between unacceptable and acceptable sentences overall increases with performance, separating the two sentence sets.",
"RoBERTa improves by around 10% on filler-gap dependencies, island effects, and negative polarity items (NPIs), largely closing the human gap.",
"This suggests that the difficulty of these BLiMP categories was due to PLM decomposing autoregres-sively, and not intrinsic to unsupervised language model training, as the original results may suggest (Warstadt et al., 2020).",
"For some intuition, we include examples in Table",
"8. In the subject-verb agreement example, BERT sees The pamphlets and resembled those photographs when scoring have vs. has , whereas GPT-2 only sees The pamphlets , which may not be enough to counter the misleading adjacent entity Winston Churchill at scoring time.",
"We observed that log g = PLL ( W ) is not unduly affected by unconditional token frequencies; this mitigates degradation in adequacy upon interpolation with P S2S .",
"Consider a two-word proper noun, e.g., W = San Francisco: log PLM ( W ) = log PLM ( San ) + log PLM ( Francisco | San ) (cid:28) log PMLM ( San | Francisco ) + log PMLM ( Francisco | San ) = PLL ( W ) .",
"It is a highly-fluent but low-probability bigram and thus gets penalized by log PLM ( W ) .",
"Informally, PLL ( W ) expresses how likely each token is given other tokens (self-consistency), while log PLM ( W ) expresses the unconditional probability of a sentence, beginning with the costly unconditional term PLM ( San ) .",
"We see this in practice when we take LM to be GPT-2 (345M) and MLM to be RoBERTa (large).",
"Substituting in the actual scores: log P GPT-2 ( W ) = 8 .",
"693 = ( 7 . 749) + ( 0 . 944) (cid:28) ( 0 . 006) + ( 1 . 000) = 1 .",
"006 = PLL RoBERTa ( W ) .",
"Both give similar probabilities P ( Francisco | San ) e 1 .",
"0 37%, but differ in the first summand.",
"We examine the interplay of this bias with our sequence models, in cases where the baseline, GPT-2, and BERT gave different top-1 hypotheses (Ta-ble 8).",
"In our examples, GPT-2 restores fluency using common and repeated words, at the cost of adequacy: clasping truth and (cid:55) class in truth and , Union by the Union Sivities (cid:55) Union by the Union by the Union Civities .",
"One can view these as exacerbations of the rare word problem due to overconfident logits (Nguyen and Chiang, 2018), and of over-translation (Tu et al., 2016).",
"Meanwhile, BERT rewards self-consistency, which lets rarer but still-fluent words with better acoustic or translation scores to persist: clasping truth and (cid:55) clasping truth in , Union by the Union Sivities (cid:55) Union by the Union of LiberCivities , System Model Output sentence BLiMP (S-V agreement) BERT The pamphlets about Winston Churchill have resembled those photographs.",
"which preserves the p sound in the ground truth ( clapping ) for ASR, and promotes the more globally-fluent Union by the Union of LiberCivities .",
"We also see the under-translation (i.e., omission) of Liber being corrected, without being discouraged by the rare sequence LiberCivities .",
"Given the differences between PLLs and log probabilities, we explore whether ensembling both improves performance in Appendix D. Similar to the largely-dominant results of MLMs on BLiMP over GPT-2 (Section 4.1), we find that as the MLM gets stronger, adding GPT-2 scores has negligible effect, suggesting that their roles overlap.",
"PLL's numerical properties make it an ideal foundation for future ranking or scoring schemes.",
"For example, given fixed | W | one expects log PMLM ( w t | W \\ t ) to be in the same range for all t .",
"Meanwhile log PLM ( w t | W <t ) decreases as t | W | , the rate of which was studied in recurrent language models (Takahashi and Tanaka-Ishii, 2018).",
"We validate this with GPT-2 (Figure 3) and BERT (Figure 4).",
"In particular, we see the outsized cost of the unconditional first unigram in Figure",
"3. This also explains why bi-SANLM was more robust than uni-SANLM at shorter and earlier positions (Shin et al., 2019); the difference is intrinsic to log probabilities versus PLLs, and is not due to model or data size.",
"Figure 4 also shows that domain adaptation (Sec-tion 3.5) affects PLL's positionwise cross-entropies.",
"Cased BERT spikes at position 1, as it observes a lowercase word where a capitalized word is expected.",
"All MLMs spike at the final token of an utterance, before our appended period . .",
"Terminal 1 3 5 7 9 11 13 15 17 19 Context length ( t 1) 4.0 4.5 5.0 5.5 6.0 6.5 7.0 C r o ss e n t r o p y GPT-2 (117M, cased), test-clean GPT-2 (117M, cased), test-other GPT-2 (345M, cased), test-clean GPT-2 (345M, cased), test-other Figure 3: Cross-entropy (natural base) of w t | W <t versus context length ( t 1 ) from GPT-2 models, averaged over LibriSpeech's test utterances.",
"words are difficult to predict in general, but here more so as the BERT+LibriSpeech text corpora and the LibriSpeech test set are mismatched; the latter's ground-truth utterances were segmented by voice activity and not punctuation (Panayotov et al., 2015).",
"Otherwise, the averaged cross-entropies are flat.",
"This, plus our success on BLiMP, suggest positionwise scores as a way of detecting disfluencies (at least, those in the form of domain mismatches) by observing spikes in cross-entropy; with log PLM , spikes are confounded by the curve in Figure",
"3. In Appendix C, we plot sentence-level PLLs versus | W | and observe linearity as | W | , with spikes from the last word and lowercase first word smoothing out.",
"This behavior motivates our choice of = 1 .",
"0 when applying the Google NMT-style length penalty (Wu et al., 2016) to PLLs, which corresponds to the asymptotically-linear LPMLM = (5 + | W | ) / (5 + 1) .",
"In contrast, autoregressive scores like PLM ( W ) integrate over the inverse power-law curve in Figure",
"3. We speculate that this explains the effectiveness of their hyperparameter = 0 .",
"6 , widely used in NMT baselines like ours, as there exists C such that LP S2S ( W ) = (5 + | W | ) 0 .",
"Our work extends the closest previous works (Wang and Cho, 2019; Shin et al., 2019) with regards to experiments and tasks, as outlined in Section 2.1.",
"Furthermore, neither work considers the inference cost of masked rescoring, which we address with our maskless scoring approach, or analyze PLL's numerical properties.",
"Future context.",
"Log probabilities conditioned on past and future context have been used in MT (Finch and Sumita, 2009; Xiong et al., 2011) and perennially in ASR (Shi et al., 2013; Arisoy et al., 2015; Chen et al., 2017) to positive effect.",
"However, these are not deep bidirectional as they model interactions between W <t and W >t via the for-ward and backward context vectors, while MLMs model all pairwise interactions w s and w s (cid:48) via dot-product attention (compare ELMo versus BERT).",
"Their PLLs would have different properties from ours (e.g., their cross-entropies in Figure 4 may be convex instead of flat).",
"Discriminative language modeling.",
"Previous works (Roark et al., 2004; Huang et al., 2018) have explored training language models that directly optimize for a downstream metric (WER, BLEU).",
"While we also eschew using log probabilities from conventional LMs, our approach remains generative.",
"Log probabilities model the joint distribution; PLL does so as well, albeit implicitly (Appendix B).",
"PLL's summands (conditional probabilities) remain accessible for Gibbs sampling and are not tailored to any metric.",
"The two approaches are complementary; for example, one could use PLL as a prior or regularizer for scores given by discriminatively-finetuned BERT models in tasks like passage re-ranking (Nogueira and Cho, 2019).",
"Language model integration.",
"Beyond finetuning pretrained LMs and MLMs, monolingual pretraining has also improved NMT performance (Ra-machandran et al., 2017; Conneau and Lample, 2019).",
"However, modular integration of language representation models remains prevalent for various pragmatic reasons, similar to fusion in ASR.",
"Contemporary examples are the use of finetuned BERT scores in a question-answering pipeline (Nogueira and Cho, 2019), or as-is cosine similarity scores from BERT to evaluate generated text (Zhang et al., 2020).",
"For example, one might have no pretrained multilingual LMs for decoder initialization or fusion, as such models are difficult to train (Ragni et al., 2016).",
"However, one may have an M-BERT or XLM for the target lan-guage/domain.",
"Finally, N -best rescoring and pretraining are not mutually exclusive, though pretraining may already go partway to improve fluency.",
"We studied scoring with MLM pseudo-log-likelihood scores in a variety of settings.",
"We showed the effectiveness of N -best rescoring with PLLs from pretrained MLMs in modern sequence-to-sequence models, for both ASR and lowto medium-resource NMT.",
"We found rescoring with PLLs can match or outperform comparable scores from large unidirectional language models (GPT-2).",
"We attributed this to PLL's promotion of fluency via self-consistency, as demonstrated by improvement on unsupervised acceptability judgements and by qualitative analysis.",
"We examined the numerical properties of PLLs, proposed maskless scoring for speed, and proposed pseudo-perplexities for intrinsic evaluation of MLMs, releasing a codebase implementing our work.",
"Future work could find additional modular uses of MLMs, simplify maskless PLL computations, and use PLLs to devise better sentenceor document-level scoring metrics.",
"We thank Phillip Keung and Chris Varano for their thoughtful suggestions on this work."
] | [
"abstain",
"method",
"result",
"abstain",
"result",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"result",
"abstain",
"objective",
"abstain",
"result",
"other",
"result",
"objective",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"objective",
"other",
"other",
"objective",
"objective",
"other",
"other",
"other",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"other",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"result",
"result",
"objective",
"objective",
"abstain",
"other"
] |
[
"The success of language models based on the Transformer architecture appears to be inconsistent with observed anisotropic properties of representations learned by such models.",
"We resolve this by showing, contrary to previous studies, that the representations do not occupy a narrow cone, but rather drift in common directions.",
"At any training step, all of the embeddings except for the ground-truth target embedding are updated with gradient in the same direction.",
"Compounded over the training set, the embeddings drift and share common components, manifested in their shape in all the models we have empirically tested.",
"Our experiments show that isotropy can be restored using a simple transformation.",
"1 1 Introduction Word embeddings, both static (Mikolov et al., 2013a; Pennington et al., 2014) and contextualized (Peters et al., 2018), have been instrumental to the progress made in Natural Language Processing over the past decade (Turian et al., 2010; Wu et al., 2016; Liu et al., 2018; Peters et al., 2018; Devlin et al., 2019).",
"In recent years, language models based on Transformer architecture (Vaswani et al., 2017) have led to state-of-the-art performance on problems such as machine translation (Vaswani et al., 2017), question answering (Devlin et al., 2019; Liu et al., 2019b), and Word Sense Disambiguation (Bevilacqua and Navigli, 2020), among others.",
"However, it has been observed that representations from Transformers exhibit undesirable properties, such as anisotropy, that is tend to occupy only a small subspace of the embedding space.",
"The observation has been documented by a number of studies (Gao et al., 2019; Ethayarajh, 2019; Wang et al., 2020).",
"A similar property has been iden-tified in the past in static word embeddings (Mu 1 The code and datasets used in this paper are available at https://github.com/danielbis/tooMuchInCommon. and Viswanath, 2018).",
"To address the issues, postprocessing methods (Mu and Viswanath, 2018), and regularization terms have been proposed (Gao et al., 2019; Wang et al., 2019c, 2020).",
"However, the mechanism that leads to undesirable properties remains unclear.",
"Without understanding the mechanism, it is going to be difficult to address the fundamental issue properly.",
"The deficiencies are most pronounced in the representations of rare words, as we will show in Section 4.",
"Performance of pretrained language models is inconsistent and tends to decrease when input contains rare words (Schick and Schtze, 2020b,a).",
"Schick and Schtze (2020a) observe that replacing a portion of words in the MNLI (Williams et al., 2018) entailment data set with less frequent synonyms leads to decrease in performance of BERT-base and RoBERTa-large by 30% and 21 .",
"8% respectively.",
"2 After enriching rare words with surface-form features and additional context, Schick and Schtze (2020a) decrease the performance gap to 20 .",
"7% for BERT and 17% for RoBERTa, but the gap remains large nonetheless.",
"Why do even the large-scale, pretrained language models struggle to learn good representations of rare words?",
"Consider a language model with an embedding matrix shared between the input and output layers, a standard setup known as weight tying trick (Inan et al., 2017).",
"Intuitively, at any training step t , optimization of the cross-entropy loss can be characterized as pulling\" the target embedding, w T , closer to the model's output vector h t , while pushing\" all other embeddings, W \\ w T , in the same direction, away from the output vector h t .",
"This leads to what we call common enemies effect the effect of the target words producing gradients of the same direction for all of the nontarget words.",
"Compounded over the training set, the embeddings drift and share common components, manifested in their shape in all the models 2 Based on the results reported by authors.",
"we have empirically tested; see Figure 1.",
"Although Gao et al. (2019) report a closely related phenomenon and call it representation degeneration , their analysis is based on an assumption that the embedding matrix is learned after all other parameters of the model are well-optimized and fixed, which is not the case in practice.",
"We conduct our analysis in a more realistic setting, and arrive at different conclusions.",
"We show that embeddings do not occupy a narrow cone, but are shifted in one common direction and only appear as a cone when projected to a lower dimensional space (Section 4.1).",
"In fact simply removing the mean vector of all embeddings, thus centering them, shifts the embeddings back onto a more spherical shape.",
"We evaluate embeddings, before and after centering, on four standard benchmarks and observe significant performance improvement across all of them.",
"Why is removing the mean so effective?",
"We find that the common enemies effect applies to most, if not all, words in the vocabulary but in non-uniform manner.",
"As language is known to follow an approximately Zipfian distribution (Zipf, 1949; Manning and Schtze, 2001; Piantadosi, 2014) even common words will not occur frequently in a text corpus, and in result will be often pushed\" by other target words in the same direction as rare words. Consequently, all embeddings share a significant common direction. We will focus on the analysis of auto-regressive GPT-2 (Radford et al., 2019) and two masked language models, BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019b). Our contributions can be summarized as follows: We show that as word embeddings repeatedly share same direction gradients, they are shifted in one dominant direction in the vector space. The effects are the most evident in representations of rare words, but are also present in representations of frequent words. The shift causes the distribution of projected embeddings to appear as a narrow cone; we show that simply removing the mean vector is enough to restore the spherical distribution. We provide empirical evidence of our analyses using state-of-the-art pretrained language models and demonstrate that removing the mean dramatically improves isotropy of the representations. 2 Background 2.1 Distributed Word Representations Distributed representations induce a rich similarity space, in which semantically similar concepts are close in distance (Goodfellow et al., 2016; Bengio et al., 2003; Mikolov et al., 2013c). In a language model, the regularities of embeddings space facilitate generalization, assigning a high probability to a sequence of words that has never been seen before but consists of words that are similar to words forming an already seen sentence (Bengio et al., 2003; Mikolov et al., 2013c). Although models such as BERT or GPT-2 produce representations from a function of the entire input sequence, the representations are a result of a series of transformations applied to the input vectors. Consider an example sentence: The building was dilapidated.\" , and the sentences resulting from replacing dilapidated\" with either ruined\" or reconditioned\" . If the distance in the embeddings space between the two rather infrequent, but antonymous, words dilapi-dated\" and reconditioned\" is not larger than the distance between dilapidated\" and its relatively frequent synonym ruined\" , then by the aforementioned generalization principle there is little to no reason to believe that the distance will become larger in the output layer.",
"3 2.2 Tokenization Do the subword tokenization methods (Schuster and Nakajima, 2012; Wu et al., 2016; Sennrich et al., 2016; Radford et al., 2019) preserve the word frequency imbalance?",
"Examination of the common tokenization methods, such as Byte-Pair Encoding (Sennrich et al., 2016) and WordPiece (Schuster and Nakajima, 2012; Wu et al., 2016), suggests that subword units induced by tokenization algorithms exhibit similar frequency imbalance to that of full vocabulary.",
"This can be explained by the greedy nature of the vocabulary induction process.",
"Although different methods use different base vocabulary symbols to begin with (i.e., Unicode code points, or bytes), all of the methods construct the vocabulary through iterative merging of the most frequent symbols.",
"As a result, the most frequent units are preserved as words, while the rare words are segmented into subword units.",
"Moreover, the words which are segmented into subword units are 3 In fact, all three sentences are assigned a negative sentiment, with scores between 97% to 100% by RoBERTa finetuned on SST.",
"Masked Language Modeling (MLM) pretraining objective is to maximize the likelihood of masked tokens conditioned on the (noisy) input sequence.",
"Given a sequence of tokens w = [ w 1 , ..., w N ] , a corrupted version w is constructed by randomly setting a portion of tokens in w to a special [MASK] symbol.",
"Although MLM estimates the token probabilities of all masked positions, w , simultaneously and renders the factorization from Subsection 3.1 no longer applicable, the mechanism used to un-mask\" a token differs only slightly from that in AR, specifically: max log p ( w | w ) N (cid:88) t =1 m t log p ( w t | w ) (2) = N (cid:88) t =1 m t log exp (cid:16)(cid:10) h ( w ) (cid:62) t , e ( w t (cid:11)(cid:17) (cid:80) V w (cid:48) exp (cid:16)(cid:10) h ( w ) (cid:62) t , e ( w (cid:48) ) (cid:11)(cid:17) = N (cid:88) t =1 m t log softmax (cid:0) h ( w ) W (cid:62) (cid:1) label t , where m t = 1 indicates w t is masked, and h ( w ) t is the output representations computed as function of the full, noisy, input sequence.",
"Note, that the main difference between the equations 1 and 2 is the context used to condition the estimation.",
"Models trained with MLM objective, like BERT and RoBERTa, compute the output vector utilizing bidirectional context through the self-attention mechanism, while the unidirectional models use only the context to the left of the target token.",
"Moreover, only the probabilities of masked words, w i such that w i w, are estimated.",
"Although the two objectives described above differ in terms of the distribution modeled (Yang et al., 2019), both AR and MLM models rely on the softmax function and cross-entropy loss.",
"Using the notation established above, the cross-entropy loss function for an AR model is optimized by minimizing: J ( ) = E w data [log p ( w )] , (3) and for a MLM model it takes a form of: J ( ) = E w data [log p ( w | w )] .",
"The gradient of the cross-entropy loss with respect to the embedding matrix W is a sum of the gradient flowing through two paths: first one is through",
"the output layer where the embeddings are used to create the targets for the softmax, the second path flows through the encoder stack to the input layer.",
"The gradient flowing through the embedding stack to the input layer is complex, and depends on minute details of a model.",
"Although its contribution is not irrelevant, it is not necessary to illustrate the main point of this section.",
"Thus, we focus on the update rule resulting from the gradient with respect to embeddings in the top layer of a model.",
"For prediction of a token w t , let h be the output vector of either AR model (at index t 1 ) or MLM model (at index t ), let y = softmax ( f t ) , where f t = h W (cid:62) , and let y be the true probability distribution, then: J t W = h ( x ) (cid:62) t ( y y ) .",
"where be the learning rate.",
"Since y is equal to 0 for all the indices except for the index of the target word w t , all the embeddings will become less similar to the representation produced by a model with the exception of the target word embedding.",
"This leads to what we define as the common enemies effect target words producing gradients of the same direction for all of the non-target words.",
"As the parameters are updated during the optimization process, the h changes even when the model is provided with the same input.",
"Therefore, the direction of the gradient for the non-target words changes accordingly, but at a particular step the direction of the update is the same for all the nontarget words.",
"This is fundamentally different from the conclusion of Gao et al. (2019), who states that there exists a uniformly negative direction such that its minimization yields a nearly optimal solution for rare words' embeddings.",
"We find that the common enemies effect is the most pronounced in the representations of rare words, which are less likely to appear as targets, but it is evident in all embeddings nonetheless.",
"Previous studies (Gao et al., 2019; Wang et al., 2020) suggest that word embeddings learned by",
"representations. increasing standard embedding MEN Test Collection (Bruni et al., measures the relatedness of words. WordSim353 (Agirre et al., 2009) consists of two parts, one measures similarity, other measures relatedness of words.",
"become more similar, the resulting representations become closer, creating a positive feedback mechanism for the representations to drift collectively.",
"In addition, while isotropy of representations is desirable and has an overall positive impact on performance, the relationships between isotropy and performance in Table 1 and Table 2 suggest that the role of isotropy in model performance needs to be further analyzed.",
"The dynamics of the interactions are being further investigated to pinpoint the root cause and their relationship with the model's performance.",
"Gao et al. (2019) present an insightful derivation of uniformly negative gradients for nonapparent words and formulate the optimization of rare words as an -strongly convex problem but make strong assumptions that the embedding matrix is learned after all other parameters of the model are well-optimized and fixed, which is not the case in practice.",
"We do not make such assumptions, providing a more realistic explanation for the learning process.",
"Wang et al. (2020) propose to reparametrize the embedding matrix using SVD and propose directly controlling the decay rate of singular values.",
"Our paper's purpose is inherently different from that of Wang et al. (2020); we recognize that the fundamental understanding of the problem is missing and provide an explanation for the observations made in previous studies.",
"Another line of work focuses on limitations of the softmax.",
"Yang et al. (2018) suggest that softmax does not have sufficient capacity to model the complexity of language.",
"Zhang et al. (2019) analyze the skip-gram model to show that optimization based on cross-entropy loss and softmax resembles competitive learning in which words compete among each other for the context vector.",
"This idea is closely related to the common enemies effect reported in this paper, however, skip-gram seems to mitigate this through negative sampling (Mikolov et al., 2013b) but similar approaches do not seem to help Transformer pretraining (Clark et al., 2020).",
"A considerable effort has been made to improve performance of language systems on rare words, but the focus has been on either injecting subword information in non-contextual representations (Lu-ong et al., 2013; Lazaridou et al., 2017; Pinter et al., 2017; Bojanowski et al., 2017), replacing rare words' representations through exploiting their context (Khodak et al., 2018; Liu et al., 2019a), or both (Schick and Schtze, 2019, 2020a).",
"In comparison, we strive to provide an explanation of the underlying problem, which is necessary to render such post-hoc fixes no longer necessary.",
"We find that the embeddings learned by GPT-2, BERT, and RoBERTa do not degenerate into a narrow cone, as has been suggested in the past, but instead drift in one shared direction.",
"We recognize that target words produce gradients in the same direction for all the non-target words at each training step.",
"Combined with the unbalanced distribution of word frequencies, any two words' embeddings will be repeatedly updated with gradients of the same direction.",
"As such updates accumulate, the embeddings drift and share common components.",
"Our experiments show that simply centering the embeddings restores a nearly perfectly isotropic distribution of tested models' embeddings and simultaneously improves embeddings' ability to reflect semantic relations.",
"This understanding of the learning process dynamics opens exciting avenues for future work, such as improving the most affected embeddings of rare words and formulation of more computationally efficient training objectives."
] | [
"abstain",
"result",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"method",
"result",
"abstain",
"result",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"other",
"abstain",
"other",
"other",
"other",
"method",
"other",
"method",
"result",
"method",
"abstain",
"abstain",
"result",
"abstain"
] |
[
"Modern summarization models generate highly fluent but often factually unreliable outputs.",
"This motivated a surge of metrics attempting to measure the factuality of automatically generated summaries.",
"Due to the lack of common benchmarks, these metrics cannot be compared.",
"Moreover, all these methods treat factuality as a binary concept and fail to provide deeper insights on the kinds of inconsistencies made by different systems.",
"To address these limitations, we devise a typology of factual errors and use it to collect human annotations of generated summaries from state-of-the-art summarization systems for the CNN/DM and XSum datasets.",
"Through these annotations we identify the proportion of different categories of factual errors in various summarization models and benchmark factuality metrics, showing their correlation with human judgement as well as their specific strengths and weaknesses.",
"1 1 Introduction Factuality is defined as a measure of whether eventualities are characterized as corresponding to facts, possibilities, or situations that do not hold in the world (Sauri, 2008; Saur and Pustejovsky, 2012).",
"In summarization, this world is the article, which is taken as ground-truth, and the output summary must be faithful to the article's facts.",
"Despite advancements in neural abstractive summarization (Narayan et al., 2018; Liu and Lapata, 2019; Lewis et al., 2020), 30% of summaries have factual inconsistencies (Cao et al., 2018).",
"With summarization being an integral component of information consumption, this highlights a need for ensuring summarization systems are factually consistent and developing methods for evaluating them.",
"METEOR (Papineni et al., 2002; Lin, 2004; Lavie and Agarwal, 2007) are insufficient to measure the factual correctness of summaries and fail to correlate with the human judgements of factuality (Falke et al., 2019; Kryscinski et al., 2019).",
"More recent metrics proposed to improve the evaluation of summarization factuality (Kryscinski et al., 2020; Durmus et al., 2020; Wang et al., 2020; Maynez et al., 2020) cannot be compared due to the lack of common benchmarks.",
"More critically, while these approaches differ in the way they model factuality, they all consider factuality as a binary concept, labeling summaries of any length as factual or non-factual.",
"They do not provide any fine-grained understanding of the factual errors made by different systems that could serve as an actionable feedback on a system's limitations.",
"The binary factuality of a text can be difficult to determine.",
"Falke et al. (2019) show relatively low crowdexpert agreement, indicating the presence of subjectivity in the annotation process.",
"Moreover, not all factual errors are equally important and the number of errors can have a significant impact on the perceived factuality of a text.",
"This suggests that non-factuality should be modeled as a multidimensional construct and not a label.",
"In this work, we propose a linguistically motivated typology of factual errors for fine-grained analysis of factuality in summarization systems (2).",
"Our typology is theoretically grounded in frame semantics (Fillmore et al., 1976; Palmer et al., 2005) and linguistic discourse theory (Brown and Yule, 1983).",
"It provides several benefits.",
"First, we find that decomposing the concept of factuality in (relatively) well-defined and grounded categories makes the final binary decision more objective leading to near perfect agreement between crowd and expert annotators ( = 0 . 86 ).",
"Second, this approach provides some measure of the degree of non-factuality both in terms of the quantity and the category of factual violations that appear FRANK Benchmark Data Annotation Per sentence error category annotation Typology of Factual Errors Frame semantics, discourse analysis Summarization System Evaluation Which mistakes?",
"in the text.",
"This typology also provides us with the means to categorize the types of errors made by summarization systems, helping us gain deeper insights than simply categorizing content as factual or hallucinated.",
"We define an annotation protocol of factuality based on our typology and collect a dataset of human judgements over a diverse set of model generated summaries on the CNN/DM (Hermann et al., 2015) and XSum (Narayan et al., 2018) datasets (3).",
"Through this dataset, we aim to both assess the factuality of summarization systems and benchmark recently proposed factuality metrics.",
"In 4 we discuss various state-of-art models and show a detailed analysis of the factual errors they make.",
"Finally, in 5 we evaluate multiple summarization metrics against our benchmark and show their strengths and weaknesses in detecting specific types of factual errors.",
"Figure 1 shows an overview of this work.",
"Previous studies of factuality in summarization only distinguish factual and hallucinated content (Kryscinski et al., 2019; Maynez et al., 2020) and provide limited insights on the fine-grained types of factual errors.",
"In the simplest case, factual errors appear within a single proposition.",
"However, as summaries include several sentences, discourse markers describe relations across propositions.",
"These cross-sentence links, such as causality or temporal ordering, can introduce inconsistencies with the article.",
"Furthermore, information in the summary should be verifiable given the article.",
"This understanding outlines different levels of linguistic structure where factual mistakes can arise in summaries: at the semantic frame level, at the discourse level, or because the content cannot be verified.",
"Below we define a typology of factual errors further detailing these three levels.",
"This typology is theoretically grounded in frame semantics (Fillmore et al., 1976; Baker et al., 1998; Palmer et al., 2005) and linguistic discourse analysis (Brown and Yule, 1983).",
"Examples for each category are shown in Table",
"1. 2.1 Semantic Frame Errors A semantic frame is a schematic representation of an event, relation, or state, which consists of a predicate and a list of participants, called frame elements (Baker et al., 1998).",
"A semantic frame has both core and non-core frame elements (FE).",
"Core frame elements are essential to the meaning of the frame, while non-core (e.g. location, time) provide additional descriptive information.",
"Our first three categories capture factual errors in each of these components (frame, core and non-core FE) respectively.",
"Predicate Error (PredE): Category PredE encompasses errors where the predicate in a summary statement is inconsistent with the source text.",
"More generally, this represents cases where the frame from a summary statement does not align with what is expressed in the source text.",
"Entity Error (EntE): Category EntE captures errors where the primary arguments (like entities) of the predicate are wrong or have the wrong attributes, although the relation was expressed in the original text.",
"More generally, these account for cases where the core frame elements in a frame are wrong.",
"This also captures directionality errors where the elements are interchanged (similar to agent-patient swap).",
"Circumstance Error (CircE): In additional to the core arguments, predicates can be further speci-fied using additional information or attributes that describe the circumstance in which the arguments Category Description Example PredE Relation Error The predicate in the summary statement is inconsistent with the source article.",
"and predicates interact (e.g. location, time, manner, direction, modality).",
"Category CircE captures errors where one or more such attributes (non-core frame elements within a frame) are wrong.",
"The communicative intent of an author is also expressed through relations that hold between parts of the text.",
"Factual errors in summarized text can often extend beyond a single semantic frame introducing erroneous links between discourse segments.",
"Below we outline such categories of errors which are grounded in discourse analysis and rhetorical structure theory (RST) (Brown and Yule, 1983; Mann and Thompson, 1988).",
"RST is an elaborate system for annotating coherence relations in discourse.",
"Some examples of such relations include: Elaboration, Background, Motivation, and Volitional Cause.",
"Here we depart from semantic frame terminology as its rooting in a single frame does not allow us to represent such errors.",
"Coreference Error (CorefE): Category CorefE accounts for errors where pronouns and other types of references to previously mentioned entities either are incorrect or have no clear antecedents, making them ambiguous.",
"Discourse Link Error (LinkE): Category LinkE encompasses errors involving a discourse link between different statements.",
"These include errors of incorrect temporal ordering or incorrect discourse links (e.g. RST relations, discourse connectors) between statements.",
"Often statements in a summary cannot be verified against the source text due to difficulty in aligning them to the source.",
"Below we outline two categories of errors for such cases.",
"Out of Article Error (OutE): Since summaries of a document should only contain information that can be deduced from the original text, we include a category for such errors OutE (prior work refers to this as extrinsic hallucinations (Maynez et al., 2020)).",
"Grammatical Error (GramE): We use GramE to categorize statements that are not well formed.",
"When grammatical mistakes make the meaning of a statement incomprehensible or ambiguous, it cannot be verified against the source and is thus considered trivially wrong.",
"Minor grammatical errors are acceptable.",
"Finally, for completeness in our annotation exercise, we add two additional categories Others (OthE) for factually errors that do not correspond to any of the above categories and Not an Error (NE) for statements that do not contain any errors.",
"Beyond theoretical grounding, we empirically verify our typology through large scale human annotations",
"annotations of five abstractive summarization models on the CNN/DM dataset and four on the XSum dataset.",
"Through our dataset, we aim to have a broad coverage of different types of errors made by neural summarization systems, with human judgements on their fine-grained factuality errors.",
"Annotation Data For the annotation, we include model summaries from CNN/DM and XSum datasets as they present different characteristics.",
"CNN/DM summaries are longer, with three sentences on average, while XSum has only single sentence summaries.",
"Having longer summaries is crucial to identify discourse level errors.",
"On the other hand, XSum summaries are more abstractive and include more factual errors on average (Maynez et al., 2020).",
"For a diverse set of model summaries, we collect publicly available model outputs from different summarization models with differing factuality capabilities.",
"For the CNN/DM dataset, we use model outputs from a LSTM Seq-to-Seq model (S2S) (Rush et al., 2015), a Pointer-Generator Network (PGN) model (See et al., 2017), a Bottom-Up Summarization (BUS) model (Gehrmann et al., 2018), a Bert based Extractive-Abstractive model (BertSum) (Liu and Lapata, 2019) and a jointly pretrained transformer based encoder-decoder model BART (Lewis et al., 2020).",
"For the XSum dataset, we collect model outputs from a Topic-Aware CNN Model (Narayan et al., 2018), a Pointer-Generator Network (PGN) model, a randomly initialized (TransS2S) (Vaswani et al., 2017) and one initialized with Bert-Base (BertS2S) (Devlin et al., 2019).",
"2 Details of the models used are provided in A.1.",
"Annotation Collection Using the above model generated summaries, we collect human annotations from three independent annotators for 250 articles from each dataset (with a total of 1250 model outputs on CNN/DM and 1000 on XSum).",
"We annotate each sentence of a summary to break the judgement of factuality into smaller units.",
"We present sentences in the context of the entire summary to identify discourse errors spanning multiple sentences.",
"Annotations are a two step process: for each sentence in the summary, the annotator first selects whether the sentence is factual, and if marked not factual, identifies the category of each 2 As we use publicly available model outputs, the summaries across different datasets are from different models owing to their availability.",
"error based on our typology.",
"3 A sentence can be annotated with more than one category of errors to account for multiple errors within a sentence.",
"We conduct the annotation task on the Amazon Mechanical Turk (MTurk) platform.",
"To achieve high quality crowd-sourced annotations, we build an intuitive interface 4 which combines:",
"1. Clear Instructions: We explain the annotation scheme without assuming linguistic knowledge and give several examples for each category.",
"2. Training and Evaluation: We setup training tutorials for first time users to train and provide feedback on the task.",
"We also setup a qualification test which tests their understanding of our annotation scheme and require annotators to obtain >85% score to qualify.",
"Further, we continuously evaluate annotators during the task against artificially generated factual errors to ensure continued high quality.",
"3. Fair Pay and Bonus: All workers are paid 50% more than the average American minimum wage.",
"We offer bonuses for scores of 60% or above on the continuous evaluation, and for completing sets of 10 annotations.",
"Further details on our interface are added in A.6",
"Inter-Annotator Agreement: We report inter-annotator agreement in terms of Fleiss Kappa (Fleiss, 1971).",
"Following Durmus et al. (2020), we report the percentage p of annotators that agree with the majority class.",
"Each datapoint in our dataset corresponds to a sentence in a summary.",
"We compute agreement on all 4942 annotated sentences.",
"On the annotation of whether a sentence is factual or not we obtain = 0 .",
"58 , with p = 91% of annotators agreeing with the majority class.",
"As a comparison, Durmus et al. (2020) reports p = 76 .",
"7% average agreement.",
"When all three annotators agree that a sentence is not factual, we obtain = 0 .",
"39 with p = 73 .",
"9% of annotators agreeing with the majority class on the eight category annotation (seven categories of errors and other) which indicate a moderate agreement.",
"3 We experimented with Likert scale evaluation of full summaries in a pilot study.",
"Such an annotation would not provide precise information about where in the summary an error appears and also resulted in lower agreement.",
"Hence, we opted for sentence level judgements.",
"4 We make the interface available for future human annotations that follow our typology 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 BART BERTSum BUS PGN S2S CNN/DM 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 BERTS2S PGN TConvS2S TranS2S XSum PredE: Predicate EntE: Entity CircE: Circumstance CorefE: Coreference: LinkE: Connector OutE: Not in article GramE: Grammar OthE: Other Figure 2: Proportion of summaries with factual errors based on collected annotations, with breakdown of the categories of errors within.",
"We find a Kohen Kappa of = 0 .",
"86 indicating nearly perfect agreement.",
"Previous work found agreement of = 0 .",
"65 between three crowd annotators and expert annotations of factuality (Falke et al., 2019).",
"Even with more than nine workers, they report agreement with expert annotations of at most = 0 .",
"74 .",
"This improvement validates the robustness of our annotation interface and protocol which achieves higher agreement with fewer workers.",
"We evaluate the performance of different summarization models in terms of factuality.",
"Figure 2 visualizes the percentage of summaries with factual errors for each category model and dataset, with a breakdown of proportion of different error types within each.",
"A summary is considered incorrect if it contains at least one sentence with a factual error.",
"A sentence contains a factual error if the majority of annotators indicate the presence of an error (here we do not consider annotations where all three annotators disagree on the category).",
"How factual are generated summaries across different datasets?",
"From our annotations, we observe that 60% of the summaries that were annotated contain at least one factual error.",
"From Figure 2, we see that the XSum dataset has more factually incorrect model summaries (92%) than CNN/DM (43%).",
"It poses more significant challenges in terms of factuality as all models produce > 80% summaries with factual errors, with the best model (BertS2S) producing 83% wrong summaries.",
"On the CNN/DM dataset, while state-of-the-art pretrained models like BERTSum and BART have better factuality numbers, the percentage of factually incorrect summaries is still high (23% for BERTSum and 27% for BART).",
"The proportion of errors across different categories vary widely between the two datasets.",
"For the CNN/DM dataset, the most frequent classes of errors are Entity Error (EntE) and Coreference Error (CorefE).",
"For the XSum dataset they are Out of Article Error (OutE) and Entity Error (EntE).",
"Note that there are no discourse errors (CorefE, LinkE) in the XSum dataset because the data only contains single sentence summaries.",
"Additionally, we observe that OthE makes up a very small percentage ( 1%) of errors overall showing that our typology is complete with most errors being mapped to one of our existing categories.",
"How factual are generated summaries across different models?",
"From Figure 2, we observe that LSTM based models like S2S and BUS generate many incorrect summaries.",
"Interestingly, PGN on CNN/DM has fewer summaries with factual errors (26%) compared to S2S (74%) and BUS (62%) potentially due to the extractive nature of CNN/DM and the copy based objective in PGN.",
"PGN has been previously shown to produce highly extractive summaries on CNN/DM copying large portions of text (often entire sentences) (Gehrmann et al., 2018; Balachandran et al., 2021).",
"On the more abstractive dataset XSum, PGN produces > 96% factually incorrect summaries.",
"We also observe that large-scale pretrained models improve factuality on both datasets, as also noted by Durmus et al. (2020), with more significant gains on CNN/DM.",
"On CNN/DM, BERTSum and BART display half the error rate of BUS.",
"In contrast, on XSum, BertS2S improves over non-pretrained models by 10% only, showing that XSum poses a significant challenge for factuality even in pretrained models.",
"models have higher proportion of Grammatical Errors (GramE) while transformer and CNN based models have a lower proportion.",
"For pretrained transformer models, we observe that the improved error-rate on the CNN/DM dataset can be attributed to improvements at the frame level (PredE, EntE, CircE) while the discourse level errors still remain a challenge.",
"Errors CorefE, LinkE account for a higher proportion of errors in BERTSum and BART compared to the other models.",
"We propose the FRANK dataset resulting from the human annotation study as a common benchmark to assess different factuality metrics.",
"We provide an evaluation protocol of factuality metrics, which controls for dataset biases, and a fine grained analysis of the strengths of each metric.",
"The FRANK benchmark provides a diverse dataset for evaluating various metrics on their ability to capture factual errors.",
"Notably, our benchmark has factual error diversity , as it covers all types of errors described in the typology in 2, and data diversity as it combines 2250 summaries from different systems and datasets.",
"Our annotations go beyond binary labels of factuality on a summary by providing fine-grained category annotations for every sentence.",
"This allows us to determine how well each metric can capture each type of error.",
"Furthermore, through averaging of sentence level judgements, we can also obtain a factuality scores (0 to 1 range) for a summary.",
"To measure the degree that automated metrics capture a certain characteristic, we compute their correlation with human judgements and report Pearson correlation and Spearman rank correlation along with their p-values.",
"We evaluate different classes of metrics against the FRANK benchmark.",
"We select four general summarization metrics.",
"ROUGE, BLEU, and Meteor are n-gram based metrics and computed with respect to the reference summary.",
"BERTScore (Zhang et al., 2020) computes BERT (Devlin et al., 2019) contextual embeddings on summary and source article and measures distances between matched embeddings.",
"We select five metrics focused on factuality.",
"As Goodrich et al. (2019), we use a simple OpenIE (Banko et al., 2007) baseline.",
"This involves extracting OpenIE triples and matching them through sentence embeddings (Reimers 0.0 0.5 1.0 0.1 0.2 0.3 0.4 FEQA 0.0 0.5 1.0 0.2 0.4 0.6 0.8 FactCC BART BERTSum BUS PGN S2S BERTS2S-XS PGN-XS TConvS2S-XS TranS2S-XS All data 0.0 0.2 0.4 0.6 0.8 1.0 Human judgement of factuality 0.0 0.2 0.4 0.6 0.8 1.0 M e t r i c s c o r e Figure 3: Correlation between metrics and human judgement on subsets of data. The x and y axis represent the human judgement the metric scores respectively. The red line is a linear regression fitted on full data. Each dotted line is a linear regression fitted on a model-dataset subset. Each colored point has coordinates equal to average factuality judgement, and metric score for its corresponding partition. and Gurevych, 2019).",
"FactCC (Kryscinski et al., 2020) and DAE (Goyal and Durrett, 2020) are entailment based metrics.",
"FactCC operates with sentences as claims, while DAE uses dependency level entailment.",
"FEQA (Durmus et al., 2020) and QAGS (Wang et al., 2020) are two question answering and generation metrics (QGA).",
"More details on the differences between these metrics is in A.2.",
"Since our benchmark contains diverse summaries from different datasets and models, dataset biases can hamper accurate reporting.",
"In Figure 3, we visually show correlations between two factuality metrics (FEQA and FactCC) and human judgement on the entire data and on partitions of the data.",
"For both metrics, we notice that the slope (an unscaled measure of correlation) of the line fitted through the entire data (red line) is significantly larger.",
"In FEQA, the dotted lines (fitted on subsets of the data of each model and dataset) are almost horizontal.",
"This likely indicates the presence of a confounding variable associated with the properties of each system and dataset.",
"This can lead to false measures of high correlation if not accounted for.",
"To address this, we suggest to control for confounding variables using partial correlations.",
"We include details on partial correlations in the Appendix.",
"In this case, both the system and the dataset are taken to be confounding variables.",
"In Table 2, we report the partial Pearson correlation and Spearman rank correlation coefficients with human judgements for each metric, along with their",
"How do different metrics correlate with human judgements?",
"From Table 2 we observe that all metrics exhibit low correlations with human judgements of factuality.",
"The best metric overall is FactCC with 0.20 Pearson and 0.30 Spearman correlation.",
"Interestingly, we observe that general summarization metrics BLEU, Rouge, and METEOR, and the OpenIE baseline have statistically significant correlations with factuality, close to FactCC ( = 0 . 14 for Rouge-1 and METEOR versus = 0 . 20 for FactCC).",
"The entailment metrics (FactCC and DAE) have the two highest correlations and are statistically significant.",
"The two QGA metrics have lower overall correlation.",
"FEQA's correlation is not statistically significant.",
"QAGS has low, but significant correlation of = 0 .",
"06 .",
"different datasets?",
"In Figure 4, we observe that entailment metrics have significantly higher partial Pearson correlation on the CNN/DM dataset than XSum where their correlation is reduced by a factor of four.",
"QAGS and the OpenIE baseline have similar behavior.",
"This suggests that these metrics capture the error types from CNN/DM better that those from XSum.",
"Specifically, XSum has uniquely high Out of Article (OutE) errors which they might not capture well.",
"This also highlights the importance of data diversity in building and benchmarking factuality metrics to avoid overfit-ting to certain types of errors.",
"How well do different metrics capture errors from pretrained and non-pretrained models?",
"On the CNN/DM dataset we observe that entailment metrics and QAGS perform significantly better on non-pretrained models.",
"This indicates that the artificial factual errors on which entailment metrics are trained on are closest to the mistakes that non-pretrained models make.",
"This also suggests that the errors made by pretrained models might be more difficult to capture by these metrics.",
"These trends are less clear on the XSum dataset which we again attribute to high Out of Article (OutE) errors in the pretrained and non-pretrained models (ref Figure 2) 5.4 Error Analysis Figure 4 shows partial Pearson correlation on six subsets of the data.",
"To understand capabilities of metrics across the broad categories of errors (se-mantic frame errors, discourse errors, and content verifiability errors) we perform an ablation study.",
"For each category, we compute the variation in partial correlation with errors from that category omitted.",
"In Figure 5, we visualize the influence of a given type of error using the variation for each metric and category.",
"A higher positive bar indicates that the error type was a significant contributer to the overall correlation (or metric highly correlates with error) causing the correlation without it to 0.0 0.2 0.4 CNN/DM no pretr.",
"General Summarization metrics Unsurprisingly, we observe that Rouge L is best correlated with content verifiability errors (which contains Out of Article Errors) as n-gram matches detect them.",
"Rouge L has negative correlation with semantic frame errors and low correlation with discourse level errors indicating that n-gram matching fails to capture them.",
"We observe that OpenIE is more correlated with semantic frame errors.",
"The metric matches entities and verifies the predicate that relates them and hence is able to capture semantic frame errors.",
"BertScore has low correlation overall, being more correlated with content verifiability errors and negatively correlated with discourse errors.",
"QGA metrics Both QGA metrics have negative correlation with discourse errors suggesting that QGA metrics are not able to capture coreference errors or discourse link errors potentially due to the entity oriented questions in their training data.",
"FEQA additionally is also negatively correlated with semantic frame errors and has low positive correlation with content verifiability errors.",
"In contrast QAGS is best correlated with semantic frame errors.",
"correlation of all metrics with discourse errors suggesting that entailment at the dependency level can help model discourse errors (CorefE and LinkE).",
"FactCC is nearly uncorrelated in this category, indicating that artificially generated factual errors need to go beyond simple pronoun swaps to train models to capture discourse errors.",
"FactCC had best overall partial correlation which can be attributed to FactCC being able to capture semantic frame and content verifiability errors well.",
"Kryscinski et al. (2019) and Fabbri et al. (2020) find that standard n-gram based metrics have low correlation with human judgements of factuality.",
"Motivated by this, several automated metrics falling in two paradigms were proposed to improve the evaluation of factuality.",
"Entailment Classification Goodrich et al. (2019); Kryscinski et al. (2020); Maynez et al. (2020); Goyal and Durrett (2020) model factuality as entailment classification breaking down the summary into smaller units, such as sentences, which are verified against the original article.",
"However, modeling factuality as a classification task requires supervision on factual and hallucinated data.",
"FactCC (Kryscinski et al., 2020) is trained on the CNN/DM dataset augmented with four types of artificial mistakes as supervision.",
"Question Generation and Answering (QGA) FEQA (Durmus et al., 2020) and QAGS (Wang et al., 2020) are two metrics which reduce factuality evaluation to question generation and answering.",
"These methods use a question generation model to obtain questions from the output summary and a question answering model to answer them, separately using the article and the output summary.",
"(2020) have collected annotations on the CNN/DM and XSum dataset respectively.",
"In this work we cover both datasets to ensure greater data diversity.",
"Other efforts (Kryscinski et al., 2020; Wang et al., 2020; Durmus et al., 2020) were smaller in scale Durmus et al. (2020) and Kryscinski et al. (2020) annotated 200 and 503 sentences while Wang et al. (2020) annotated 470 summaries (we collect judgements on 2250 summaries).",
"Crucially, all previous efforts portray factuality as a binary label without variations in degree or type of factual errors.",
"In this work we provide a linguistically grounded typology of factual errors which we use to collect FRANK, a dataset of human annotations of 2250 summaries covering both CNN/DM and XSum datasets.",
"We use FRANK to assess the factuality of summarization systems and benchmark recently proposed factuality metrics highlighting the types of errors they can capture.",
"With the FRANK benchmark we have started moving away from a summary-level binary understanding of factuality.",
"We have collected crowd annotations using the Amazon Mechanical Turk platform.",
"Workers were paid 50% more than the average American minimum wage and offered additional bonuses as an incentive to maintain high quality work.",
"No information about the workers will be released and worker IDs will be anonymized.",
"The authors are grateful to the anonymous reviewers for their feedback, and to Anjalie Field, Rishabh Joshi, Alissa Ostapenko, Dheeraj Ra-jagopal, Evangelia Spiliopoulou, Shuly Wintner, and the members of the Tsvetshop group for their invaluable feedback and support in various stages of the project.",
"This material is based upon work supported by the DARPA CMO under Contract No.",
"HR001120C0124, and in part by the National Science Foundation under Grants No.",
"IIS2040926 and No.",
"IIS2007960.",
"Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily state or reflect those of the United States Government or any agency thereof."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"method",
"objective",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"Natural language inference (NLI) is the task of determining whether a piece of text is entailed, contradicted by or unrelated to another piece of text.",
"In this paper, we investigate how to tease systematic inferences (i.e., items for which people agree on the NLI label) apart from disagreement items (i.e., items which lead to different annotations), which most prior work has overlooked.",
"To distinguish systematic inferences from disagreement items, we propose Artificial Annotators (AAs) to simulate the uncertainty in the annotation process by capturing the modes in annotations.",
"Results on the CommitmentBank, a corpus of naturally occurring discourses in English, confirm that our approach performs statistically significantly better than all baselines.",
"We further show that AAs learn linguistic patterns and context-dependent reasoning.",
"Learning to effectively understand unstructured text is integral to Natural Language Understanding (NLU), covering a wide range of tasks such as question answering, semantic textual similarity and sentiment analysis.",
"Natural language inference (NLI), an increasingly important benchmark task for NLU research, is the task of determining whether a piece of text is entailed, contradicted by or unrelated to another piece of text (i.a., Dagan et al., 2005; MacCartney and Manning, 2009).",
"Pavlick and Kwiatkowski (2019) observed inherent disagreements among annotators in several NLI datasets, which cannot be smoothed out by hiring more people.",
"They pointed out that to achieve robust NLU, we need to be able to tease apart systematic inferences (i.e., items for which most people agree on the annotations) from items inherently leading to disagreement.",
"The last example in Table 1, from the CommitmentBank (de Marneffe et al., 2019), is a typical disagreement item: some annotators consider it to be an entailment (3 or 2), 1 Premise: Some of them, like for instance the farm in Connecticut, are quite small.",
"If I like a place I buy it.",
"I guess you could say it's a hobby.",
"Hypothesis: buying places is a hobby.",
"Entailment (Entailment) [3, 3, 2, 2, 2, 2, 1, 1] 2 Premise: I hope you are settling down and the cat is well.",
"This was a lie.",
"She did not hope the cat was well.",
"Hypothesis: the cat was well.",
"Neutral (Neutral) [0, 0, 0, 0, 0, 0, 0, 0, -3] 3 Premise: All right, so it wasn't the bottle by the bed. What was it, then?",
"Cobalt shook his head which might have meant he didn't know or might have been admonishment for Oliver who was still holding the bottle of wine.",
"Hypothesis: Cobalt didn't know.",
"Neutral (Disagreement) [1, 0, 0, 0, 0, 0, 0, -2] 4 Premise: A: No, it doesn't.",
"B: And, of course, your court system when you get into the appeals, I don't believe criminal is in a court by itself.",
"Hypothesis: criminal is in a court by itself.",
"Contradiction (Contradiction) [-1, -1, -2, -2, -2, -2, -2, -3] 5 Premise: A: The last one I saw was Dances With The Wolves.",
"B: Yeah, we talked about that one too.",
"And he said he didn't think it should have gotten all those awards.",
"Hypothesis: Dances with the Wolves should have gotten all those awards.",
"Contradiction (Disagreement) [0, 0, -1, -1, -2, -2, -2, -3] 6 Premise: Meg realized she'd been a complete fool.",
"She could have said it differently.",
"If she'd said Carolyn had borrowed a book from Clare and wanted to return it they 'd have given her the address.",
"Hypothesis: Carolyn had borrowed a book from Clare.",
"Disagreement (Disagreement) [3, 3, 3, 2, 0, -3, -3, -3] Table 1: Examples from CommitmentBank, with finer-grained NLI labels.",
"while others view it as a contradiction (-3).",
"A common practice to generate an inference label from annotations is to take the average (i.a., Pavlick and Callison-Burch, 2016).",
"In this case, the average of the annotations is 0.25 and the gold label for this item would thus be Neutral, but such label is not accurately capturing the annotation distribution.",
"Alternatively, some work simply ignores items on which annotators disagree and only studies systematic inference items (Jiang and de Marneffe, 2019a,b; Raffel et al., 2019).",
"Here, we aim at teasing apart systematic inferences from inherent disagreements.",
"In line with what Kenyon-Dean et al. (2018) suggested for sentiment analysis, we propose a finer-grained labeling Entailment Neutral Contradiction Disagreement Total Train 177 57 196 410 840 Dev 23 9 22 66 120 Test 58 19 54 109 240 Total 258 85 272 585 1,200 Table 2: Number of items in each class in train/dev/test.",
"for NLI: teasing disagreement items, labeled Dis-agreement, from systematic inferences, which can be Contradiction, Neutral or Entailment.",
"To this end, we propose Artificial Annotators (AAs), an ensemble of BERT models (Devlin et al., 2019), which simulate the uncertainty in the annotation process by capturing modes in annotations.",
"That is, we expect to utilize simulated modes of annotations to enhance finer-grained NLI label prediction.",
"Our results, on the CommitmentBank, show that AAs perform statistically significantly better than all baselines (including BERT baselines) by a large margin in terms of both F1 and accuracy.",
"We also show that AAs manage to learn linguistic patterns and context-dependent reasoning.",
"The CommitmentBank (CB) is a corpus of 1,200 naturally occurring discourses originally collected from news articles, fiction and dialogues.",
"Each discourse consists of up to 2 prior context sentences and 1 target sentence with a clause-embedding predicate under 4 embedding environments (nega-tion, modal, question or antecedent of condi-tional).",
"Annotators judged the extent to which the speaker/author of the sentences is committed to the truth of the content of the embedded clause (CC), responding on a Likert scale from +3 to -3, labeled at 3 points (+3/speaker is certain the CC is true, 0/speaker is not certain whether the CC is true or false, -3/speaker is certain the CC is false).",
"Following Jiang and de Marneffe (2019b), we recast CB by taking the context and target as the premise and the embedded clause in the target as the hypothesis.",
"Common NLI benchmark datasets are SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018), but these datasets have only one annotation per item in the training set.",
"CB has at least 8 annotations per item, which permits to identify items on which annotators disagree.",
"Jiang and de Marneffe (2019b) discarded items if less than 80% of the annotations are within one of the following three ranges: [1,3] Entailment, 0 Neutral, [-3,-1] Contradiction.",
"The gold label for example Entailment-biased Contradiction-biased Neutral-biased MLPPREMISE [SEP] HYPOTHESIS Figure 1: Artificial Annotators setup.",
"3 in Table 1 would thus be Disagreement.",
"However, this seems a bit too stringent, given that 70% of the annotators all agree on the 0 label and there is only one annotation towards the extreme.",
"Likewise, for example 5, most annotators chose a negative score and the item might therefore be better labeled as Contradiction rather than Disagreement.",
"To decide on the finer-grained NLI labels , we therefore also took variance and mean into account, as follows: 1 Entailment: 80% of annotations fall in the range [1,3] OR the annotation variance 1 and the annotation mean",
"> 1. Neutral: 80% of annotations is 0 OR the annotation variance 1 and the absolute mean of annotations is bound within 0.5.",
"Contradiction: 80% of annotations fall in the range [-3, -1] OR the annotation variance 1 and the annotation mean < -1. Disagreement: Items which do not fall in any of the three categories above.",
"We randomly split CB into train/dev/test sets in a 7:1:2 ratio.",
"2 Table 2 gives splits' basic statistics.",
"We aim at finding an effective way to tease items leading to systematic inferences apart from items leading to disagreement.",
"As pointed out by Calma and Sick (2017), annotated labels are subject to uncertainty.",
"Annotations are indeed influenced by several factors: workers' past experience and concentration level, cognition complexities of items, etc.",
"They proposed to simulate the annotation process in an active learning paradigm to make use of the annotations that contribute to uncertainty.",
"Likewise, for NLI, Gantt et al. (2020) observed that directly training on raw annotations using annotator 1 Compared with the labeling scheme in Jiang and de Marneffe (2019b), our labeling scheme results in 59 fewer Disagreement items, 48 of which are labeled as Neutral.",
"2 We don't follow the SuperGLUE splits (Wang et al., 2019) as they do not include disagreement items.",
"The data splits and codes are available at https://github.com/ FrederickXZhang/FgNLI .",
"identifier improves performance.",
"Essentially, Gantt et al. (2020) used a mixed-effect model to learn a mapping from an item and the associated annotator identifier to a NLI label.",
"However, annotator iden-tifiers are not always accessible, especially in many datasets that have been there for a while.",
"Thus, we decide to simulate the annotation process instead of learning from real identifiers.",
"As shown by Pavlick and Kwiatkowski (2019), if annotations of an item follow unimodal distributions, then it is suitable to use aggregation (i.e., take an average) to obtain a inference label; but such an aggregation is not appropriate when annotations follow multi-modal distributions.",
"Without loss of generality, we assume that items are associated with n-modal distributions, where n 1. Usually, systematic inference items are tied to unimodal annotations while disagreement items are tied to multi-modal annotations.",
"We, thus, introduce the notion of Artificial Annotators (AAs), where each individual annotator learns to model one mode.",
"AAs is an ensemble of n BERT models (Devlin et al., 2019) with a primary goal of finer-grained NLI label prediction.",
"n is determined to be 3 as there are up to 3 relationships between premise and hypothesis, excluding the disagreement class.",
"Within AAs, each BERT is trained for an auxiliary systematic inference task which is to predict entail-ment/neutral/contradiction based on a respective subset of annotations.",
"The subsets of annotations for the three BERT are mutually exclusive.",
"A high-level overview of AAs is shown in Figure",
"1. Intuitively, each BERT separately predicts a systematic inference label, each of which represents a mode 3 of the annotations.",
"The representations of these three labels are further aggregated 3 It's possible that three modes collapse to (almost) a point.",
"as augmented information to enhance final fine-grained NLI label prediction (see Eq. 1).",
"If we view the AAs as a committee of three members, our architecture is reminiscent of the Query by Committee (QBC) (Seung et al., 1992), an effective approach for active learning paradigm.",
"The essence of QBC is to select unlabeled data for labeling on which disagreement among committee members (i.e., learners pre-trained on the same labeled data) occurs.",
"The selected data will be labeled by an oracle (e.g., domain experts) and then used to further train the learners.",
"Likewise, in our approach, each AA votes for an item independently.",
"However, the purpose is to detect disagreements instead of using disagreements as a measure to select items for further annotations.",
"Moreover, in our AAs, the three members are trained on three disjoint annotation partitions for each item (see Section 3.2).",
"We first sort the annotations in descending order for each item and divide them into three partitions.",
"4 For each partition, we generate an auxiliary label derived from the annotation mean.",
"If the mean is greater/smaller than +0.5/-0.5, then it's entail-ment/contradiction; otherwise, it's neutral.",
"The first BERT model is always enforced to predict the auxiliary label of the first partition to simulate an entailment-biased annotator.",
"Likewise, the second and third BERT models are trained to simulate neutral-biased and contradiction-biased annotators.",
"Each BERT produces a pooled representation for the [CLS] token.",
"The three representations are passed through a multi-layer perceptron (MLP) to obtain the finer-grained NLI label: P ( y | x ) = softmax( W s tanh( W t [ e ; n ; c ])) (1) 4 For example, if there are 8 annotations for a given item, the annotations are divided into partitions of size 3, 2 and",
"with [ e ; n ; c ] being the concatenation of three learned representations out of e ntailment-biased, n eutral-biased and c ontradiction-biased BERT models.",
"W s and W t are parameters to be learned.",
"The overall loss is defined as the weighted sums of four cross-entropy losses: loss = r loss f + 1 r 3 ( loss e + loss n + loss c ) (2) where r [0 , 1] controls the primary finer-grained NLI label prediction task loss ratio.",
"We include five baselines to compare with: Always 0 : Always predict Disagreement.",
"CBOW (Continuous Bags of Words): Each item is represented as the average of its tokens' GLOVE vectors (Pennington et al., 2014).",
"Heuristic baseline : Linguistics-driven rules (de-tailed in Appendix A), adapted from Jiang and de Marneffe (2019b); e.g., conditional environment discriminates for disagreement items.",
"Vanilla BERT : (Devlin et al., 2019) Straightforwardly predict among 4 finer-grained NLI labels.",
"Joint BERT : Two BERT models are jointly trained, each of which has a different speciality.",
"The first one (2-way) identifies whether a sentence pair is a disagreement item.",
"If not, this item is fed into the second BERT (3-way) which carries out systematic inference.",
"For all baselines involving BERT, we follow the standard practice of concatenating the premise and the hypothesis with [SEP] .",
"Table 3 gives the accuracy and F1 for each baseline and AAs, on the CB dev and test sets.",
"We run each model 10 times, and report the average.",
"CBOW is essentially the same as the Always 0 baseline as it keeps predicting Disagreement regardless of the input.",
"The Heuristic baseline achieves competitive performance on the dev set, though it has a significantly worse result on the test set.",
"Not surprisingly, both BERT-based baselines outperform the Heuristic on the test set: fine-tuning BERT often lead to better performance, including for NLI (Peters et al., 2019; McCoy et al., 2019).",
"These observations are consistent with Jiang and de Marneffe (2019b) who observed a similar trend, though only on systematic inferences.",
"Our proposed AAs perform consistently better than all baselines, and statistically significantly better on the test set (t-test, p 0.01).",
"Also, AAs achieve a smaller standard deviation on the test set within the 10 runs, indi-1 Premise: B: Yeah, it is.",
"A: For instance, B: I'm a historian, and my father had kept them, I think, since nineteen twenty-seven uh, but he burned the ones from twenty-seven to fi-, A: My goodness.",
"B: I could not believe he did that, Hypothesis: his father burned the ones from twenty-seven Heuristics: C V. BERT: D J. BERT: E AAs: E { E , E , E } Gold: E [3, 3, 3, 3, 3, 2, 2, -1] 2 Premise: She was about to tell him that was his own stupid fault and that she wasn't here to wait on him particularly since he had proved to be so inhospitable. But she bit back the words. Perhaps if she made herself useful he might decide she could stay for a while at least just until she got something else sorted out. Hypothesis: she could stay Heuristics: D V. BERT: D J. BERT: D AAs: N { N , N , N } Gold: N [3, 0, 0, 0, 0, 0, 0, 0, 0, 0] 3 Premise: A: but that is one of my solutions. Uh... B: I know here in Dallas that they have just instituted in the last couple of years, uh, a real long period of time that you can absentee vote before the elections. And I do not think they have seen a really high improvement. Hypothesis: they have seen a really high improvement. Heuristics: C V. BERT: C J. BERT: C AAs: C { C , C , C } Gold: C [-1, -2, -2, -2, -2, -2, -2, -2, -3, -3] 4 Premise: B: So did you commute everyday then or, A: No. B: Oh, okay. A: No, no, it was a six hour drive. B: Oh, okay, when you said it was quite a way away, I did not know that meant you had to drive like an hour Hypothesis: speaker A had to drive like an hour Heuristics: C V. BERT: D J. BERT: E AAs: D { E , C , C } Gold: D [3, 2, 2, 1, 0, 0, -1, -1, -1, -3] 5 Premise: The assassin's tone and bearing were completely confident. If he noticed that Zukov was now edging further to the side widening the arc of fire he did not appear to be troubled. Hypothesis: Zukov was edging further to the side Heuristics: DV. BERT: DJ. BERT: D AAs: D { E , E , N } Gold: E [3, 3, 3, 3, 2, 2, 1, 1] 6 Premise: B: Yeah, and EDS is very particular about this, hair cuts, A: Wow. B: I mean it was like you can't have, you know, such and such facial hair, no beards, you know, and just really detailed. A: A: I don't know that that would be a good environment to work in. Hypothesis: that would be a good environment to work in Heuristics: C V. BERT: C J. BERT: D AAs: C { C , C , C } Gold: D [2, 0, 0, 0, 0, -1, -2, -3] 7 Premise: Willy did mention it. I was puzzled, I 'll admit, but now I understand.",
"How did you know Heather had been there? Hypothesis: Heather had been there Heuristics: N V. BERT: E J. BERT: E AAs: E { E , E , E } Gold: D [3, 3, 3, 2, 1, 1, 0, 0, 0] Table 4: Models' predictions for CB test items.",
"cating that it is more stable and potentially more robust to wild environments.",
"Table 3 also gives F1 for each class on the test set.",
"AAs outperform all BERT-based models under all classes.",
"However, compared with the Heuristic, AAs show an inferior result on Neutral items mainly due to the lack of Neutral training data.",
"The first 4 examples in Table 4 show examples for which AAs make the correct prediction while other baselines might not.",
"The confusion matrix in Table 5 shows that the majority ( 60%) of errors come from wrongly predicting a systematic inference item as a disagreement item.",
"In 91% of Predict Gold E N C D Total E 37 2 0 13 52 N 1 10 0 3 14 C 0 0 34 13 47 D 20 7 20 80 127 Total 58 19 54 109 240 Table 5: Confusion matrix for the test set.",
"such errors, AAs predict that there is more than one mode for the annotation (i.e., the three labels predicted by individual annotators in AAs are not unanimous), as in example 5 in Table",
"4. AAs are thus predicting more modes than necessary when the annotation is actually following a uni-modal distribution.",
"On the contrary, when the item is supposed to be a disagreement item but is missed by AAs (as in example 6 and 7 in Table 4), AAs mistakenly predict that there is only one mode in the annotations 78% of the time.",
"It thus seems that a method which captures accurately the number of modes in the annotation distribution would lead to a better model.",
"We also examine the model performance for different linguistic constructions to investigate whether the model learns some of the linguistic patterns present in the Heuristic baseline.",
"The Heuristic rules are strongly tied to the embedding environments.",
"Another construction used is one which can lead to neg-raising reading, where a negation in the matrix clause is interpreted as negating the content of the complement, as in example 3 (Table 4) where I do not think they have seen a really high improvement is interpreted as I think they did not see a really high improvement .",
"Neg-raising readings often occur with know , believe or think in the first person under negation.",
"There are 85 such items in the test set: 41 contradictions (thus neg-raising items), 39 disagreements and 5 entailments.",
"Context determines whether a neg-raising inference is triggered (An and White, 2019).",
"Table 6 gives F1 scores for the Heuristic, BERT models and AAs for items under the different embedding environments and potential neg-raising items in the test set.",
"Though AAs achieve the best overall results, it suffers under conditional and question environments, as the corresponding training data is scarce (9.04% and 14.17%, respec-tively).",
"The Heuristic baseline always assigns contradiction to the I don't know/believe/think items, thus capturing all 41 neg-raising items but missing disagreements and entailments.",
"BERT, a SOTA NLP model, is not great at capturing such items either: 71.64 F1 on contradiction vs. 52.84 on the others (Vanilla BERT); 71.69 F1 vs. 56.16 (Joint BERT).",
"Our AAs capture neg-raising items better with 77.26 F1 vs. 59.38, showing an ability to carry out context-dependent inference on top of the learned linguistic patterns.",
"Table 7, comparing performance on test items correctly predicted by the linguistic rules vs. items for which context-dependent reasoning is necessary, confirms this: AAs outperform the BERT baselines in both categories.",
"We introduced finer-grained natural language inference.",
"This task aims at teasing systematic inferences from inherent disagreements, overlooked in prior work.",
"We show that our proposed AAs, which simulate the uncertainty in annotation process by capturing the modes in annotations, perform statistically significantly better than all baselines.",
"However the best performance obtained ( 66%) is still far from achieving robust NLU, leaving room for improvement.",
"We thank the anonymous reviewers for their valuable feedback.",
"This material is based upon work supported by the National Science Foundation under Grant No.",
"IIS-1845122."
] | [
"abstain",
"objective",
"objective",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"other",
"other"
] |
[
"Knowledge bases often consist of facts which are harvested from a variety of sources, many of which are noisy and some of which con-flict, resulting in a level of uncertainty for each triple.",
"Knowledge bases are also often incomplete, prompting the use of embedding methods to generalize from known facts, however existing embedding methods only model triple-level uncertainty and reasoning results lack global consistency.",
"To address these shortcomings, we propose BEUrRE , a novel uncertain knowledge graph embedding method with calibrated probabilistic semantics.",
"BEUrRE models each entity as a box (i.e. axis-aligned hyperrectangle), and relations between two entities as affine transforms on the head and tail entity boxes.",
"The geometry of the boxes allows for efficient calculation of intersections and volumes, endowing the model with calibrated probabilistic semantics and facilitating the incorporation of relational constraints.",
"Extensive experiments on two benchmark datasets show that BEUrRE consistently outperforms baselines on confidence prediction and fact ranking due to it's probabilistic calibration and ability to capture high-order dependencies among facts.",
"1 1 Introduction Knowledge graphs (KGs) provide structured representations of facts about real-world entities and relations.",
"In addition to deterministic KGs (Bollacker et al., 2008; Lehmann et al., 2015; Mahdisoltani et al., 2015), much recent attention has been paid to uncertain KGs (or UKGs).",
"UKGs, such as ProBase (Wu et al., 2012), NELL (Mitchell et al., 2018), and ConceptNet (Speer et al., 2017), associate each fact (or triple) with a confidence score representing the likelihood of that fact to be true.",
"Such uncertain knowledge representations critically capture Indicating equal contribution.",
"the uncertain nature of reality, and provide more precise reasoning.",
"For example, while both (Honda, competeswith, Toyota) and (Honda, competeswith, Chrysler) look somewhat correct, the former fact should have a higher confidence than the latter one, since Honda and Toyota are both Japanese car manufacturers and have highly overlapping customer bases.",
"Similarly, while (The Beatles, genre, Rock) and (The Beatles, genre, Pop) are both true, the first one may receive a slightly higher confidence, since the Beatles is generally considered a rock band.",
"Such confidence information is important when answering questions like Who is the main competitor of Honda?",
", or extracting confident knowledge for drug repurposing (Sosa et al., 2020).",
"To facilitate automated knowledge acquisition for UKGs, some UKG embedding models (Chen et al., 2019; Kertkeidkachorn et al., 2019) have recently been proposed.",
"Inspired by the works about deterministic KG embeddings (Yang et al., 2015; Bordes et al., 2013), existing approaches model entities and relations as points in low-dimensional vector space, measure triple plausibility with vector similarity (eg. distance, dot-product), and map the plausibility to the confidence range of [0 , 1] .",
"For instance, the representative work UKGE (Chen et al., 2019) models the triple plausibility in the form of embedding product (Yang et al., 2015), and trains the embedding model as a regressor to predict the confidence score.",
"One interpretation of existing methods is that they model each triple using a binary random variable, where the latent dependency structure between different binary random variables is captured by vector similarities.",
"Without an explicit dependency structure it is difficult to enforce logical reasoning rules to maintain global consistency.",
"In order to go beyond triple-level uncertainty modeling, we consider each entity as a binary random variable.",
"However, representing such a probability distribution in an embedding space and reasoning over it is non-trivial.",
"It is difficult to model marginal and joint probabilities for entities using simple geometric objects like vectors.",
"In order to encode probability distributions in the embedding space, recent works (Lai and Hockenmaier, 2017; Vilnis et al., 2018; Li et al., 2019; Dasgupta et al., 2020) represent random variables as more complex geometric objects, such as cones and axis-aligned hyperrectangles ( boxes ), and use volume as the probability measure.",
"Inspired by such advances of probability measures in embeddings, we present BEUrRE ( B ox E mbedding for U nce r tain RE lational Data) 2 .",
"BEUrRE represents entities as boxes.",
"Relations are modeled as two separate affine transforms on the head and tail entity boxes.",
"Confidence of a triple is modeled by the intersection between the two transformed boxes.",
"Fig. 1 shows how a fact about the genre of the Beatles is represented under our framework.",
"Such representation is not only inline with the human perception that entities or concepts have different levels of granularity, but also allows more powerful domain knowledge representation.",
"UKGE (Chen et al., 2019) has demonstrated that introducing domain knowledge about relation properties (e.g. transitivity) can effectively enhance reasoning on UKGs.",
"While UKGE uses Probabilistic Soft Logic (PSL) (Bach et al., 2017) to reason for unseen facts and adds the extra training samples to training, such a method can lead to error propagation and has limited scope of application when UKG is sparse.",
"In our work, we propose sufficient conditions for these relation properties to be preserved in the embedding space and directly model the relation properties by regularizing relation-specific transforms based on constraints.",
"This technique is more robust to noise and has wide coverage that is not restricted by the scarcity of the existing triples.",
"Extensive experiments on two benchmark datasets show that BEUrRE effectively captures the uncertainty, and consistently outperforms the baseline models on ranking and predicting confidence of unseen facts.",
"UKG Embeddings.",
"A UKG assigns a confidence score to each fact.",
"The development of relation extraction and crowdsourcing in recent years enabled the construction of many large-scale uncertain knowledge bases.",
"ConceptNet (Speer et al., 2017) is a multilingual KG of commonsense concepts, where triples are assigned with confidence measures reflecting crowdsourcing agreement.",
"NELL (Mitchell et al., 2018) collects facts from web pages with an active-learnable information extraction model, and measures their confidence scores by semi-supervised learning with the Expectation-Maximum (EM) algorithm.",
"Probase (Wu et al., 2012) is a general probabilistic taxonomy obtained from syntactic extraction.",
"Aforementioned UKGs have supported numerous knowledge-driven applications, such as literature-based drug repurposing (Sosa et al., 2020).",
"Recently, a few UKG embedding methods have been proposed, which seek to facilitate automated knowledge acquisition for UKGs.",
"UKGE (Chen et al., 2019) is the first work of this kind, which models triple plausibility as product of embedding vectors (Yang et al., 2015), and maps the plausibility to the confidence score range of [0 , 1] .",
"To further enhance the performance, UKGE incorporates PSL based constraints (Bach et al., 2017) to help enforce the global consistency of predicted knowledge.",
"UOKGE (Boutouhami et al., 2020) jointly encodes the graph structure and the ontology structure to improve the confidence prediction performance, which however requires an additional ontology of entity types that is not always available to all KGs.",
"In addition to the above UKG embeddings models, there is also a matrix-factorization-based approach URGE that seeks to embed uncertain graphs (Hu et al., 2017).",
"However, URGE only considers the node proximity in the networks.",
"URGE cannot handle multi-relational data and only generates node embeddings.",
"Geometric Embeddings.",
"Developing embedding methods to represent elements using geometric objects with more complex structures than (Eu-clidean) vectors is an active area of study.",
"Poincar embeddings (Nickel and Kiela, 2017) represent entities in hyperbolic space, leveraging the inductive bias of negative curvature to fit hierarchies.",
"Order embeddings (Vendrov et al., 2016) take a region-based approach, representing nodes of a graph using infinite cones, and using containment between cones to represent edges.",
"Hyperbolic entailment cones (Ganea et al., 2018) combine order embeddings with hyperbolic geometry.",
"While these methods show various degrees of promise when embedding hierarchies, they do not provide scores between entities that can be interpreted probabilistically, which is particularly useful in our setting.",
"Lai and Hockenmaier (2017) extend order embeddings with a probabilistic interpretation by integrating the volume of the infinite cones under the negative exponential measure, however the rigid structure imposed by the cone representation limits the representational capacity, and the resulting model cannot model negative correlation or disjointness.",
"Introduced by Vilnis et al. (2018), probabilistic box embeddings represent elements using axis-aligned hyperrectangles (or boxes ).",
"Box embeddings not only demonstrate improved performance on modeling hierarchies, such embeddings also capture probabilistic semantics based on box volumes, and are capable of compactly representing conditional probability distributions.",
"A few training improvement methods for box embeddings have been proposed (Li et al., 2019; Dasgupta et al., 2020), and we make use of the latter, which is termed GumbelBox after the distribution used to model endpoints of boxes.",
"While box embeddings have shown promise in representing hierarchies, our work is the first use of box embeddings to represent entities in multi-relational data.",
"Query2Box (Ren et al., 2020) and BoxE (Abboud et al., 2020) make use of boxes in the loss function of their models, however entities themselves are represented as vectors, and thus these models do not benefit from the probabilistic semantics of box embeddings, which we rely on heavily for modeling UKGs.",
"In (Patel et al., 2020), the authors demonstrate the capability of box embeddings to jointly model two hierarchical relations, which is improved upon using a learned transform in (Dasgupta et al., 2021).",
"Similarly to Ren et al. (2020) and Dasgupta et al. (2021), we also make use of a learned transform for each relation, however we differ from Ren et al. (2020) in that entities themselves are boxes, and differ from both in the structure of the learned transform.",
"Before we move on to the presented method in this work, we use this section to introduce the background of box embeddings and the addressed task.",
"A UKG consists of a set of weighted triples G = { ( l, s l ) } .",
"For each pair ( l, s l ) , l = ( h, r, t ) is a triple representing a fact where h, t E (the set of entities) and r R (the set of relations), and s l [0 , 1] represents the confidence score for this fact to be true.",
"Some examples of weighted triples from NELL are (Honda, competeswith, Toyota) : 1.00 and (Honda, competeswith, Chrysler) : 0.94.",
"UKG Reasoning.",
"Given a UKG G , the uncertain knowledge graph reasoning task seeks to predict the confidence of an unseen fact ( h, r, t ) .",
"In this section we give a formal definition of probabilistic box embeddings, as introduced by Vilnis et al. (2018).",
"A box is an n -dimensional hyperrect-angle, i.e. a product of intervals d (cid:89) i =1 [ x m i , x M i ] , where x m i < x M i .",
"Given a space Box R n , we define B ( Box ) to be the set of all boxes in Box .",
"Note that B ( Box ) is closed under intersection, and the volume of a box is simply the product of side-lengths.",
"Vilnis et al. (2018) note that this allows one to interpret box volumes as unnormalized probabilities.",
"This can be formalized as follows.",
"Definition 3.1.",
"Let ( Box , E , P Box ) be a probability space, where Box R n and B ( Box ) E .",
"Let Y be the set of binary random variables Y on Box such that Y 1 (1) B ( Box ) .",
"A probabilistic box embedding of a set S is a function : S Y .",
"We typically denote f ( s ) =: Y s and Y 1 s (1) =: Box( s ) .",
"Essentially, to each element of S we associate a box which, when taken as the support set of a binary random variable, allows us to interpret each element of S as a binary random variable.",
"Using boxes for the support sets allows one to easily calculate marginal and conditional probabilities, for example if we embed the elements { CAT , MAMMAL } as boxes in Box = [0 , 1] d with P Box as Lebesgue measure, then P ( MAMMAL | CAT ) = P Box ( XMAMMAL | XCAT ) = Vol(Box( MAMMAL ) Box( CAT )) Vol(Box( CAT )) .",
"We further give a brief description of the GumbelBox method, which we rely on for training our box embeddings (Dasgupta et al., 2020).",
"As described thus far, probabilistic box embeddings would struggle to train via gradient descent, as there are many settings of parameters and objectives which have no gradient signal.",
"(For example, if boxes are disjoint but should overlap.)",
"To mitigate this, Dasgupta et al. (2020) propose a latent noise model, where the min and max coordinates of boxes in each dimension are modeled via Gumbel distributions, that is Box( X ) = d (cid:89) i =1 [ x m i , x M i ] where x m i GumbelMax( m i , ) , x M i GumbelMin( M i , ) .",
"m i thereof is the location parameter, and is the (global) variance.",
"The Gumbel distribution was chosen due to its min/max stability, which means that the set of all Gumbel boxes are closed under intersection.",
"Dasgupta et al. (2020) go on to provide an approximation of the expected volume of a Gumbel box, E [Vol(Box( X ))] d (cid:89) i =1 log (cid:16) 1 + exp (cid:16) M i m i 2 (cid:17)(cid:17) .",
"E [ P Box ( XA | XB )] E [Vol(Box( A ) Box( B ))] E [Vol(Box( B ))]",
"and Dasgupta et al. (2020) empirically demonstrate that this approach leads to improved learning when targeting a given conditional probability distribution as the latent noise essentially ensembles over a large collection of boxes which allows the model to escape plateaus in the loss function.",
"We therefore use this method when training box embeddings.",
"Remark 3.1.",
"While we use Gumbel boxes for training, intuition is often gained by interpreting these boxes as standard hyperrectangles, which is valid as the Gumbel boxes can be seen as a distribution over such rectangles, with the Gumbel variance parameter acting as a global measure of uncertainty.",
"We thus make statements such as Box( X ) Box( Y ) , which, strictly speaking, are not well-defined for Gumbel boxes.",
"However we can interpret this probabilistically as P ( Y | X ) = 1 which coincides with the conventional interpretation when = 0 .",
"In this section, we present our UKG embedding model BEUrRE .",
"The proposed model encodes entities as probabilistic boxes and relations as affine transforms.",
"We also discuss the method to incorporate logical constraints into learning.",
"BEUrRE represents entities as Gumbel boxes, and a relation r acting on these boxes by translation and scaling.",
"Specifically, we parametrize a Gumbel box Box( X ) using a center cen(Box( X )) R d and offset o(Box( X )) R d + , where the location parameters are given by m i = cen(Box( X )) o(Box( X )) , M i = cen(Box( X )) + o(Box( X )) .",
"We consider transformations on Gumbel boxes parametrized by a translation vector R d and a scaling vector R d + such that cen( f (Box( X ); , )) = cen(Box( X )) + , o( f (Box( X ); , )) = o(Box( X )) , where is the Hadamard product.",
"We use separate actions for the head and tail entities of a relation, which we denote f r and g r , and omit the explicit dependence on the learned parameters and .",
"Remark 4.1.",
"Note that these relations are not an affine transformations of the space , Box , rather they perform a transformation of a box .",
"These functions form an Abelian group under composition, and furthermore define a transitive, faithful group action on the set of (Gumbel) boxes.",
"We can think of the box f r (Box( h )) as the support set of a binary random variable representing the concept h in the context of the head position of relation r , for example Box( THEBEATLES ) is a latent representation of the concept of The Beatles, and f GENRE (Box( THEBEATLES )) represents The Beatles in the context of genre classification as the object to be classified.",
"The sparsity of real-world UKGs makes learning high quality representations difficult.",
"To address this problem, previous work (Chen et al., 2019) introduces domain knowledge about the properties of relations (e.g., transitivity) and uses PSL over first-order logical rules to reason for unseen facts and create extra training samples.",
"While this technique successfully enhances the performance by incorporating constraints based on relational properties, the coverage of such reasoning is still limited by the density of the graph.",
"In UKGE, the confidence score of a triple can be inferred and benefit training only if all triples in the rule premise are already present in the KG.",
"This leads to a limited scope of application, particularly when the graph is sparse.",
"In our work, we propose sufficient conditions for these relation properties to be preserved in the embedding space and directly incorporating the relational constraints by regularizing relation-specific transforms.",
"Compared to previous work, our approach is more robust to noise since it does not hardcode inferred confidence for unseen triples, and it has wide coverage that is not restricted by the scarcity of the existing triples.",
"In the following, we discuss the incorporation of two logical constraints transitivity and composition in the learning process.",
"We use capital letters A, B, C to represent universally quantified entities from UKG and use to denote a set of boxes sampled from B ( Box ) .",
"Transitivity Constraint.",
"A relation r is transitive if ( A, r, B ) ( B, r, C ) = ( A, r, C ) .",
"An example of a transitive relation is hypernymy .",
"The objective of imposing a transitivity constraint in learning is to preserve this property of the relation in the embedding space, i.e. to ensure that ( A, r, C ) will be predicted true if ( A, r, B ) and ( B, r, C ) are true.",
"This objective is fulfilled if g r ( Box ( B )) contains f r ( Box ( B )) .",
"An illustration of the box containment relationships is given in Fig",
"2. Thus, we constrain f r and g r so that g r ( u ) Figure 2: Illustration of how the constraint that g r ( u ) contains f r ( u ) preserves transitivity of relation r in the embedding space.",
"L tr ( r ) = 1 | | (cid:88) u (cid:107) P Box ( g r ( u ) | f r ( u )) 1 (cid:107) 2",
"Composition Constraint.",
"A relation r 3 is composed of relation r 1 and relation r 2 if ( A, r 1 , B ) ( B, r 2 , C ) = ( A, r 3 , C ) .",
"For example, the relation atheletePlaysSports can be composed of relations atheletePlaysForTeam and teamPlaysSports .",
"To preserve the relation composition in the embedding space, we constrain that the relation-specific mappings f r 3 and g r 3 are the composite mappings of f r 1 , f r 2 and g r 1 , g r 2 respectively: f r 3 = f r 2 f r 1 ; g r 3 = g r 2 g r 1 .",
"where is the mapping composition operator.",
"Thus, for any u Box , we expect that f r 3 ( u ) is the same as f r 2 ( f r 1 ( u )) and g r 3 ( u ) is the same as g r 2 ( g r 1 ( u )) .",
"We accordingly add the following regularization term L c ( r 1 , r 2 , r 3 ) = 1 | | (cid:88) u f r 3 ( u ) f r 2 ( f r 1 ( u )) + g r 3 ( u ) g r 2 ( g r 1 ( u )) where is defined as Box 1 Box 2 = (cid:107) 1 P Box (Box 1 | Box 2 ) (cid:107) 2 + (cid:107) 1 P Box (Box 2 | Box 1 ) (cid:107) 2 .",
"The learning process of BEUrRE optimizes two objectives.",
"The main objective optimizes the loss for a regression task and, simultaneously, a constrained regularization loss enforces the aforementioned constraints.",
"Let L + be the set of observed relation facts in training data.",
"The goal is to minimize the mean squared error (MSE) between the ground truth confidence score s l and the prediction ( l ) for each relation l L + .",
"Following UKGE (Chen et al., 2019), we also penalize the predicted confidence scores of facts that are not observed in UKG.",
"The main learning objective is as follows: J 1 = (cid:88) l L + | ( l ) s l | 2 + (cid:88) l L | ( l ) | 2 .",
"where L is a sample set of the facts not observed in UKG, and is a hyper-parameter to weigh unobserved fact confidence penalization.",
"Similar to previous works, we sample those facts by corrupting the head and the tail for observed facts to generate L during training.",
"In terms of constraints, let R tr be the set of transitive relations, R c be the set of composite relation groups, and w tr and w c be the regularization co-efficients.",
"We add the following regularization to impose our constraints on relations: J 2 = w tr (cid:88) r R tr L tr ( r ) + w c (cid:88) ( r 1 ,r 2 ,r 3 ) R c L c ( r 1 , r 2 , r 3 ) .",
"Combining both learning objectives, the learning process optimizes the joint loss J = J 1 + J 2 .",
"Once BEUrRE is trained, the model can easily infer the confidence of a new fact ( h, r, t ) based on the confidence score function ( h, r, t ) defined in Section 4.1.",
"This inference mechanism easily supports other types of reasoning tasks, such as inferring the plausibility of a new fact, and ranking multiple related facts.",
"The experiments presented in the next section will demonstrate the ability of BEUrRE to perform those reasoning tasks.",
"In this section we present evaluation of our model on two UKG reasoning tasks, i.e. confidence prediction and fact ranking.",
"More experimentation details are in Appendices.",
"Datasets.",
"We follow Chen et al. (2019) and evaluate our models on CN15k and NL27k benchmarks, which are subsets of ConceptNet (Speer et al., 2017) and NELL (Mitchell et al., 2018) respectively.",
"Table 1 gives the statistics of the datasets.",
"We use the same split provided by Chen et al. (2019): 85% for training, 7% for validation, and 8% for testing.",
"We exclude the dataset PPI5k, the subgraph of the protein-protein interaction (PPI) network STRING (Szklarczyk et al., 2016), where the supporting scores of PPI information are indicators based on experimental and literary verification, instead of a probabilistic measure.",
"Logical constraints.",
"We report results of both versions of our model with and without logical constraints, denoted as BEUrRE (rule+) and BEUrRE respectively.",
"For a fair comparison, we incorporate into BEUrRE (rule+) the same set of logical constraints as UKGE (Chen et al., 2019).",
"Table 2 gives a few examples of the relations on which we impose constraints.",
"Baselines.",
"We compare our models with UKG embedding models as well as deterministic KG embedding models.",
"UKG embedding models include UKGE (Chen et al., 2019) and URGE (Hu et al., 2017).",
"While UKGE has multiple versions incorporated with different regression functions, we report the results of the best performing one with the logistic function.",
"We also include results for both settings with and without constraints, marked as UKGE (rule+) and UKGE in result tables respectively.",
"URGE was originally designed for probabilistic homogeneous graphs and cannot handle multi-relational graphs, so accordingly we ignore relation information when embedding a UKG.",
"UOKGE (Boutouhami et al., 2020) cannot serve as a baseline because it requires additional ontology information for entities that is not available to these UKGs.",
"Deterministic KG embedding models TransE (Bordes et al., 2013), DistMult (Yang et al., 2015), ComplEx (Trouillon et al., 2016), RotatE (Sun et al., 2019), and TuckER (Balazevic et al., 2019) have demonstrated high performance on reasoning tasks for deterministic KGs, and we also include them as baselines.",
"These models cannot predict confidence scores for uncertain facts, so we compare our method with them only on the ranking task.",
"Following Chen et al. (2019), we only use facts with confidence above the threshold = 0 .",
"85 to train deterministic models.",
"Model configurations.",
"We use Adam (Kingma and Ba, 2014) as the optimizer and fine-tune the following hyper-parameters by grid search based on the performance on the validation set, i.e. MSE for confidence prediction and normalized Discounted Cumulative Gain (nDCG) for fact ranking.",
"Hyper-parameter search range and the best hyper-parameter configurations are given in Appendix A.1.",
"Training terminates with early stopping based on the same metric with a patience of 30 epochs.",
"We repeat each experiment five times and report the average results.",
"This task seeks to predict the confidence of new facts that are unseen to training.",
"For each uncertain fact ( l, s l ) in the test set, we predict the confidence of l and report the mean squared error (MSE) and mean absolute error (MAE).",
"Results.",
"Results are reported in Table",
"3. We compare our models with baselines under the uncon-Variants uncons.",
"strained and logically constrained (marked with rule+ ) settings respectively.",
"Under both settings, BEUrRE outperforms the baselines in terms of MSE on both datasets.",
"Under the unconstrained setting, BEUrRE improves MSE of the best baseline UKGE by 0.012 (ca. 14% relative improvement) on CN15k and 0.003 (ca. 11% relative improvement) on NL27k.",
"The enhancement demonstrates that box embeddings can effectively improve reasoning on UKGs.",
"It is worth noting that even without constraints in learning, BEUrRE can still achieve comparable MSE and MAE to the logically constrained UKGE (rule+) on both datasets and even outperforms UKGE (rule+) on CN15k.",
"Considering that constraints of relations in CN15k mainly describe transitivity, the aforementioned observation is consistent with the fact that box embeddings are naturally good at capturing transitive relations, as shown in the recent study (Vilnis et al., 2018).",
"With logical constraints, BEUrRE (rule+) further enhances the performance of BEUrRE and reduces its MSE by 0.0031 (ca. 4% relative improvement) on CN15k and 0.0036 (ca. 15% relative improvement) on NL27k.",
"This is as expected, since logical constraints capture higher-order relations of facts and lead to more globally consistent reasoning.",
"We also observe that BEUrRE (rule+) brings larger gains over BEUrRE on NL27k, where we have both transitivity constraints and composition constraints, than on CN15k with only transitivity constraints incorporated.",
"In general, with box embeddings, BEUrRE effectively improves reasoning on UKGs with better captured fact-wise confidence.",
"Furthermore, the results under the logically constrained setting show the effectiveness of improving reasoning with higher-order relations of uncertain facts.",
"Ablation Study.",
"To examine the contribution from Gumbel distribution to model box boundaries and the effectiveness of representing relations as two Dataset CN15K NL27k Metrics linear exp. linear exp.",
"separate transforms for head and tail boxes, we conduct an ablation study based on CN15k.",
"The results for comparison are given in Table",
"4. First, we resort to a new configuration of BEUrRE where we use smoothed boundaries for boxes as in (Li et al., 2019) instead of Gumbel boxes.",
"We refer to boxes of this kind as soft boxes.",
"Under the unconstrained setting, using soft boxes increases MSE by 0.0033 on CN15k (ca. 4% relative degrada-tion), with even worse performance observed when adding logical constraints.",
"This confirms the find-ing by Dasgupta et al. (2020) that using Gumbel distribution for boundaries greatly improves box embedding training.",
"Next, to analyze the effect of using separate transforms to represent a relation, we set the tail transform g r as the identity function.",
"For logical constraint incorporation, we accordingly update the constraint on transitive relation r as P Box ( u | f r ( u )) = 1 , u Box , which requires that u always contains f r ( u ) , i.e. the translation vector of f r is always zero and elements of the scaling vector are always less than 1.",
"Although there is little difference between using one or two transforms under the unconstrained setting, under the logically constrained setting, the constraint is too stringent to be preserved with only one transform.",
"Case study.",
"To investigate whether our model can encode meaningful probabilistic semantics, we present a case study about box volumes.",
"We examine the objects of the atLocation predicate on CN15k and check which entity boxes have larger volume and cover more entity boxes after the relation transformation.",
"Ideally, geographic entities with larger areas or more frequent mentions should be at the top of the list.",
"When using the BEUrRE (rule+) model, the top 10 in all entities are place, town, bed, school, city, home, house, capital, church, camp , which are general concepts.",
"Among the observed objects of the atLocation predicate, the entities that have the least coverage are Tunisia, Morocco, Algeria, Westminster, Veracruz, Buenos Aires, Emilia-Romagna, Tyrrhenian sea, Kuwait, Serbia .",
"Those entities are very specific locations.",
"This observation confirms that the box volume effectively represents probabilistic semantics and captures specificity/granularity of concepts, which we believe to be a reason for the performance improvement.",
"Multiple facts can be associated with the same entity.",
"However, those relevant facts may appear with very different plausibility.",
"Consider the example about Honda Motor Co. in Section 1, where it was mentioned that (Honda, competeswith, Toyota) should have a higher belief than (Honda, com-peteswith, Chrysler) .",
"Following this intuition, this task focuses on ranking multiple candidate tail entities for a query ( h, r, ? t ) in terms of their confidence.",
"Evaluation protocol.",
"Given a query ( h, r, ? t ) , we rank all the entities in the vocabulary as tail entity candidates and evaluate the ranking performance using the normalized Discounted Cumulative Gain (nDCG) (Li et al., 2009).",
"The gain in retrieving a relevant tail t 0 is defined as the ground truth confidence s ( h,r,t 0 ) .",
"Same as Chen et al. (2019), we report two versions of nDCG that use linear gain and exponential gain respectively.",
"The exponential gain puts stronger emphasis on the most relevant results.",
"Results.",
"We report the mean nDCG over the test query set in Table 5.",
"Although the deterministic models do not explicitly capture the confidence of facts, those models are trained with high-confidence facts and have a certain ability to differentiate high confidence facts from lesser ones.",
"URGE ignores relation information and yields worse predictions than other models.",
"UKGE explicitly models uncertainty of facts and is the best performing baseline.",
"The proposed BEUrRE leads to more improvements under both the unconstrained and logically constrained settings.",
"Under the unconstrained setting, BEUrRE offers consistently better performance over UKGE.",
"Specifically, on CN15k, BEUrRE leads to 0.027 improvement in both linear nDCG and exponential nDCG.",
"On NL27k, it offers 0.009 higher linear nDCG and 0.013 higher exponential nDCG.",
"Similar to the results on the confidence prediction task, even unconstrained BEUrRE is able to outperform the logically constrained UKGE (rule+) on CN15k without incorporating any constraints of relations.",
"This further confirms the superior expressive power of box embeddings.",
"This paper presents a novel UKG embedding method with calibrated probabilistic semantics.",
"Our model BEUrRE encodes each entity as a Gum-ble box representation whose volume represents marginal probability.",
"A relation is modeled as two affine transforms on the head and tail entity boxes.",
"We also incorporate logic constraints that capture the high-order dependency of facts and enhance global reasoning consistency.",
"Extensive experiments show the promising capability of BEUrRE on confidence prediction and fact ranking for UKGs.",
"The results are encouraging and suggest various extensions, including deeper transformation architectures as well as alternative geometries to allow for additional rules to be imposed.",
"In this context, we are also interested in extending the use of the proposed technologies into more downstream tasks, such as knowledge association (Sun et al., 2020) and event hierarchy induction (Wang et al., 2020).",
"Another direction is to use BEUrRE for ontology construction and population, since box embeddings are naturally capable of capturing granularities of concepts.",
"Real-world UKGs often harvest data from open data sources and may include biases.",
"Reasoning over biased UKGs may support or magnify those biases.",
"While not specifically addressed in this work, the ability to inject logical rules could be one way to mitigate bias, and the ability to interpret the learned representation probabilistically allows the investigation of potential learned biases.",
"All the datasets used in this paper are publicly available and free to download.",
"proposed in the paper aims to model uncertainty in knowledge graphs more accurately, and the effectiveness of the proposed model is supported by the empirical experiment results.",
"insightful comments and suggestions.",
"This material is based upon work sponsored by the DARPA MCS program under Contract No.",
"N660011924033 with the United States Office Of Naval Research, and by Air Force Research Laboratory under agreement number FA8750-20-2-10002.",
"We also thank our colleagues within IESL at UMass Amherst, for their helpful discussions.",
"Michael, Shib and Xiang were supported in part by the Center for Intelligent Information Retrieval and the Center for Data Science, in part by the IBM Research AI through the AI Horizons Network, in part by the University of Southern California subcontract No. 123875727 under Office of Naval Research prime contract No.",
"N660011924032 and in part by the University of Southern California subcontract no. 89341790 under Defense Advanced Research Projects Agency prime contract No.",
"FA8750-17-C-0106.",
"The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"result",
"other",
"other",
"other",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"One key ingredient of neural machine translation is the use of large datasets from different domains and resources (e.g. Europarl, TED talks).",
"These datasets contain documents translated by professional translators using different but consistent translation styles.",
"Despite that, the model is usually trained in a way that neither explicitly captures the variety of translation styles present in the data nor translates new data in different and controllable styles.",
"In this work, we investigate methods to augment the state-of-the-art Transformer model with translator information that is available in part of the training data.",
"We show that our style-augmented translation models are able to capture the style variations of translators and to generate translations with different styles on new data.",
"Indeed, the generated variations differ significantly, up to +4 .",
"5 BLEU score difference.",
"Despite that, human evaluation confirms that the translations are of the same quality.",
"Translators often translate the original content with provided guidelines for styles.",
"1 However, guidelines are supposed to be high level and not comprehensive.",
"Personal stylistic choices are thus wel-come as creative part of the translator's job, as long as their translation style consistency is ensured to the task.",
"By contrast, although neural machine translation (NMT) models (Cho et al., 2014; Sutskever et al., 2014) are trained from these human translations (e.g. Europarl, TED Talks), the models do not explicitly learn to capture the rich variety of translators' styles from the data.",
"This limits their capability to creatively translate new data with different and consistent styles as translators do.",
"We believe that modeling the style of translators is an Y. Wang carried out this work during an internship with Amazon AI.",
"important yet overlooked aspect in NMT.",
"Our contribution, to the best of our knowledge, is to fill this gap for the first time.",
"In particular, our work investigates ways to integrate translator information into NMT, with an emphasis on mimicking the translator's style.",
"Our study uses the TED talk dataset, with four language pairs with translator annotations.",
"We present and compare a set of different methods of using a discrete translator token to model and control translator-related stylistic variations in translation.",
"Note that using a discrete token is a common approach to model and control not only specific traits in translation such as verbosity, politeness and speaker-related variances (Sennrich et al., 2016a; Michel and Neubig, 2018)) but also other aspects in NMT such as language ids (Johnson et al., 2017; Fan et al., 2020).",
"However, our study is the first to use such a discrete token to model the style of translators.",
"It also provides several insights regarding translation style modeling as follows.",
"First, we show that the state-of-the-art Transformer model implicitly learns the style of translators only to a limited extent.",
"Moreover, methods that add translator information to the decoder surprisingly result in NMT that fully ignores the additional knowledge.",
"This is regardless of whether the token is added to the bottom (i.e. the embedding layer) or to the top (i.e. the softmax layer) of the decoder.",
"Meanwhile, methods that add the information to the encoder seem to model the translator's style effectively.",
"Second, we show that our best style-augmented NMT method is able to control the generation of translation in a way that mimics the translator's style, e.g. lexical and grammatical preferences, verbosity.",
"While output produced by the style-augmented NMT can vary significantly with the translator-token values, with BLEU score variations up to +4 .",
"5 , a human evaluation confirms that observed differences are all about style and not translation quality.",
"Finally, we show that the translator information has more impact on NMT than the speaker information, which was investigated by Michel and Neubig (2018).",
"Style itself is a broad concept (Kang and Hovy, 2019).",
"It includes both simple high-level stylistic aspects of language such as verbosity (Marchisio et al., 2019; Agrawal and Carpuat, 2019; Lakew et al., 2019), formality (Niu et al., 2017; Xu et al., 2019), politeness (Mirkin et al., 2015) and complex aspects such as demography (Vanmassenhove et al., 2018; Moryossef et al., 2019; Hovy et al., 2020) and personal traits (Mirkin and Meunier, 2015; Ra-binovich et al., 2017; Michel and Neubig, 2018).",
"Our study focuses on capturing the personal style of translators.",
"The closest work to our study is thus the work of Michel and Neubig (2018), where they study instead the effects of using the speaker information in NMT.",
"In our results, we show that the translator information has indeed more impact to NMT than the speaker information.",
"Finally, another distantly related research line tries to improve the diversity in the top rank translations of an input (Li et al., 2016; Shen et al., 2019; Agrawal and Carpuat, 2020).",
"In fact, adding the translator information to NMT also provides means to generate translations with significantly different stylistic variations.",
"NMT reads an input sequence x = x 1 , ..., x n in the source language with an encoder and then produces an output sequence y = y 1 , ..., y m in the target language.",
"The generation process is performed in a token-by-token manner and its probability can be factored as (cid:81) mj =1 P ( y j | y <j , x ) , where y <j denotes the previous sub-sequence before j -th token.",
"The prediction for each token over the vocabulary V is based on a softmax function as follows: P ( y j | y <j , x ) = softmax ( WV o j + b V ) .",
"Here, o j R d is an output vector with size d (e.g. 512 or 1024 ), encoding both the context from the encoder and the state of the decoder at time j .",
"Meanwhile, WV R |V| d and b V R |V| are a trainable projection matrix and bias vector.",
"We adjust NMT in different ways as below to let it mimic and control the translator's style.",
"Source Token.",
"In our first approach, we insert the translator token T as the beginning of each input sentence.",
"The translator token is thus assigned with an embedding vector like any other source token.",
"Hence, the embedding sequence E enc for the MT encoder becomes: E enc = [ e ( T ) , e ( x 1 ) , ..., e ( x n )] , (2) where e ( ) is an embedding lookup function.",
"Token Embedding.",
"We also consider adding the embedded translator token e ( T ) to every token embedding in the encoder and/or decoder as follows: E enc = [ e ( T ) + e ( x 1 ) , ..., e ( T ) + e ( x n )] , (3) E dec = [ e ( T ) + e ( y 1 ) , ..., e ( T ) + e ( y m )] .",
"Our motivation is to reinforce the influence of the translator token in MT.",
"Output Bias.",
"Following Michel and Neubig (2018), we add the translator token information to the output bias at the final layer of the decoder (FULL-BIAS variant).",
"Specifically, the method directly modulates the word probability over vocabulary V as follows: P ( y j | y <j , x , T ) = softmax ( WV o j + b V + b T ) .",
"Here, b T R |V| is the translator-specific bias vector, which can be thought of as a translator-token embedding with dimension |V| rather than d .",
"We also explore another variant, named FACT-BIAS , as in Michel and Neubig (2018).",
"This variant instead learns the translator bias through the factorization: b T = Ws T , (6) with parameters W R |V| k and s T R k 1 where k << |V| .",
"Note that while the above methods digest the translator token at an earlier stage, this one consumes translator signals in a late fusion manner.",
"We run experiments with the WIT 3 public dataset of TED talks (Cettolo et al., 2012), with four language pairs: English-German (en-de), English-French (en-fr), English-Italian (en-it) and English-Spanish (en-es).",
"The dataset contains both speaker and translator information for each talk and translation, thus allowing to measure the effects of translators and speakers.",
"We construct training, validation and test sets for each translation direction as follows.",
"We first extract all talks that are translated by the 10 most popular translators (see Figure 1) and split them into parallel sentences.",
"From the data of each translator, we then sample 500 sentences for testing, and, from the remaining data, 90% for training and 10% for validation.",
"All training, testing, and validation sentence pairs are put together and annotated with training and speaker labels.",
"Table 1 shows the data statistics for four language pairs.",
"For preprocessing, we employ Moses (Koehn et al., 2007) tool 2 for tokenization and apply subword-nmt 3 (Sennrich et al., 2016b) to learn subword representations.",
"We choose Transformer (Vaswani et al., 2017) as the baseline and employ Fairseq (Ott et al., 2019) for our implementations.",
"Our Transformer model is comprised of 6 layers of encoder-decoder network, where each layer contains 16 heads with a 2 https://github.com/moses-smt/ mosesdecoder 3 https://github.com/rsennrich/ subword-nmt self-attention hidden state of size 1024 and a feed-forward hidden state of size 4096 .",
"We employ Adam optimizer (Kingma and Ba, 2015) to update model parameters.",
"We warm up the model by linearly increasing the learning rate from 1 10 7 to 5 10 4 for 4000 updates and then decay it with an inverse square root of the rest training steps by a rate of 1 10 4 .",
"We apply a Dropout of 0 .",
"3 for en-de and 0 .",
"1 for both en-fr and en-it.",
"For all MT systems, we load weights from pretrained models to set up a better model initialization.",
"Specifically, we employ models pretrained on WMT data for en-de and en-fr (Ott et al., 2018), and pretrain models for en-it and en-es using our large in-house out-of-domain data, as there are no previous pretrained models for these pairs.",
"We fine-tune models on TED talk data for 10 epochs 4 and select the best model based on the validation loss.",
"During inference, we employ beam search with a beam size of 4 and add a length penalty of 0 .",
"4 .",
"We use the BLEU score (Papineni et al., 2002) to evaluate translation accuracy.",
"We first compare methods to integrate the translator token into the Transformer.",
"Notice that we report performance of the model in two settings:",
"(i) when fed with the oracle translator label (as at training time) and",
"(ii): when fed with randomly assigned labels.",
"Intuitively, if a model really leverages the translator information, we expect to see a performance drop in the randomized setting.",
"Results are shown in Table",
"2. Our findings are as follows.",
"First, it is surprisingly ineffective to add the translator token into the decoder, whether to the input (DEC-EMB ) or to the softmax (FULL-BIAS , FACT-BIAS ).",
"In most cases, our randomization experiment shows that the model simply ignores the information.",
"Second, methods adding the token to the encoder (SRC-TOK , ENC-EMB ) are significantly more effective.",
"Translation accuracy is also consistently better (at most by 0 . 4 BLEU) than with the Transformer baseline, indicating the translator token is useful.",
"For those models, randomizing translator labels results in visible drops in BLEU score (up to 1 . 0 BLEU), indicating that the translator information has an important effect to the model.",
"Following the common practice in evaluating the style imitation (e.g. see (Michel and Neubig, 2018; Hovy et al., 2020)), we train a classifier to predict the translator style of the output of various models.",
"We employ a Logistic Regression classifier based on both uni-gram and bi-gram word features.",
"The classifier, trained on NMT training data, is applied on the outputs of NMT models.",
"Figure 2 shows the results of this experiment.",
"As can be seen, the standard Transformer learns the style of translators only to a limited extent.",
"The style of translation outputs are less consistent with the original translator's style, i.e. accuracy is between 20% and 35% ).",
"Meanwhile, the classification accuracy is significantly higher (up to +12% relative) under SRC-TOK and ENC-EMB .",
"This confirms that explicitly incorporating translator information at the sentence level allows for transferring some of her/his personal traits into the translations.",
"Meanwhile, we notice higher accuracy achieved with the reference translations (e.g. 42% in EN-ES), suggesting there is room for improvement.",
"We analyzed stylistic variations using different translator token labels.",
"In particular, we evaluate model outputs on en-fr after translating the entire test set with the same translator token labels.",
"As in Table 3, translator-informed NMT can produce quite different outputs, resulting in BLEU score variations up to +4 .",
"5 , (i.e. between T 7 and T 3 , en-de translator 22.0 22.5 23.0 23.5 22.31 22.74 23.30 22.37 22.25 22.59 en-fr translator 30 31 32 33 30.93 32.67 32.70 30.51 30.69 30.82 en-it translator 32 33 34 35 32.47 34.24 34.22 32.60 32.45 32.33 en-es translator 34 35 36 37 34.51 35.17 36.48 34.76 34.51 34.61 Base Src-Tok Enc-Emb Dec-Emb Full-Bias Fact-Bias Figure 2: Translator classification accuracy. ENC-EMB yields the best result in most cases. T 8 , T 10 ).",
"We also observe differences in BLEU (albeit smaller) when testing with the WMT 2014 test set.",
"In particular, BLEU score variations are up to +0 .",
"84 between T 7 and T 5 .",
"We also compute the symmetric-BLEU distances between any two of the translators using their predictions for both TED and WMT test set and visualize their heatmaps in Figure",
"3. We observe that a similar BLEU distance between various translators in both test sets.",
"Besides, T7 has a farther distance with others but its gap is closer on WMT than TED.",
"These findings verify the consistency of translator styles in data from different domains.",
"Then, we asked 3 professional translators to grade the quality of translation produced with the labels T 7 and T 3 on the TED talks.",
"The evaluation is on a 1-6 scale (higher is better) on a random sample of 100 sentences.",
"This resulted in average scores of 4 .",
"867 and 4 .",
"860 for T3 and T7, respectively.",
"A similar human evaluation with T 7 and T 5 labels was also run on a random sample of 100 sentences of the WMT 2014 test set.",
"It provided the same conclusion: average scores are very similar: 4 .",
"99 and 5 .",
"0 for T5 and T7 respectively.",
"Both evaluations confirm that there is no difference in translation quality when using different token labels, i.e. the low BLEU score of T7 is only an effect due to stylistic differences.",
"Table 4 shows examples of translations generated with labels T3 and T7.",
"As we can observe, the translations show different use of grammars, words and verbosity.",
"5 5 Note that one could argue that it is not just about style here but also translation fidelity.",
"We thank a reviewer for pointing it out.",
"V e r bo s it y Src: And I'm not the first person to ask this question.",
"T3: Je ne suis pas la premire personne poser cette question.",
"T7: Je ne suis pas la premire poser cette question.",
"Src: And then everybody kind of runs out and goes out.",
"T3: Et puis tout le monde",
"s'enfuit..",
"T7: Tout le monde s'enfuit.",
"W o r d Src: Same story for fairness.",
"T3: Mme histoire pour l'quit.",
"T7: Mme histoire d'quit.",
"G r a mm a r Src: I had just tweeted, Pray for Egypt\". T3: J'avais tweet : Priez pour l'Egypte\". T7: Je venais de tweeter, Priez pour l'Egypte.\""
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"method",
"method",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other",
"method",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"objective",
"other",
"other",
"other",
"method",
"objective",
"other",
"method",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"It is very critical to analyze messages shared over social networks for cyber threat intelligence and cyber-crime prevention.",
"In this study, we propose a method that leverages both domain-specific word embeddings and task-specific features to detect cyber security events from tweets.",
"Our model employs a convolutional neural network (CNN) and a long short-term memory (LSTM) recurrent neural network which takes word level meta-embeddings as inputs and incorporates contextual embeddings to classify noisy short text.",
"We collected a new dataset of cyber security related tweets from Twitter and manually annotated a subset of 2K of them.",
"We experimented with this dataset and concluded that the proposed model outperforms both traditional and neural baselines.",
"The results suggest that our method works well for detecting cyber security events from noisy short text.",
"Twitter has become a medium where people can share and receive timely messages on about anything.",
"People share facts, opinions, broadcast news and communicate with each other through these messages.",
"Due to the low barrier to tweeting, and growth in mobile device usage, tweets might provide valuable information as people often share instantaneous updates such as the breaking news before even being broadcasted in the newswire c.f .",
"Petrovic et al. (2010).",
"People also share cyber security events in their tweets such as zero day exploits, ransomwares, data leaks, security breaches, vulnerabilities etc .",
"Automatically detecting such events might have various practical applications such as taking the necessary precautions promptly as well as creating self-awareness as illustrated in Fig.",
"1. Recently, working with the cyber security Corresponding author.",
"Dear @AppleSupport, we noticed a *HUGE* security issue at MacOS High Sierra.",
"Anyone can login as root with empty password after clicking on login button several times.",
"Are you aware of it @Apple?",
"related text has garnered a lot of interest in both computer security and natural language processing (NLP) communities ( c.f .",
"Joshi et al. (2013); Ritter et al. (2015); Roy et al. (2017)).",
"Nevertheless, detecting cyber security events from tweets pose a great challenge, as tweets are noisy and often lack sufficient context to discriminate cyber security events due to length limits.",
"Recently, deep learning methods have shown to be outperforming traditional approaches in several NLP tasks (Chen and Manning, 2014; Bahdanau et al., 2014; Kim, 2014; Hermann et al., 2015).",
"Inspired by this progress, our goal is to detect cyber security events in tweets by learning domain-specific word embeddings and task-specific features using neural architectures.",
"The key contribution of this work is two folds.",
"First, we propose an end-to-end learning system to effectively detect cyber security events from tweets.",
"Second, we propose a noisy short text dataset with annotated cyber security events for unsupervised and supervised learning tasks.",
"To our best knowledge, this will be the first study that incorporates domain-specific meta-embeddings and contextual embeddings for detecting cyber security events.",
"In the subsequent sections, we address the challenges to solve our task.",
"The proposed system overview is illustrated in Fig.",
"2. Preprocessing Normalization Tokenization word2vec GloVe fastText huge flaw meltdown meltdown ...",
"Word embedding methods might capture different semantic and syntactic features about the same word.",
"To exploit this variety without losing the semantics, we learn meta-embeddings for words.",
"Word Embeddings.",
"Word2vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014), and fastText (Joulin et al., 2016; Bojanowski et al., 2016) are trained for learning domain specific word embeddings on the unlabeled tweet corpus.",
"Meta-Encoder.",
"Inspired by Yin and Schutze (2015) we learn meta-embeddings for words with the aforementioned word embeddings.",
"We use a Convolutional Autoencoder (Masci et al., 2011) for encoding 3 xD size embeddings to a 1 xD dimensional latent variable and to reconstruct the original embeddings from this latent variable.",
"Both encoder and decoder are comprised of 2 convolutional layers where 32 neurons are used on each.",
"The encoder part is shown in Fig.",
"3. We argue that this network learns a much simpler mapping while capturing the semantic and syntactic relations from each of these embeddings, thus leading to a richer word-level representation.",
"Another advantage of learning meta-embeddings for words Meta-Embedding Vector Convolutional Features 3xD Word Embeddings Figure 3: Convolutional encoder as a feature extractor.",
"is that the proposed architecture alleviates the Out-of-Vocabulary (OOV) embeddings problem, as we still get embeddings from the fastText channel, in contrast to GloVe and word2vec, where no embeddings are available for OOV words.",
"task-specific features from tweets.",
"LDA.",
"Latent Dirichlet Allocation (LDA) is a generative probabilistic model to discover topics from a collection of documents (Blei et al., 2003).",
"LDA works in an unsupervised manner and learns a fi-nite set of categories from a collection, thus represents documents as mixtures of topics.",
"We train an LDA model to summarize each tweet by using the topic with the maximum likelihood e.g .",
"with the topic vulnerability for the tweet in Fig",
"1. NER.",
"Named Entity Recognition (NER) tags the specified named entities from raw text into pre-defined categories.",
"Named entities could be more general categories such as people, organizations, or specific entities can be learned by creating a dataset containing specific entity tags.",
"We employ an automatically annotated dataset that contains entities from cyber security domain (Bridges et al., 2013) to train our Conditional Random Field model using handcrafted features, i.e. , uni-gram, bi-gram, and gazetteers.",
"The dataset comprises of 850K tokens that contain named entities such as Relevant Term', Operating System',Hardware', Software', Vendor', in the standard IOB-tagging format.",
"Our NER model tags password as Relevant Term' and Apple as Vendor' for the tweet in Fig",
"1. IE.",
"Uncovering entities and the relations between those entities is an important task for detecting cyber security events.",
"In order to address this we use Information Extraction (IE), in particular OpenIE annotator(Angeli et al., 2015) from the Stanford CoreNLP (Manning et al., 2014).",
"Subsequently, we extract relations between noun phrases with the following dependency triplet (cid:104) arg 1 , rel, arg 2 (cid:105) , where arg 1 , arg 2 denote the arguments and rel represents an implicit semantic relation between those arguments.",
"Hence, the following triplet is extracted from the tweet in Fig. 1, (cid:104) we, noticed, huge security issue (cid:105) .",
"Contextual-Encoder.",
"We use the outputs of LDA, NER and IE algorithms to obtain a combined vector representation using meta-embeddings described in Sec. 2.1.",
"Thus, contextual embeddings are calculated as follows 1 .",
"where function extracts contextual embeddings and denotes a tweet, f , , and represent meta-embedding, LDA, NER, and IE functions, respectively.",
"Lastly, N and M denote the output tokens.",
"Inspired by the visual question answering task (Antol et al., 2015), where different modalities are combined by CNNs and RNNs, we adopt a similar network architecture for our task.",
"Prior to training, and inference we preprocess, normalize and tokenize each tweet as described in Sec.",
"3. CNN.",
"We employ a CNN model similar to that of (Kim, 2014) where we feed the network with static meta-embeddings.",
"Our network is comprised of one convolutional layer with varying filter sizes, that is 2 , 3 , 5 .",
"All tweets are zero padded to the maximum tweet length.",
"We use ReLU as activation and global max pooling at the end of CNN.",
"RNN.",
"We use a bi-directional LSTM (Hochreiter and Schmidhuber, 1997) and read the input in both directions and concatenate forward and backward hidden states to encode the input as a sequence.",
"Our LSTM model is comprised of a single layer and employs 100 neurons.",
"Data Collection.",
"We collected 2 .",
"5 M tweets using the Twitter's streaming API over a period from 2015-01-01 to 2017-12-31 using an initial 1 We used zero vectors for the non-existent relations.",
"set of keywords, henceforth referred as seed keywords to retrieve cyber security related tweets.",
"In particular, we use the main group names of cyber security taxonomy described in Le Sceller et al. (2017) as seed keywords e.g. denial of ser-vice', botnet', malware', vulnerability', phish-ing', data breach' to retrieve relevant tweets.",
"Using seed keywords is a practical way to filter out noise considering sparsity of cyber security related tweets in the whole tweet stream.",
"After the initial retrieval, we use langid.py (Lui and Baldwin, 2012) to filter out non-English tweets.",
"Data Preprocessing.",
"We substitute user handles with $ mention $ , and hyperlinks with $ url $ .",
"We remove emoticons and reserved keyword RT which denotes retweets.",
"We substitute hashtags by removing the prefix # character.",
"We limit characters that repeat more than two times, remove capitalization and tokenize tweets using the Twitter tokenizer in nltk library.",
"We normalize nonstandard forms, i.e .",
"writing cu tmrrw instead of see you tomorrow .",
"Although there are several reasons for that, the most prominent one is that people tend to mimic prosodic effects in speech (Eisen-stein, 2013).",
"To overcome this, we use lexical normalization, where we substitute OOV tokens with in-Vocabulary (IV) standard forms, i.e .",
"a standard form available in a dictionary.",
"In particular we use UniMelb (Han et al., 2012), UTDallas (Liu et al., 2011) datasets.",
"Lastly, we remove identical tweets and check the validity by removing tweets with less than 3 non-special tokens.",
"Data Annotation.",
"We instructed cyber security domain experts for manual labelling of the dataset.",
"Annotators are asked to provide a binary label for whether there is a cyber security event in the given tweet or not.",
"Annotators are told to skip tweets if they are unsure about their decisions.",
"Finally, we validated annotations by only accepting annotations if at least 3 among 4 annotators agreed on.",
"Therefore, we presume the quality of attained ground truth labels is dependable.",
"Overall, 2 K tweets are annotated.",
"Dataset Statistics.",
"After preprocessing, our initial 2 .",
"5 M tweet dataset is reduced to 1 .",
"7 M tweets where 2 K of them are labeled 2 .",
"The labeled dataset is somewhat balanced as there are 843 event-related tweets and 1157 non-event tweets.",
"The training and testing sets have 1600 and 400 samples, respectively.",
"2 Available at http://stmai.github.io/cydec Training.",
"We used Keras with Tensorflow back-end in our neural models.",
"For fastText and word2vec embeddings we used Gensim, and for GloVe we used glove-python library.",
"For training the word embeddings, we use the entire tweet text corpus and obtain 100 dimensional word embeddings.",
"We set word2vec and fastText model's alpha parameter to 0 .",
"025 and window size to 5 .",
"For GloVe embedding model, we set the learning rate to 0 .",
"01 , alpha to 0 .",
"75 and maximum count parameter to 100 .",
"For embedding models, we determined the minimum count parameter to 5 , culminating in the elimination of infrequent words.",
"Consequently, we have 3 , 100 -dimensional word embedding tensor in which first, second and third channels consist of word2vec, fastText and GloVe embeddings respectively.",
"We then, encode these 3 x 100 dimensional embeddings into 1 x 128 dimensional representations by using our Meta-Encoder.",
"We train our two channel architecture that combines both LSTM and CNN with 2 inputs: meta-embeddings and contextual embeddings.",
"We use meta-embeddings for feature learning via LSTM and CNN, and their feature maps are concatenated with contextual embeddings in the Fusion Layer.",
"In the end, fully connected layers and a softmax classifier are added, and the whole network is trained to minimize binary cross entropy loss with a learning rate of 0.01 by using the Adam optimizer (Kingma and Ba, 2014).",
"3 Baselines.",
"To compare with our results, we implemented the following baselines: SVM with BoW : We trained an SVM classifier using Bag-of-words (BoW) which provides a simplified representation of textual data by calculating the occurrence of words in a document.",
"SVM with meta-embeddings : We trained an SVM classifier with the aforementioned meta-embeddings.",
"CNN-Static : We used Kim (2014)'s approach using word2vec embeddings.",
"Results.",
"Table 1 summarizes the overall performance of each method.",
"To compare the models, we used four different metrics: accuracy, recall, precision and F1-score.",
"Each reported result is the mean of a 5-fold cross validation experiment.",
"It is clear that our method outperforms various simple and neural baselines.",
"Also, in Table 2, we provide results of our proposed model along with the ground-truth annotations.",
"We also provide results with the different combinations of contextual fea-3 See supplementary for hyperparameter choices.",
"Human Study.",
"8 different subjects are thoroughly instructed about what is considered as a cyber security event and individually asked to label 50 randomly selected tweets from the test set.",
"The results are provided in Table",
"3. Error Analysis.",
"In order to understand how our system performs, we randomly select a set of erroneously classified instances from the test dataset.",
"Type I Errors.",
"Our model identifies this tweet as an event uk warned following breach in air pollution regulation $url$ whereas it is clearly about the a breach of a regulation.",
"We hypothesize that this is due to the lack of sufficient training data.",
"Following tweet is also identified as an event wannacry ransomware ransomwareattack ransomwarewannacry malware $url$.",
"We suspect that the weights of multiple relevant terms deceive the model.",
"Type II Errors.",
"Our model fails to identify the following positive sample as an event.",
"For playsta-tion network was the target of miraibotnet ddos attack guiding tech rss news feed search our model fails to recognize the 'miraibotnet' from the tweet.",
"We suspect this is due to the lack of hashtag decomposition; otherwise, the model could recognize mirai' and botnet' as separate words.",
"Discussions.",
"Cyber security related tweets are complicated and analysing them requires in-depth domain knowledge.",
"Although human subjects are properly instructed, the results of the human study indicate that our task is challenging and humans can hardly discriminate cyber security events amongst cyber security related tweets.",
"To further investigate this, we plan to increase the number of human subjects.",
"One limitation of this study is that we do not consider hyperlinks and user handles which might provide additional information.",
"One particular problem we have not addressed in this work is hashtag decomposition.",
"Error analysis indicates that our model might get confused by challenging examples due to ambiguities and lack of context.",
"4 See supplementary for feature combination details.",
"Event detection on Twitter is studied extensively in the literature (Petrovic et al., 2010; Sakaki et al., 2010; Weng and Lee, 2011; Ritter et al., 2012; Yuan et al., 2013; Atefeh and Khreich, 2015).",
"Banko et al. (2007) proposed a method to extract relational tuples from web corpus without requiring hand labeled data.",
"Ritter et al. (2012) proposed a method for categorizing events in Twitter.",
"Luo et al. (2015) suggested an approach to infer binary relations produced by open IE systems.",
"Recently, Ritter et al. (2015) introduced the first study to extract event mentions from a raw Twitter stream for event categories DDoS attacks, data breaches, and account hijacking.",
"Chang et al. (2016) proposed an LSTM based approach which learns tweet level features automatically to extract events from tweet mentions.",
"Lately, Le Sceller et al. (2017) proposed a model to detect cyber security events in Twitter which uses a taxonomy and a set of seed keywords to retrieve relevant tweets.",
"Tonon et al. (2017) proposed a method to detect events from Twitter by using semantic analysis.",
"Roy et al. (2017) proposed a method to learn domain-specific word embeddings for sparse cyber security text.",
"Prior art in this direction (Ritter et al., 2015; Chang et al., 2016) focuses on extracting events and in particular predicting the events' posterior given the presence of particular words.",
"Le Sceller et al. (2017); Tonon et al. (2017) focus on detecting cyber security events from Twitter.",
"Our work distinguishes from prior studies as we formulate cyber security event detection problem as a classification task and learn meta-embeddings from domain-specific word embeddings while incorporating task-specific features and employing neural architectures.",
"We introduced a novel neural model that utilizes meta-embeddings learned from domain-specific word embeddings and task-specific features to capture contextual information.",
"We present a unique dataset of cyber security related noisy short text collected from Twitter.",
"The experimental results indicate that the proposed model outperforms the traditional and neural baselines.",
"Possible future research direction might be detecting cyber security related events in different languages.",
"We would like to thank Merve Nur Ylmaz and Benan Bardak for their invaluable help with the annotation process on this project.",
"This research is fully supported by STM A.S.",
"Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of the sponsor."
] | [
"abstain",
"objective",
"method",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"method",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"This paper presents Pyramid , a novel layered model for Nested Named Entity Recognition (nested NER).",
"In our approach, token or text region embeddings are recursively inputted into L flat NER layers, from bottom to top, stacked in a pyramid shape.",
"Each time an embedding passes through a layer of the pyramid, its length is reduced by one.",
"Its hidden state at layer l represents an l -gram in the input text, which is labeled only if its corresponding text region represents a complete entity mention.",
"We also design an inverse pyramid to allow bidirectional interaction between layers.",
"The proposed method achieves state-of-the-art F1 scores in nested NER on ACE-2004, ACE-2005, GENIA, and NNE, which are 80.27, 79.42, 77.78, and 93.70 with conventional embeddings, and 87.74, 86.34, 79.31, and 94.68 with pre-trained contextualized embeddings.",
"In addition, our model can be used for the more general task of Overlapping Named Entity Recognition.",
"A preliminary experiment confirms the effectiveness of our method in overlapping NER.",
"Named Entity Recognition (NER), which aims at identifying text spans as well as their semantic classes, is an essential and fundamental Natural Language Processing (NLP) task.",
"It is typically modeled as a sequence labeling problem, which can be effectively solved by RNN-based approach (Huang et al., 2015; Lample et al., 2016; Ma and Hovy, 2016).",
"However, such formulation oversim-plifies the problem and is based on a very strong as-sumption that entity mentions do not overlap with each other, which is certainly not the real case.",
"In real-world languages, entities might be deeply nested or overlapping, calling for better models to handle such complexity.",
"Many previous studies have focused on recognizing nested entity mentions.",
"A few works use proprietary structures, such as constituency graph (Finkel and Manning, 2009) or hypergraph (Lu and Roth, 2015; Muis and Lu, 2017), to explicitly capture nested entities.",
"These structures, however, do not produce satisfactory performance results.",
"Some other works handle nested entity mentions in a layered model, which employs multiple flat NER layers(Alex et al., 2007; Ju et al., 2018; Fisher and Vlachos, 2019).",
"Each layer is usually responsible for predicting a group of nested entities having the same nesting level.",
"Unfortunately, conventional layered schemes do not address the more general overlapping setting, and also suffer from layer disorientation .",
"The latter is a problem arising when the model might output a nested entity from a wrong layer.",
"For example, entity U.N. Ambassador is labeled as a second-layer entity (containing U.N. and Ambassador).",
"Thus, prediction of it from the first layer is considered an error.",
"Generally, a false positive prediction with the correct span and class but from a wrong layer produces an over-estimated loss (despite the correct entity itself), causing the entire model reluctant to predict positive, and eventually harming the recall.",
"This problem occurs quite often, as the target layer for a nested entity is determined by the nesting levels of its composing entities rather than by its own semantics or structure.",
"A recent study on a layered model (Ju et al., 2018) also reports the error propagation issue, i.e. errors in the first few layers are propagated to the next layers.",
"In this paper, we propose a novel layered model called Pyramid for nested NER.",
"The model consists of a stack of inter-connected layers.",
"Each layer l predicts whether a text region of certain length l , i.e. an l -gram, is a complete entity mention.",
"Between each two consecutive layers of our model, the hidden state sequence is fed into a convolutional network with a kernel of two, allowing a text region embedding in the higher layer to aggregate two adjacent hidden states from the lower layer, and thus forming the pyramid look (as the length of the sequence in the higher layer is one token shorter than the lower layer).",
"Such process enumerates all text spans without breaking the sequence structure.",
"Figure 1 shows a sentence containing eight nested entities being fed into the Pyramid model.",
"These entities are separated into 5 layers according to their number of tokens.",
"The job of each decoding layer is simple and clear it needs to output entity type when it encounters a complete entity.",
"In the above scheme, the higher decoding layer relies on the output of the lower decoding layer in a bottom-up manner (from layer 1 to 5 in Figure 1).",
"It is also desirable to construct an inverse pyramid , where a lower decoding layer receives input from a higher layer (from layer 5 to 1), allowing information to flow in the opposite way.",
"Pyramid outperforms the previous methods in nested NER while addressing all the aforementioned problems with layered model.",
"First, it can be used for more general overlapping NER.",
"Second, it prevents layer disorientation as an l -length entity in the input is only predicted on layer l .",
"Third, it mitigates the error propagation problem, as predictions in one layer do not dictate those in other layers.",
"Our main contributions are as follows: We propose a novel layered model called Pyramid for nested NER.",
"The model recognizes entity mentions by its length without layer disorientation and error propagation.",
"The proposed model can also address the more general overlapping NER task.",
"Besides the normal pyramid, we design an inverse pyramid to allow bidirectional interactions between neighboring layers.",
"We evaluate the proposed method on four datasets, namely ACE-2004 (Doddington et al., 2004), ACE-2005 (Walker et al., 2006), GENIA (Kim et al., 2003) and NNE (Ring-land et al., 2019).",
"The results suggest that our model significantly outperforms the previous methods, and achieves state-of-the-art performance with and without pre-trained language model embeddings (ALBERT (Lan et al., 2019), BERT (Devlin et al., 2019), and Flair (Akbik et al., 2018)).",
"Additionally, we construct a small dataset that contains overlapping but non-nested entities.",
"Preliminary results on this dataset show the potential of our model for handling overlapping entities.",
"Existing approaches for recognizing nonoverlapping named entities usually treat the NER task as a sequence labeling problem.",
"Various sequence labeling models achieve decent performance on regular NER, including probabilistic graph models such as Conditional Random Fields (CRF) (Ratinov and Roth, 2009), and deep neural networks like recurrent neural networks (RNN) and convolutional neural networks (CNN).",
"Recently, LSTM-CRF has become a standard architecture for sequence labeling tasks.",
"Huang et al. 2015 uses hand-crafted spelling features; Ma and Hovy 2016 uses CNN to capture character features; Lample et al. 2016 utilizes LSTM instead.",
"These sequence labeling models can only detect non-overlapping entities and fail to handle nested ones.",
"Nested NER has been intensively studied recently.",
"Finkel and Manning 2009 proposes a CRF-based constituency parser and use a constituency tree to represent a sentence.",
"Lu and Roth 2015 introduces the idea of hypergraph which allows edges to connect to multiple nodes to represent nested entities.",
"Muis and Lu 2017 uses a multigraph representation and introduces the notion of mention separator for nested entity detection.",
"Wang and Lu 2018 presents a neural segmental hypergraph model using neural networks to obtain distributed feature representation.",
"Katiyar and Cardie 2018 also adopts a hypergraph-based formulation but instead uses neural networks to learn the structure.",
"Lin et al. 2019 borrows the Anchor Region Networks (ARNs) architecture to predict nested entity mentions.",
"All the above works design proprietary structures to explicitly capture nested entities.",
"Layered models are common solution for nested NER.",
"Alex et al. 2007 stacks multiple flat NER layers, where the first recognizes the innermost (or outermost) mentions, then the following taggers are used to incrementally recognize next-level mentions.",
"Ju et al. 2018 dynamically stacks multiple flat NER layers and extract outer entities based on the inner ones.",
"Fisher and Vlachos 2019 can also be considered as a layered model with a novel neural network architecture.",
"Our method differs from the above layered models in that (1) it is able to handle overlapping NER, and (2) it does not suffer the layer disorientation or error propagation problem.",
"Exhaustive region classification model enumerates all possible regions of the input sentence.",
"Byrne 2007; Xu et al. 2017; Sohrab and Miwa 2018; Zheng et al. 2019 aggregate all possible adjacent tokens into potential spans.",
"These spans, together with their left and right contexts, are fed into a classifier a maximum entropy tagger (Byrne, 2007) or a neural network (Xu et al., 2017; Sohrab and Miwa, 2018; Zheng et al., 2019).",
"Unfortunately, all these works fail to take advantage of the dependencies among nested entities, but perform prediction merely on individual text fragments, thus limiting the performance.",
"Luan et al. 2019 uses propagation layers to capture relation and coreference between spans.",
"Our method also potentially enumerates all possible spans, while maintaining the sequence structure, which leads to better performance.",
"Pre-trained word embeddings, e.g. Glove (Pen-nington et al., 2014), have proved to be effective in improving NER performance.",
"Recently, with the rapid development of language model techniques, the performance of NER models has been pushed to a new height.",
"The recent pre-trained language model embeddings include ELMo (Peters et al., 2018), Flair (Akbik et al., 2018), BERT (Devlin et al., 2019), ALBERT (Lan et al., 2019), etc.",
"In our experiments, we leverage these embeddings and observe significant performance improvements.",
"In this section, we describe the proposed model and its architecture, which includes an encoder, a pyramid, an inverse pyramid, and a logits layer.",
"Figure 2 shows a toy model with a pyramid (5 bottom-up decoding layers in blue) and its inverse counterpart (5 top-down layers in pink).",
"As shown in the blue pyramid, each decoding layer contains a convolutional network with a kernel of two to reduce the sequence length in its output, so that all possible mention spans can potentially be enumerated.",
"The top-down inverse pyramid will be described later.",
"We shall use the following notations: Embed the embedding layer LSTM the bidirectional LSTM layer LM the language model embedder Linear the fully-connected layer LayerNorm layer normalization The mentioned layers with the same notation, superscript and subscript share the same parameters.",
"For the sake of brevity, we omit the dropout layer in this section.",
"The input is a T -length textual sentence.",
"After the encoder, embedding sequences are recursively fed into flat NER decoding layers, producing L tag sequences in the IOB2-format 1 with length T , T 1 , ..., T L + 1 , where L is the number of decoding layers.",
"Note we only label n-grams that are complete mentions, so I{ class } usually does not appear.",
"Given the running example in Figure 1, input sentence Former U.N. Ambassador Jeane Kirkpatrick ... contains eight entity mentions, namely (U.N., ORG), (Ambassador, ROLE), (Jeane, FIRST), (Kirkpatrick, NAME), (U.N. Ambassador, ROLE), (Jeane Kirkpatrick, PER), (Former U.N. Ambassador, ROLE), and (Former U.N. Ambassador Jeane Kirkpatrick, PER).",
"The output from the pyramid would contain layered tag sequences ( l = 1 , . . . , 5 ) as follows: l=5: B-PER ... l=4: O O ... l=3: B-PER O O ... l=2: O B-ROLE O B-PER ... l=1: O B-ORG B-ROLE B-FIRST B-NAME ...",
"Unfortunately, the above layered sequences cannot include any entities of more than 5 tokens.",
"Generally, a stack of L layers cannot predict entities containing more than L tokens!",
"To address this issue, we propose a remedy solution : to predict all entities longer than L tokens on the topmost flat NER layer.",
"Specifically, the bottom L 1 layers predict B{ class } tags for 1 Label the first token of a mention as B{ class } ; other tokens inside a mention as I{ class } ; tokens outside any mention as O .",
"complete entity mentions; and the topmost layer predicts both B{ class } and I{ class } tags.",
"This stipulates that when two entities are nested, if one of them is longer than L , the other one cannot be longer than L 1 .",
"In the running example, suppose we had only 4 decoding layers ( l = 1 , . . . , 4 ), then the longest mention (Former U.N. Ambassador Jeane Kirkpatrick) would be recognized in the fourth decoding layer as following: l=4: B-PER I-PER ... l=3: B-PER O O ... l=2: O B-ROLE O B-PER ... l=1: O B-ORG B-ROLE B-FIRST B-NAME ...",
"With the remedy solution, our model is able to handle entities longer than L .",
"As most entity mentions are not too long (99% are no longer than 15 tokens), and it is even rarer for both two nested mentions to be longer than 15, we set the default number of flat decoder layers to L = 16 to minimize the impact of the remedy.",
"Parameter L can be tuned for balance between accuracy and inference speed.",
"We represent each word by concatenating character sequence embeddings and word embeddings.",
"First, the character embeddings are dynamically generated by a LSTM (Lample et al., 2016) to capture the orthographic and morphological features of the word.",
"It is suggested that with the introduction of character embeddings the model can better handle out-of-vocabulary (OOV) words.",
"Second, the word embeddings are initialized with pre-trained word vectors.",
"For OOV words, we randomly initialize an embedding for [UNK], which is tuned during training.",
"The concatenated character and word embeddings are fed into a bidirectional LSTM encoding layer to further leverage contextual information.",
"Formally, given the input sentence x : x char = LST M char ( Embed char ( x )) (1) x word = Embed word ( x ) (2) x = LST M enc ([ x char ; x word ]) (3) For better performance, we adopt the popular pre-trained contextualized language model embeddings, such as BERT (Devlin et al., 2019).",
"These embeddings are concatenated to the output of LST M enc , followed by a linear layer to reduce the embedding dimension.",
"i.e.: x = Linear enc ([ x ; LM ( x )]) (4) 3.3 The Pyramid The pyramid recognizes entities in a bottom-up manner.",
"It consists of L decoding layers, each of which corresponds to a flat named-entity recognizer.",
"Each decoding layer has two main components, a LSTM and a CNN with a kernel of two.",
"In layer l , the LSTM recognizes l -length entity mentions, and the CNN aggregates two adjacent hidden states and then feeds the text region embeddings enriched with layer information to the higher ( l + 1 -th) decoding layer.",
"By passing through l decoding layers ACE-2004 ACE-2005 GENIA NNE train dev test train dev test train dev test train dev test sentences # total 6,198 742 809 7,285 968 1,058 15,022 1,669 1,855 43,457 1,989 3,762 # nested 2,718(44%) 294(40%) 388(48%) 2,797(38%) 352(36%) 339(32%) 3,222(21%) 328(20%) 448(24%) 28,606(66%) 1292(65%) 2489(66%) entities # total 22,195 2,514 3,034 24,700 3,218 3,029 47,006 4,461 5,596 248,136 10,463 21,196 # nested 10,157(46%) 1,092(43%) 1,417(47%) 9,946(40%) 1,191(37%) 1,179(39%) 8,382(18%) 818(18%) 1212(22%) 20,6618(83%) 8,487(81%) 17,670(83%) max length 57 35 43 49 31 27 20 20 15 16 15 15 Table 1: Statistics of the datasets used in the experiments.",
"with l 1 CNNs, each hidden state (at t ) actually represents the region of l original tokens (from t to t + l 1 ).",
"Therefore, the l -th decoding layer enumerates text spans of length l .",
"And all these L layers together produce all possible entity spans.",
"One may notice that the pyramid structure intrinsically provides useful inductive bias: The higher the layer, the shorter the input sequence, forcing the model to capture high-level information for predicting long entities and low-level information for predicting short entities.",
"Moreover, as the length of each span representation is reduced to one on its target decoding layer, the prediction task on each layer is simple and clear to predict entities whose representation length is one in this layer.",
"Since the input of the first decoding layer is from the encoder while the others are from the output of their lower neighboring layers, the input bias and scale may differ among layers.",
"This is detrimental to training.",
"To address this issue, we apply layer normalization (Ba et al., 2016) before feeding the region embeddings into the decoding LSTM.",
"Each decoding layer in the bottom-up pyramid takes into account layer information from lower layers.",
"However, a layer cannot get feedback from its higher neighbors, which could potentially help.",
"Moreover, for long entities, their embeddings need to go through numerous lower layers and tend to lose important information.",
"Therefore, we add an inverse pyramid, which recognizes entity mentions in a top-down manner, to address the above issues.",
"While in the pyramid, sequences pass through a CNN to reduce sequence length before being fed into the higher decoding layer, in the inverse pyramid, however, we use an-other CNN with zero paddings and a kernel of two to reconstruct the lower-level text region embeddings.",
"Specifically, to reconstruct the text region embeddings at the l 1 -th decoding layer, we concatenate the hidden states of the l -th normal and inverse decoding layers, and feed it to the inverse CNN (see bottom-left pink box in Figure 2).",
"There are two benefits for using the top-down inverse pyramid: (1) It gives the feedback from higher decoding layers, allowing bidirectional interaction between neighboring decoding layers; (2) Since the inverse pyramid needs to reconstruct lower-level sequence, it requires the pyramid to retain as much original information as possible, thereby mitigating the information loss for long entities.",
"L",
"Finally, with the concatenation of the hidden states of both the normal and inverse decoding layers, we use a feed-forward layer to predict their class: logits l = Linear dec ([ h l ; h (cid:48) l ]) .",
"We evaluate our model on four nested entity recognition corpora: ACE-2004 (Doddington et al., 2004), ACE-2005 (Walker et al., 2006), GENIA (Kim et al., 2003), and NNE (Ringland et al., 2019).",
"For ACE-2004 and ACE-2005, we adopt the train/dev/test split of Lu and Roth 2015 2 , as 2 https://statnlp-research.github.io/ publications/ Setting Value batch size 32,32,64,32 optimizer SGD momentum 0.9 learning rate (lr) 0.01 dropout rate 0.3,0.4,0.4,0.2 hidden dim 200 # stacked layers 16 token emb dim 100,100,200,100 char emb dim 30,30,60,30 gradient clipping 5.0 Table 2: Hyperparameters used in our experiments.",
"used in most previous studies.",
"For GENIA, we use GENIAcorpus3.02p 3 , and follow the train/dev/test split of previous works (Finkel and Manning, 2009; Lu and Roth, 2015) i.e.: (1) split first 81%, subsequent 9%, and last 10% as train, dev and test set, respectively; (2) collapse all DNA, RNA, and protein subtypes into DNA, RNA, and protein, keeping cell line and cell type, and (3) removing other entity types, resulting in 5 entity types.",
"For NNE, we keep the original dataset split and pre-processing.",
"The statistics of each dataset are shown in Table 1.",
"We denote by Pyramid-Basic the model using the normal bottom-up pyramid only; and by Pyramid-Full the one with both the normal and inverse pyramids.",
"We try to use as similar settings as possible on all datasets, and Table 2 describes the settings used in our experiments.",
"For the word embeddings, we use 100-dimensional GloVe word embeddings trained on 6B tokens 4 as initialization.",
"We disable updating the word embeddings during training.",
"Besides, character-based embeddings are generated by a LSTM (Lam-ple et al., 2016).",
"We set the hidden dimension to 200 (100 for each direction in bidirectional LSTM).",
"We use inverse time learning rate decay: lr = lr/ (1+ decay rate steps / decay steps ) , with decay rate 0.05 and decay steps 1000.",
"All results are averaged on 4 runs to ensure reproducibility.",
"The GENIA corpus significantly differs from the others in its distribution, as it belongs to medical domain.",
"So for GENIA, we initialize word embeddings with word vectors pre-trained on biomedical 3 http://www.geniaproject.org/ genia-corpus/pos-annotation 4 https://nlp.stanford.edu/projects/ glove/ corpus (Chiu et al., 2016) 5 , which are in 200 dimensions.",
"We also evaluate our method with pre-trained language model embeddings: [Flair] (Akbik et al., 2018): Pre-trained contextualized character-level embeddings.",
"Here, we use the concatenation of news-forward and news-backward , forming embeddings of dimension 4096.",
"For GENIA, we use pubmed-forward and pubmed-backward .",
"[BERT] (Devlin et al., 2019): Transformer based pre-trained contextual word embeddings.",
"Here we use the bert-large-uncased checkpoint, with embeddings of dimension 1024.",
"For each token, we generate the contextualized word embedding by averaging all BERT subword embeddings in the last four layers without fine-tuning.",
"For GENIA, we use BioBERT v1.1 (Lee et al., 2020) 6 .",
"[ALBERT] (Lan et al., 2019): A lite BERT with shared transformer parameters.",
"Here we use the albert-xxlarge-v2 checkpoint, with embeddings of dimension 4096.",
"For each token, we average all ALBERT subword embeddings in the last four layers without fine-tuning.",
"We generate Flair embeddings with the library provided by Akbik et al. 2019 7 .",
"We use the implementation by Wolf et al. 2019 8 to generate BERT and ALBERT embeddings.",
"Table 3 presents the comparison of our model with existing methods.",
"Our method outperforms all previous methods by a large margin.",
"With conventional word embeddings, our method achieves 80.27, 79.42, 77.78, and 93.70 in terms of F1-score, 5 https://github.com/cambridgeltl/ BioNLP-2016 6 https://github.com/naver/ biobert-pretrained 7 https://github.com/zalandoresearch/ flair 8 https://github.com/huggingface/ transformers ACE-2004 ACE-2005 GENIA NNE Model P R F1 P R F1 P R F1 P R F1 Finkel and Manning 2009 ---75.4 65.9 70.3 -Lu and Roth 2015 70.0 56.9 62.8 66.3 59.2 62.5 74.2 66.7 70.3 -Muis and Lu 2017 72.7 58.0 64.5 69.1 58.1 63.1 75.4 66.8 70.8 -Xu et al. 2017 68.2 54.3 60.5 67.4 55.1 60.6 ---Katiyar and Cardie 2018 73.6 71.8 72.7 70.6 70.4 70.5 79.8 68.2 73.6 -Ju et al. 2018 --74.2 70.3 72.2 78.5 71.3 74.7 -Wang et al. 2018 74.9 71.8 73.3 74.5 71.5 73.0 78.0 70.2 73.9 77.4 70.1 73.6 Wang and Lu 2018 78.0 72.4 75.1 76.8 72.3 74.5 77.0 73.3 75.1 91.8 91.0 91.4 Sohrab and Miwa 2018 ---93.2 64.0 77.1 -Fisher and Vlachos 2019 --75.1 74.1 74.6 ---Lin et al. 2019 --76.2 73.6 74.9 75.8 73.9 74.8 -Strakova et al. 2019 -77.1 -75.4 -76.4 --Pyramid-Basic 80.83 78.86 79.83 79.27 79.37 79.32 77.91 77.20 77.55 93.37 93.91 93.64 Pyramid-Full 81.14 79.42 80.27 80.01 78.85 79.42 78.60 77.02 77.78 93.44 93.95 93.70 LM-based Xia et al. 2019 [ELMO] 81.7 77.4 79.5 79.0 77.3 78.2 ---Fisher and Vlachos 2019 [ELMO] --79.7 78.0 78.9 ---Fisher and Vlachos 2019 [BERT] --82.7 82.1 82.4 ---Shibuya and Hovy 2019 [BERT] --83.0 82.4 82.7 76.3 74.7 75.5 -Luan et al. 2019 [ELMO] -84.7 -82.9 -76.2 -Strakova et al. 2019 [BERT] -84.3 -83.4 -78.2 -Strakova et al. 2019 [BERT+Flair] -84.4 -84.3 -78.3 --Pyramid-Basic [BERT] 86.08 86.48 86.28 83.95 85.39 84.66 79.45 78.94 79.19 93.97 94.79 94.37 Pyramid-Basic [BERT+Flair] 87.01 86.55 86.78 84.90 86.08 85.49 79.98 78.51 79.24 93.97 94.98 94.47 Pyramid-Basic [ALBERT] 86.54 87.44 86.99 85.20 86.56 85.87 80.07 77.60 78.82 94.11 94.91 94.51 Pyramid-Basic [ALBERT+Flair] 86.63 87.15 86.89 85.10 87.22 86.15 78.48 79.39 78.93 94.18 94.79 94.48 Pyramid-Basic [ALBERT+BERT] 87.65 87.74 87.70 85.24 87.32 86.27 80.12 77.82 78.95 94.28 94.99 94.63 Pyramid-Full [BERT+Flair] ---80.31 78.33 79.31 -Pyramid-Full [ALBERT+BERT] 87.71 87.78 87.74 85.30 87.40 86.34 --94.30 95.07 94.68 Table 3: Results of nested NER.",
"even compatible with some LM-based baselines.",
"A close one is from Strakova et al. 2019, which employs many extra features including input forms, lemmas and POS, whereas our method does not.",
"Additionally, our method brings much higher recall values than the other methods.",
"With pre-trained language model embeddings, specifically with ALBERT+BERT for ACE-2004, ACE-2005, NNE and with BERT+Flair for GENIA, our model achieves state-of-the-art F1 scores: 87.74, 86.34, 79.31, and 94.68 respectively.",
"We evaluate our method with different L on all datasets.",
"Due to space limit, we only present the results of ACE-2005 in Table 4.",
"The findings on the other datasets are similar.",
"Results From All Layers We report in Table 4 the detailed results for all entity lengths while tuning L on ACE-2005.",
"Obviously 1-word and 2-word entities account for the majority of entities (77%), where we achieve competitive results.",
"Longer entities see reductions in performance.",
"However, due to our remedy strategy, entities longer than L are still recognized with acceptable performance.",
"Note R(N) is the recall of nested entities, i.e. for layer l , entities nested with other entities shorter than l are also counted in.",
"Inference Speed Table 4 also shows the inference speed with different L for the basic and full models.",
"Although the basic model does not perform as good as the full model, it is significantly faster.",
"Since the time complexity of our method is O ( T L ) with T being the number of tokens and L the number of stacked layers, we can further speed up the inference by using smaller L value (e.g. L = 8 or 4 ), while achieving F1 scores higher than most baselines.",
"We conduct ablation study to verify the effectiveness of components of Pyramid .",
"Likewise, we only present the results on ACE-2005 here.",
"Character Embeddings : Using character is a standard technique for NER to dynamically capture orthographic and morphological features.",
"It provides some improvements.",
"Layer Normalization : LayerNorm eliminates the bias and scale difference of the inputs of each Pyramid-Basic L = 32 L = 16 L = 8 L = 4 len(e) # entities F1 R(N) F1 R(N) F1 R(N) F1 R(N) all -79.3 73.6 79.3 74.4 78.8 73.9 77.6 69.5 1 1706 (56%) 84.0 82.3 84.3 82.5 84.0 83.0 83.4 81.4 2 635 (21%) 79.3 77.5 79.7 78.6 78.8 77.7 78.6 76.2 3 248 (8%) 74.9 75.5 75.3 76.8 75.6 77.5 72.9 73.7 4 140 (5%) 72.1 73.1 71.8 75.0 72.0 73.3 65.7 61.1 5 90 (3%) 73.6 77.5 72.3 78.9 69.3 75.5 63.6 60.3 6-8 106 (3%) 57.9 59.3 56.2 59.3 53.4 56.7 47.7 45.9 9-16 81 (3%) 42.0 36.4 43.1 39.9 42.3 39.5 40.0 36.8 1725 (1%) 33.8 26.1 23.0 18.8 27.2 21.7 23.6 18.8 Inference Speed ( Basic / Full , words per second) on GTX 1080 Ti batch size = 1 708 / 445 842 / 545 1116 / 781 1494 / 1153 batch size = 4 1526 / 955 2085 / 1361 2987 / 2151 4230 / 3280 batch size = 16 2949 / 2084 4372 / 3282 6660 / 5169 8999 / 7852 Table 4: Details of tuning L on ACE-2005.",
"Sharing LSTM dec : The jobs of decoding layers are similar: inheriting information from previous layers and recognizing entity representations of length one.",
"Therefore, sharing weights maximizes the use of training data and prevents overfitting.",
"Method of Reducing Length : We use CNN to reduce the sequence length at each decoding layer.",
"As shown in Table 5, compared with average pooling and maximum pooling, CNN can effectively retain the original semantic information and capture the boundary information.",
"Pyramid Layers : Apart from the results shown in Table 5, we emphasize that the performance gain of Pyramid owes a lot to the pyramid layers (both normal and inverse ones).",
"As shown in Table 4, reducing L to 4 leads to a drop of F1 (-1.7).",
"It is clear that when L = 1 , our method degrades to a flat entity recognizer, which cannot handle nested mentions any more.",
"Overlapping mentions usually occur along with the attributive clause in natural language.",
"For example, sentence The burial site of Sheikh Abbad, who died 500 years ago, is located. contains two overlapping mentions The burial site of Sheikh Abbad and Sheikh Abbad, who died 500 years ago.",
"Due to lack of datasets for overlapping NER, we create a small dataset.",
"For all sentences in NNE, we find 2599 which contain , which or , who.",
"We use the ELMo-based constituency parser 9 to find attributive clauses together with their modified noun phrases (Sheikh Abbad, who ...), and then see if a bigger noun phrase (the burial site of Sheikh Abbad) contains the noun phrase.",
"Next, while keeping the original annotations, we add these two mentions to form a new dataset where around 14% sentences have overlapping but non-nested entity mentions.",
"This dataset is split randomly into training, dev, and test sets containing 1599, 400, and 600 sentences respectively.",
"Note the additional annotations are not verified by human, meaning they might contain some errors.",
"However, it is still useful for testing the performance of our model for overlapping NER.",
"This paper presented Pyramid , a novel layered neural model for nested entity recognition.",
"Our model relies on a layer-wise bidirectional decoding process (with both normal and inverse pyramids), 9 Stern et al. 2017 with ELMo: https: //allennlp.s3.amazonaws.com/models/elmo-constituency-parser-2018.03.14.tar.gz , implemented by Gardner et al. 2018.",
"allowing each decoding layer to take into account the global information from lower and upper layers.",
"Pyramid does not suffer from layer disorientation or error propagation, and is applicable for the more general overlapping NER.",
"The proposed method obtained state-of-the-art results on four different nested NER datasets, confirming its effectiveness.",
"This work was supported by the Natural Science Foundation of China (No. 61672455), the Key Research and Development Program of Zhejiang Province of China (No. 2020C01024), the Natural Science Foundation of Zhejiang Province of China (No. LY18F020005), and the National Research Foundation, Prime Minister's Office, Singapore under its Strategic Capability Research Centres Funding Initiative."
] | [
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"abstain",
"result",
"method",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"Abstract As more and more product reviews are posted in both text and images, Multimodal Review Analysis (MRA) becomes an attractive research topic.",
"Among the existing review analysis tasks, helpfulness prediction on review text has become predominant due to its importance for e-commerce platforms and online shops, i.e. helping customers quickly acquire useful product information.",
"This paper proposes a new task M ultimodal Review Helpfulness P rediction (MRHP) aiming to analyze the review helpfulness from text and visual modalities.",
"Meanwhile, a novel Multi-perspective Coherent Reasoning method (MCR) is proposed to solve the MRHP task, which conducts joint reasoning over texts and images from both the product and the review, and aggregates the signals to predict the review helpfulness.",
"Concretely, we first propose a product-review coherent reasoning module to measure the intraand inter-modal coherence between the target product and the review.",
"In addition, we also devise an intra-review coherent reasoning module to identify the coherence between the text content and images of the review, which is a piece of strong evidence for review helpfulness prediction.",
"To evaluate the effectiveness of MCR, we present two newly collected multimodal review datasets as benchmark evaluation resources for the MRHP task.",
"Experimental results show that our MCR method can lead to a performance increase of up to 8.5% as compared to the best performing text-only model.",
"The source code and datasets can be obtained from https:// github.com/jhliu17/MCR .",
"Product reviews are essential information sources for consumers to acquire useful information and",
"make purchase decisions.",
"Many e-commerce sites such as Amazon.com offer reviewing functions that encourage consumers to share their opinions and experiences.",
"However, the user-generated reviews vary a lot in their qualities, and we are continuously bombarded with ever-growing, noise information.",
"Therefore, it is critical to examine the quality of reviews and present consumers with useful reviews.",
"Motivated by the demand of gleaning insights from such valuable data, review helpfulness prediction has gained increasing interest from both academia and industry communities.",
"Earlier review helpfulness prediction methods rely on a wide range of handcrafted features, such as semantic features (Yang et al., 2015), lexical features (Martin and Pu, 2014), and argument based features (Liu et al., 2017), to train a classifier.",
"The success of these methods generally relies heavily on feature engineering which is labor-intensive and highlights the weakness of conventional machine learning methods.",
"In recent years, deep neural networks such as CNN (Chen et al., 2018, 2019) and LSTM (Fan et al., 2019) have become dominant in the literature due to their powerful performance for helpfulness prediction by learning text representation automatically.",
"Note that these existing works on review helpfulness prediction mainly focus on the pure textual data.",
"As multimodal data become increasingly popular in online reviews, Multimodal Review Analysis (MRA) has become a valuable research direction.",
"In this paper, we propose the Multimodal Review Helpfulness Prediction (MRHP) task which aims at exploring multimodal clues that often convey comprehensive information for review helpfulness prediction.",
"In particular, for the multimodal reviews, the helpfulness of reviews is not only determined by the textual content but rather the combined expression (e.g., coherence) of multimodality data (e.g., texts and images).",
"Taking the reviews in Table 1 as an example, we cannot identify the helpfulness score of Review 3 solely from the text content until reading the attached images that are totally irrelevant to the product Teflon Pans .",
"The reviews that have incoherent text content and images tend to be unhelpful, even be malicious reviews.",
"In contrast, a helpful review (e.g., Review 2 ) should contain not only concise and informative textual content but also coherent text content and images.",
"In this paper, we explore both text and images in product reviews to improve the performance of review helpfulness prediction.",
"We design a novel Multi-perspective Coherent Reasoning method (de-noted as MCR) to tackle the MRHP task.",
"Concretely, we propose a product-review coherent reasoning module to effectively capture the intraand inter-modal coherence between the target product and the review.",
"In addition, we also devise an intra-review coherent reasoning module to capture the coherence between the text content and images of the review, which is a piece of strong evidence for review helpfulness prediction.",
"Finally, we formulate the helpfulness prediction as a ranking problem and employ a pairwise ranking objective to optimize the whole model.",
"We summarize our main contributions as follows.",
"(1) To the best of our knowledge, this is the first attempt to explore both text and images in reviews for helpfulness prediction, which is defined as the MRHP task.",
"(2) We propose a multi-perspective coherent reasoning method for the MRHP task to conduct joint reasoning over texts and images from both the product and the review, and aggregate the signals to predict the helpfulness of multimodal reviews.",
"(3) We present two newly-collected multimodal review datasets for helpfulness prediction of multimodal reviews.",
"To facilitate research in this area, we will release the datasets and source code proposed in this paper, which would push forward the research in this field.",
"(4) Extensive experiments on two collected datasets demonstrate that our MCR method significantly outperforms other methods.",
"Most conventional approaches on review helpfulness prediction focus solely on the text of reviews, which can be generally divided into two categories based on the way of extracting predictive features: machine learning based methods with hand-crafted features (Kim et al., 2006; Krishnamoorthy, 2015)",
"and deep learning based methods (Chen et al., 2019; Fan et al., 2018; Chen et al., 2018).",
"The machine learning based methods employ domain-specific knowledge to extract a variety of hand-crafted features, such as structure features (Kim et al., 2006), lexical features (Krishnamoorthy, 2015), emotional features (Martin and Pu, 2014), and argument features (Liu et al., 2017), from the textural reviews, which are then fed into conventional classifiers such as SVM (Kim et al., 2006) for helpfulness prediction.",
"These methods rely heavily on feature engineering, which is time-consuming and labor intensive.",
"Motivated by the remarkable progress of deep neural networks, several recent studies attempt to automatically learn deep features from textual reviews with deep neural networks.",
"Chen et al. (2019) employs a CNN model to capture the multi-granularity (character-level, word-level, and topic-level) features for helpfulness prediction.",
"Fan et al. (2018) proposes a multi-task neural learning model to identify helpful reviews, in which the primary task is helpfulness prediction and the auxiliary task is star rating prediction.",
"Subsequently, several works have been proposed to explore not only the reviews but also the users and target products for helpfulness prediction of reviews.",
"Fan et al. (2019) argued that the helpfulness of a review should be aware of the meta-data (e.g., title, brand, category, description) of the target product besides the textual content of the review itself.",
"To this end, a deep neural architecture was proposed to capture the intrinsic relationship between the meta-data of a product and its numerous reviews.",
"Qu et al. (2020) proposed to leverage the reviews, the users, and items together for helpfulness prediction of reviews and devised a category-aware graph neural networks with one shared and many item-specific graph convolutions to learn the common features and each item's specific criterion for helpfulness prediction.",
"Different from the above methods, we take full advantage of the text content and images of reviews by proposing a novel hierarchical coherent reasoning method to learn the coherence between text content and images in a review and the coherence between the target product and the review.",
"The overall architecture of our MCR method is illustrated in Figure 1. Our multi-perspective coherent reasoning consists of two perspectives of coherence:",
"(i) the intraand inter-modal coherence between a review and the target product and",
"(ii) the intra-review coherence between the text content and images in the review.",
"In the following sections, we will provide the problem definition of review helpfulness prediction and introduce each component of our MCR model in detail.",
"As mentioned by Diaz and Ng (2018), we formulate the multimodal review helpfulness prediction problem as a ranking task.",
"Specifically, given a product item P i consisting of product related information p i and an associated review set R i = { r i, 1 , , r i,N } , where N is the number of reviews for p i .",
"Each review has a scalar label s i,j { 0 , , S } indicating the helpfulness score of the review r i,j .",
"The ground-truth ranking of R i is the descending sort order determined by the helpfulness scores.",
"The goal of review helpfulness prediction is to predict helpfulness scores for R i which can rank the set of reviews R i into the ground-truth result.",
"The predicted helpfulness score s i,j for the review r i,j is defined as follows: s i,j = f ( p i , r i,j ) , (1) where f is the helpfulness prediction function taking a product-review pair (cid:104) p i , r i,j (cid:105) as input.",
"In multimodal review helpfulness prediction task, the product p i consists of associated description T p and pictures I p , while review r i,j consists of user-posted text T r and images I r .",
"Given a text ( T p or T r ) consisting of l T text tokens { w 1 , , w l T } and an image set ( I p or I r ), we adopt a convolutional neural network to learn the contextualized text representation.",
"Meanwhile, we use a self-attention mechanism on image region features to obtain the image representations.",
"To prevent conceptual confusion, we use the subscripts p and r to indicate variables that are related to the product and the review, respectively.",
"Text Representation Inspired by the great success of convolutional neural network (CNN) in natural language processing (Kim, 2014; Dai et al., 2018), we also apply CNN to learn the text representation.",
"First, we convert each token w i in a review into an embedding vector w i R d via an embedding layer.",
"Then, we pass the learned word embeddings to a one-dimensional CNN so as to extract multi-gram representations.",
"Specifically, the k -gram CNN transforms the token embedding vectors w i into k -gram representations H k : H k = CNN k ( { w 1 , , w l T } ) , (2) where k { 1 , , k max } represents the kernel size.",
"k max represents the maximum kernel size.",
"H k R l T d T is the k -gram representation.",
"All the k -gram representations are stacked to form the final text representation, denoted as H = [ H 1 , , H k max ] .",
"Here, we use H p and H r to represent the representations of text content of the product and the review, respectively.",
"Image Representation We use pre-trained Faster R-CNN to extract the region of interest (RoI) pooling features (Anderson et al., 2018) for the Inter-modal Coherence Intra-modal Coherence Image Encoder Image Encoder Product Image Review Image Intra-modal Coherence Inter-modal Coherence Text Encoder Teflon Pans 1 Set of 3 pcs 1042-Non-stick Set of 3 Product Text Text Encoder Overall, it is quite satisfactory.",
"review and product images, obtaining the fine-grained object-aware representations.",
"All the RoI features v i extracted from image sets I p and I r are then encoded by a self-attention module (Vaswani et al., 2017), resulting in a d I -dimensional semantic space with non-local understanding: V = SelfAttn ( { v 1 , , v l I } ) , (3) where V R l I d I represents the visual semantic representation and l I is the number of extracted RoI features.",
"Here, we use V p and V r to represent the product and review image features, respectively.",
"The helpfulness of a review should be fully aware of the product besides the review itself.",
"In this paper, we propose a product-review coherent reasoning module to effectively capture the intraand inter-modal coherence between the target product and the review.",
"Intra-modal Coherence We propose the intra-modal coherent reasoning to measure two kinds of intra-modal coherence:",
"(i) the semantic alignments between the product text and the review text, and",
"(ii) the semantic alignments between product images and review images.",
"The cosine similarity is utilized to derive the intra-modal coherence matrix.",
"For text representations H ip and H jr , we compute the corresponding coherence matrix as follow: SH i,j = cosine ( H ip , H jr ) , i, j { 1 , . . . , k max } , (4) where SH i,j has the shape of R l Tp l Tr , l T p and l T r indicate the text length of the product and the review, respectively.",
"All the coherence matrices are stacked to form the whole coherence features SH .",
"Without loss of generality, we also compute the image coherence matrix between V p and V r via cosine similarity.",
"In this way, we obtain the image coherence matrix SV with the shape of R l Ip l Ir , where l I p and l I r indicate the number of RoI features of the product and review images, respectively.",
"Subsequently, the text and image coherence matrix (i.e., SH and SV ) are passed to a CNN, and the topK values in each feature map are selected as the pooling features: o intraM = TopK ( CNN ([ SH , SV ])) , (5) where o intraM RK M is the intra-modal coherent reasoning features.",
"M is the number of filters used in the CNN module.",
"Inter-modal Coherence The intra-modal coherence ignores the cross-modal relationship between the product and the review.",
"In order to mitigate this problem, we propose the inter-modal coherent reasoning to capture two kinds of inter-modal coherence:",
"(i) the coherence between the review text and the product images, and",
"(ii) the coherence between the review images and the product text.",
"Since the text representation H and the image representation V lie in two different semantic spaces, we first project them into a d c -dimensional common latent space by: FH = Tanh ( W 1 H + b 1 ) , (6) FV = Tanh ( W 2 V + b 2 ) , (7) where FH R l T d c and FV R l I d c are text and image representations in the common latent space, respectively.",
"Taking the coherence of review image and product text as an example, our inter-modal coherent reasoning aligns the features in review images FV r based on the product text FH p .",
"Specifically, we de-fine the review images as the query Q r = WQFV r and the product text as the key K p = WKFH p , where WQ , WK R d c d c are learnable parameter matrices.",
"Hence, the inter-modal relationship IV r can be formulated as follows: M r = softmax ( Q r K Tp ) , (8) IV r = FV r + M r FH p , (9) where M r R l I l T is the query attended mask.",
"A mean-pooling operation is then conducted to get an aggregated vector of the inter-modal coherence features between the review images and the product text: IV r : IV r = Mean ( IV r ) R d c .",
"Following Equations 8-10, the same procedure is employed to learn the coherence features IH r between the review text and the product images.",
"Finally, we concatenate IV r and IH r to form the final inter-modal coherence features o interM : o interM = [ IV r , IH r ] , (11) where [ ] denotes the concatenate operation.",
"Generally, consumers usually express their opinions in textual reviews and post images as a kind of evidence to support their opinions.",
"To capture the coherence between the text content and images of the review, we should grasp sufficient relational and logical information between them.",
"To this end, we devise an intra-review coherent reasoning module to learn the coherence between the text content and images of the review, which performs message propagation among semantic nodes of a review evidence graph and then obtains an intra-review coherence score of the multimodal review.",
"Specifically, we construct a review evidence graph G r by taking each feature (each row) of FH r and FV r as a semantic node, and connects all node pairs with edges, resulting in a fully-connected review evidence graph with l T + l I nodes.",
"In a similar manner, we can construct a product evidence graph G p with l T + l I nodes from FH p and FV p .",
"The hidden states of nodes at layer t are denoted as G tr = { g tr, 1 , . . . , g tr,n } and G tp = { g tp, 1 , . . . , g tp,n } for the review and product evidence graphs respectively, where n = l T + l I and t denotes the number of hops for graph reasoning.",
"We compute the edge weights of semantic node pairs with an adjacency matrix that can be automatically learned through training.",
"Taking the review evidence graph G r as an example, we initialize the i -th semantic node at the first layer with g 0 i = [ FH r,i , FV r,i ] , i { 1 , , l T + l I } .",
"Then, the adjacency matrix A t representing edge weights at layer t is computed as follows: A ti,j = MLP t 1 ([ g t 1 r,i , g t 1 r,j ]) , (12) A t = softmax ( A t ) , (13) where MLP t 1 is an MLP at layer t 1 .",
"A ti,j represents semantic coefficients between a node i with its neighbor j N i .",
"Softmax operation is used to normalize semantic coefficients A t .",
"Then, we can obtain the reasoning features at layer t by: g tr,i = (cid:88) j N i A ti,j g t 1 r,j .",
"By stacking L graph reasoning layers, the semantic nodes can perform coherence relation reasoning by passing messages with each other.",
"We use g Lr,n and g Lp,n to denote the final reasoning hidden states of the review and product evidence graphs.",
"Subsequently, to obtain the product-related intra-review coherent reasoning features, we adopt an attention mechanism to filter the features that are irrelevant to the product: p = Mean ( h Lp, ) , (15) i = MLP ([ p , g L r,i ]) , (16) where a mean pooling operation is employed to derive the product coherent graph embedding p .",
"MLP is an attention layer to calculate the product-related features and output the attention weight i for the i -th node.",
"After normalizing the attention weight with a softmax function, we use a linear combination to aggregate the intra-review coherent reasoning results o IRC : = softmax ( ) , (17) o IRC = (cid:88) i i g Lr,i .",
"We concatenate the intra-modal product-review coherence features o intraM , the inter-modal product-review coherence features o interM , and the intra-review coherence features o IRC to form the final multi-perspective coherence features o final = o intraM , o interM , o IRC ] .",
"The final helpfulness prediction layer feeds o final into a linear layer to calculate a ranking score: f ( p i , r i,j ) = W r o final + b r , (19) where W r and b r denote the projection parameter and bias term.",
"p i represents information of the i -th product and r i,j is the j -th review for p i .",
"The standard pairwise ranking loss is adopted to train our model: L = (cid:88) i max (0 , f ( p i , r + )+ f ( p i , r )) (20) where r + , r R i are an arbitrary pair of reviews for p i where r + has a higher helpfulness score than r .",
"is a scaling factor that magnifies the difference between the score and the margin.",
"Since our MCR model is fully differentiable, it can be trained by gradient descent in an end-to-end manner.",
"To the best of our knowledge, there is no benchmark dataset for the Multimodal Review Helpfulness Prediction task (MRHP).",
"Hence, we construct two benchmark datasets (Lazada-MRHP and Amazon-MRHP) from popular e-commerce platforms to evaluate our method.",
"Lazada-MRHP in Indonesian Lazada.com is a popular platform in Southeast Asia, which is in the Indonesian language.",
"We construct the Lazada-MRHP dataset by crawling the product information (title, description, and images) and user-generated reviews (text content and images) from Lazada.",
"To make sure that the user feedback of helpfulness voting is reliable, we strictly extract the reviews which were published spanning from 2018 to 2019.",
"We focus on three product categories, including Clothing, Shoes & Jewelry (CS&J), Electronics (Elec.), and Home & Kitchen (H&K).",
"Amazon-MRHP in English The Amazon review dataset (Ni et al., 2019) was collected from Amazon.com, containing meta-data of products Dataset Category Instance Number (#P/#R) Train+Dev Test Lazada CS&J 8,245/130,232 2,062/32,274 Elec.",
"and customer reviews from 1996 to 2018.",
"We extract the product information and associated reviews published from 2016 to 2018.",
"Since there are no review images in the original Amazon dataset, we crawl the images for each product and review from the Amazon.com platform.",
"Similar to Lazada-MRHP, the products and reviews also belong to three categories: Clothing, Shoes & Jewelry (CS&J), Electronics (Elec.), and Home & Kitchen (H&K).",
"Learning from user-feedback in review helpfulness prediction has been revealed effective in (Fan et al., 2019; Chen et al., 2019).",
"Specifically, the helpfulness voting received by each review can be treated as the pseudo label indicating the helpfulness level of the review.",
"Following the same data processing as in (Fan et al., 2019), we filter the reviews that received 0 votes in that they are under an unknown user feedback state.",
"Based on the votes received by a review, we leverage a logarithmic interval to categorize reviews into five helpfulness levels.",
"Specifically, we map the number of votes into five intervals (i.e., [1,2), [2, 4), [4, 8), [8, 16), [16, )) based on an exponential with base 2. The five intervals correspond to five helpfulness scores s i,j { 0 , 1 , 2 , 3 , 4 } , where the higher the score, the more helpful the review.",
"Finally, the statistics of the two datasets are shown by Table 2. For both Lazada-MRHP and Amazon-MRHP, we utilize 20% of the training set per category as the validation data.",
"For a fair comparison, we adopt the same data processing for all baselines.",
"We use the ICU tok-enizer 1 and NLTK toolkit (Loper and Bird, 2002) to separate text data in Lazada-MRHP and Amazon-MRHP, respectively.",
"Each image is extracted as RoI features with 2048 dimensions.",
"For the net-1 http://site.icu-project.org Type Method Clothing Electronics Home MAP N@3 N@5 MAP N@3 N@5 MAP N@3 N@5 Text-only BiMPM 60.0 52.4 57.7 74.4 67.3 72.2 70.6 64.7 69.1 EG-CNN 60.4 51.7 57.5 73.5 66.3 70.8 70.7 63.4 68.5 Conv-KNRM 62.1 54.3 59.9 74.1 67.1 71.9 71.4 65.7 70.5 PRHNet 62.1 54.9 59.9 74.3 67.0 72.2 71.6 65.2 70.0 Multi-modal SSE-Cross 66.1 59.7 64.8 76.0 68.9 73.8 72.2 66.0 71.0 D&R Net 66.5 60.7 65.3 76.1 69.2 74.0 72.4 66.3 71.4 MCR (Ours) 69.7 63.8 68.3 77.4 71.3 75.9 74.0 67.8 72.5 Table 3: Helpfulness review prediction results on the Lazada-MRHP dataset.",
"work configurations, we initialize the word embedding layers with the pre-trained 300D GloVE word embeddings 2 for Amazon-MRHP and the fastText multilingual word vectors 3 for Lazada-MRHP.",
"The text n -gram kernels are set as 1, 3, and 5 with 128 hidden dimensions.",
"For the image representations, we set the encoded size of feature d l I as 128, and the size of common latent space d c is set to 128.",
"We stack two graph reasoning layers (i.e., L = 2 ) where the hidden dimension of each layer is set to 128.",
"We adopt the Adam optimizer (Kingma and Ba, 2014) to train our model, and the batch size is set to 32.",
"The margin hyperparameter is set to 1. 4.3 Compared Methods We compare MCR with several state-of-the-art review helpfulness methods.",
"First, we compare MCR with four strong methods that rely only on the text content of reviews, including the Bilateral Multi-Perspective Matching (BiMPM) model (Wang et al., 2017), Embedding-gated CNN (EG-CNN) (Chen et al., 2018), Convolutional Kernel-based Neural Ranking Model (Conv-KNRM) (Dai et al., 2018), the Product-aware Helpfulness Prediction Network (PRHNet) (Fan et al., 2019).",
"We are the first to leverage images in the re-2 http://nlp.stanford.edu/data/glove.6B.zip 3 https://fasttext.cc/docs/en/crawl-vectors.html view for helpfulness prediction of multimodal reviews, thereby we compare our MCR model with two strong multimodal reasoning techniques: SSE-Cross (Abavisani et al., 2020) that leverages stochastic shared embedding to fuse different modality representations and D&R Net (Xu et al., 2020) that adopts a decomposition and relation network to model both cross-modality contrast and semantic association.",
"In this paper, we propose a pairwise ranking loss function for review helpfulness prediction, which fully benefits from the sampling of informative negative examples.",
"Since the output of MCR is a list of reviews ranked by their helpfulness scores, we adopt two authoritative ranking-based metrics to evaluate the model performance: Mean Average Precision (MAP) and Normalized Discounted Cumulative Gain (NDCG@N) (Jarvelin and Kekalainen, 2017).",
"Here, the value of N is set to 3 and 5 in the experiments for NDCG@N.",
"MAP is a widely-used measure method evaluating the general ranking performance on the whole candidate review set, while NDCG@N merely takes into account the top N reviews in the scenario that the customers only read a limited number of reviews.",
"Since we adopt the pairwise ranking loss for review helpfulness prediction, we treat the product text as the query, and the associated reviews are viewed as candidates for ranking.",
"Table 3 and Table 4 report the results of MCR and baselines on Lazada-MRHP and Amazon-MRHP, respectively.",
"From the results, we can make the following observations.",
"First, EG-CNN performs worse than other text-only baselines, because EG-CNN only considers the hidden features from the review text, while other text-only methods additionally utilize the product information as a helpfulness signal.",
"Second, the multimodal baselines (SSE-Cross and D&R Net) perform significantly better than text-only baselines.",
"This verifies that multimodal information of reviews can help the models to discover helpful reviews.",
"Third, MCR performs even better than strong multimodal competitors.",
"For example, on Lazada-MRHP, MAP and NDCG@3 increase by 2.9% and 3.5% respectively over the best baseline method (i.e., D&R Net).",
"We can observe similar trends on Amzaon-MRHP.",
"The advantage of MCR comes from its capability of capturing the product-review and intra-review coherence.",
"To analyze the effectiveness of different components of MCR, we conduct detailed ablation studies in terms of removing intra-review coherence (de-noted as w/o intra-review), removing intra-modal coherence between product and review images (de-noted as w/o intra-modal-I), removing intra-modal coherence between product and review texts (de-noted as w/o intra-modal-II), removing inter-modal coherence between review text and product images (denoted as w/o inter-modal-I), and removing inter-modal coherence between review images and product text (denoted as w/o inter-modal-II).",
"The ablation test results on the CS&J category of Lazada and Amazon datasets are summarized in Table 5.",
"We can observe that the intra-review coherent reasoning has the largest impact on the performance of MCR.",
"This suggests that the images within a review are informative evidence for review helpfulness prediction.",
"The improvements of the intra-modal and inter-modal coherent reasoning in the product-review coherent reasoning module are also significant.",
"However, intra-modal-I and intra-modal-II have a smaller impact on MCR than the Dataset Model Variant MAP N@3 N@5 Lazada MCR (Ours) 69.7 63.8 68.3 -w/o intra-review 68.4 62.0 66.9 -w/o intra-modal-I 69.1 63.0 67.5 -w/o intra-modal-II 69.2 63.2 67.7 -w/o inter-modal-I 68.9 62.7 67.3 -w/o inter-modal-II 68.9 62.5 67.2 Amazon MCR (Ours) 67.0 58.1 61.1 -w/o intra-review 65.9 57.0 60.1 -w/o intra-modal-I 66.6 57.7 60.7 -w/o intra-modal-II 66.8 57.8 60.7 -w/o inter-modal-I 66.5 57.5 60.5 -w/o inter-modal-II 66.4 57.5 60.4 Table 5: The ablation study on Clothing, Shoes& Jewelry category of Lazada-MRHP and Amazon-MRHP.",
"other two variants.",
"This may be because most product images have been always beautified, and there are significant differences between the product images and the images posted by the consumers.",
"It is no surprise that combining all components achieves the best performance on both datasets.",
"To gain more insight into the multimodal review helpfulness prediction task, we use an exemplary case that is selected from the test set of Home & Kitchen category of Amazon-MRHP to empirically investigate the effectiveness of our model.",
"Table 6 shows a product and two associated reviews with ground-truth helpfulness scores voted by consumers.",
"These two reviews are ranked correctly by our MCR method while being wrongly ranked by strong baselines (e.g., Conv-KNRM and PRHNet).",
"The text content of both reviews contains negative emotion words (e.g., disappointed and sad) and expresses similar information the product size does not meet my expectation .",
"It is hard for text-only methods to discriminate the helpfulness of these two reviews via solely considering the text content of reviews.",
"After analyzing the images within the reviews, we can reveal that the Review 1 is helpful since it provides two appropriate bed images with a brought comforter as evidence that can well support his/her claim in the text content.",
"However, Review 2 provides an inappropriate image with the product package, which cannot well support the claim of product size.",
"This verifies that it is essential to capture the complex semantic relationship between the images and text content within a review for helpfulness prediction.",
"Multimodal review analysis (MRA) is extremely important for helping businesses and consumers quickly acquire valuable information from user-generated reviews.",
"This paper is the first attempt to explore the multimodal review helpfulness prediction (MRHP) task, which aims at analyzing the review helpfulness from text and images.",
"We propose a multi-perspective coherent reasoning (MCR) method to solve MRHP task, which fully explores the product-review coherence and intra-review coherence from both textual and visual modalities.",
"In addition, we construct two multimodal review datasets to evaluate the effectiveness of MCR, which may push forward the research in this field.",
"Extensive experimental results demonstrate that MCR significantly outperforms baselines by comprehensively exploiting the images associated with the reviews.",
"This work was partially supported by National Natural Science Foundation of China (No. 61906185), Natural Science Foundation of Guangdong Province of China (No. 2019A1515011705), Youth Innovation Promotion Association of CAS China (No. 2020357), Shenzhen Science and Technology Innovation Program (Grant No. KQTD20190929172835662), Shenzhen Basic Research Foundation (No. JCYJ20200109113441941)."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"method",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"other",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"other"
] |
[
"Although neural machine translation (NMT) has achieved significant progress in recent years, most previous NMT models only depend on the source text to generate translation.",
"Inspired by the success of template-based and syntax-based approaches in other fields, we propose to use extracted templates from tree structures as soft target templates to guide the translation procedure.",
"In order to learn the syntactic structure of the target sentences, we adopt the constituency-based parse tree to generate candidate templates.",
"We incorporate the template information into the encoder-decoder framework to jointly utilize the templates and source text.",
"Experiments show that our model significantly outperforms the baseline models on four benchmarks and demonstrate the effectiveness of soft target templates.",
"Recently, neural machine translation (NMT) (Wu et al., 2016; Gehring et al., 2017; Vaswani et al., 2017; Chen et al., 2018) has achieved significant progress.",
"Some advanced models (Chatterjee et al., 2016; Niehues et al., 2016; Junczys-Dowmunt and Grundkiewicz, 2017; Geng et al., 2018; Zhou et al., 2019a) predict the ultimate translation by multi-pass generation conditioned on the previous text such as CMLMs (Ghazvininejad et al., 2019), ABD-NMT (Zhang et al., 2018), SynST (Akoury et al., 2019), and Deliberation Network (Xia et al., 2017).",
"Inspired by these works and the successful application of templates for other intriguing tasks, including semantic parsing (Dong and Lapata, 2018), summarization (Cao et al., 2018; Wang et al., 2019a), question answering (Duan et al., 2017; Contribution during internship at Microsoft Research Asia. Corresponding author. I like playing basketball Source S like VP Template Target Figure 1: An example of template guided translation results. S denotes subject and VP denotes verb phrase. Pandey et al., 2018), and other text generation tasks (Wiseman et al., 2018; Guu et al., 2018), we assume the candidate templates of the target sentences can guide the sentence translation process.",
"We denote these templates extracted from the constituency-based parse tree as soft templates, which consist of tags and target words.",
"The templates are soft because no explicit paradigms are inaugurated to build new translation from them, and the target tokens could be modified.",
"In order to effectively use the templates, we introduce soft template-based neural machine translation (ST-NMT), which can use source text and soft templates to predict the final translation.",
"Our approach can be split into two phases.",
"In the first phase, a standard Transformer model is trained to predict soft target templates by using source text and templates extracted from the constituency-based parse tree.",
"Secondly, we use two encoders, including a soft target template encoder and a source language encoder to encode source text and templates and generate the final translation.",
"As shown in Figure 1, given the source text and the target template S like to VP , the final translation I like to play basketball is generated by two encoders.",
"In this work, the templates play a part in guiding, and some target tokens in Self Attention Add & Norm Add & Norm Cross Attention Add & Norm Feed Forward Self Attention Add & Norm Add & Norm Feed Forward Self Attention Add & Norm Add & Norm Feed Forward PositionEncoding PositionEncoding PositionEncoding Linear & Softmax Template Encoder Source Encoder Source Embedding Template Embedding Target Embedding x 1 x 4 x 2 x 3 x 5 1 4 2 t 3 5 1 2 3 5 4 Target Decoder N N N Figure 2: Overview of our ST-NMT.",
"In order to prove the effectiveness of our approach, we conduct main experiments on the popular benchmarks, including IWSLT14 German-English translation task, WMT14 English-German translation task, LDC Chinese-English translation task, and ASPEC Japanese-Chinese translation task.",
"Experiments show that our approach achieves significant improvement compared to the baselines, which demonstrates the soft target templates can provide a positive influence for guiding translation procedure effectively.",
"Our approach can be used for diverse scale data sets, different styles, and multiple language pairs.",
"Our model first reads the source language sequence X = ( x 1 , x 2 , x 3 , . . . , x n ) in the conventional way by a source language Transformer encoder and generates the template sequence T = ( t 1 , t 2 , t 3 , . . . , t m ) by a template Transformer decoder.",
"As shown in Figure 2, our model uses a source language Transformer encoder and a template Transformer encoder, which encodes the source language sequence X and the template sequence T separately.",
"We deploy the target language decoder to generate the final translation.",
"In this section, we present the details of the proposed template-based approach.",
"Our method mainly includes two phases: (1) The training data is constructed by the constituency-based parse tree.",
"Then, we adopt a standard Transformer to convert the source text to the soft target template for the next generation.",
"(2) Based on the source text and the predicted soft target template, we utilize two encoders to encode two sequences into hidden states separately and a target language decoder to generate the ultimate translation.",
"In this procedure, we model the P X T ( T | X ) to predict soft target templates on top of the constructed training data D X,T .",
"To construct D X,T , we use a constituency-based parser to parse the target sequence and get a tree structure.",
"Then, we prune nodes which are deeper than the specific depth and recover the left leaf nodes to the ordered template sequence.",
"Through these operations, we gain the parallel training data D X,T and train a standard Transformer model P X T ( T | X ) to predict the soft target template.",
"The constituency-based parse tree could reveal the structure information of the whole sentence which utilizes the constituency grammar to distinguish terminal and non-terminal nodes.",
"More specifically, the interior nodes are labeled by nonterminal categories which belong to the set of nonterminal tokens S , while the leaf nodes are labeled Pruned Figure 3: The constituency-based parse tree of the example sentence.",
"by terminal categories V .",
"S = { S , VP , NP , . . . , ASBR } and V is the vocabulary set of the target language.",
"For example, the sentence There are some people running could be expressed as Figure 3.",
"In this case, the non-terminal tokens consist of S 0 = { S , NP , VP , EX , VBP , NP , DT , NNS , VBG } and the terminal tokens are composed of V 0 = { There, are some, people, running } .",
"Our template T = { t 1 , t 2 , t 3 , t 4 } is the ordered sequence which is composed of non-terminal tokens and terminal tokens.",
"In this case, t 1 =There, t 2 =are, t 3 = VP and t 4 = NP .",
"Our template extraction aims to extract the sub-tree of the specific depth and use these nonterminal and terminal tokens locating at the leaf node of sub-tree.",
"In order to predict the soft target templates, we train a standard Transformer model given the training data of the source text and extracted templates.",
"The Transformer model reads the source text and predicts the soft target templates using beam search.",
"Then, we select the top-K results of the beam search as templates.",
"The depth of the tree is a trade-off.",
"In Figure 3, One special case is that when the depth equals 1, the template only has one symbol S .",
"The template S cannot provide any useful information.",
"Another special case is that when depth is greater than 6, the template There are some people running only has terminal tokens.",
"The template only contains target words, which cannot provide any additional information.",
"When the depth equals 4, the template is There are NP VP .",
"The template contains sentence syntactic and structural information, which is suitable for our method.",
"With the Transformer model P X T ( T | X ) , we need to construct the pseudo training data D X,T,Y instead of directly using extracted templates by soft template prediction.",
"Given the source text X , we use P X T ( T | X ) to predict the top-1 soft target template T with beam search.",
"Therefore, we get the triple training data D X,T,Y = { X ( i ) , T ( i ) , Y ( i ) } Ni =1 which is prepared for the next phase.",
"The triple training data D X,T,Y is used to model the probability P ( X,T ) Y from the two sequences to the ultimate translation.",
"Our approach could generate the target sentence Y , given the source sequence X and template T .",
"Formulation In formula, we could model the whole procedure on top of the P X T ( T | X ) and P ( X,T ) Y ( Y | X, T ) .",
"where X T and ( X,T ) Y are the parameters for the first and the second phase.",
"The source language Transformer encoder and the soft template Transformer encoder maps the input sequence X and the template T composed of target language words and tags to the hidden states.",
"Then, a Transformer decoder interacting with two encoders generates the final translation Y , described by the Equation",
"1. Encoder In the second phase, our template Transformer encoder and the source language Transformer encoder are stacked by blocks which contain self-attention layers with residual connections, layer normalization and fully connected feedforward network (FFN).",
"Therefore, the hidden states of the source language Transformer encoder and the template Transformer encoder are calculated by: h l = TransformerBlock ( h l 1 ) (2) where h l = h Xl for the source language Transformer encoder and h l = h Tl for the template Transformer encoder.",
"N is the number of layers and l [1 , N ] .",
"Decoder Based on the hidden states h Xl and h Tl , the target language Transformer decoder use the encoder-decoder multi-head attention to jointly use the source language and template information to generate the ultimate translation Y .",
"Besides, the target sequence decoder uses multi-head attention to obtain the representations of target language decoder with the parameters ( WQX , WKX , WVX ) and ( WQT , WKT , WVT ) for different encoders.",
"In each attention head, the input sequence X = ( x 1 , . . . , x m ) and the template T = ( t 1 , . . . , t n ) can be mapped into ZX = ( z X 1 , z X 2 , . . . , z Xm ) and ZT = ( z T 1 , z T 2 , . . . , z Tn ) using the source language Transformer encoder and the template Transformer encoder.",
"On top of the ZX and ZT , the decoder separately calculate the multi-head attention with source sentence context X = ( x 1 , . . . , x m ) and target template sentence T = ( t 1 , . . . , t n ) , then our model obtain two hidden states Z X,Y and Z T,Y by attention with source context and template context.",
"Here, We incorporate the Z X,Y containing source language information and Z X,Y including template information in a reasonable way: Z = Z X,Y + (1 ) Z T,Y (3) where is the parameter to control the degree of incorporation between source text and template.",
"In order to effectively incorporate source and template information, we calculate the parameter as below: = ( WYZ X,Y + UTZ X,T ) (4) where ZY is the decoder hidden state and WY and UT are parameter matrices.",
"Similar to the conventional NMT, in order to make the model predict the target sequence, we use maximum likelihood estimation (MLE) loss function to update the model parameter by maximizing the log likelihood of translation over training set D .",
"When we train the P X Y without the template Transformer encoder, we only need to optimize the following loss function: L X Y ( D ) = (cid:88) X,Y D log P X Y ( Y | X ) (5) where X Y are the parameters of the source language Transformer encoder and the target language Transformer decoder.",
"where ( X,T ) Y are the parameters of the source language Transformer encoder, template language Transformer encoder and target language Transformer decoder.",
"To balance the two objectives, our model is trained on L X Y ( D ) objective for the % iterations, and trained on L ( X,T ) Y ( D ) objective for the (1 )% interations.",
"Therefore, this procedure is equivalent to the following formula: L ( D ) = L X Y ( D ) + (1 ) L ( X,T ) Y ( D ) (7) where is a scaling factor accounting for the difference in magnitude between L X Y ( D ) and L ( X,T ) Y ( D ) .",
"In practice, we find optimizing these two objectives can make training procedure easier and get a higher BLEU score since there exist a few low-quality templates to influence the translation quality.",
"Through optimizing two objectives simultaneously, we can reduce the effect of some low-quality templates and improve the stability of our model.",
"We conducted experiments on four benchmarks, including LDC Chinese-English, WMT14 English-German, IWSLT14 German-English, and ASPEC Japanese-Chinese translation tasks.",
"By conducting experiments on these four benchmarks, these settings prove that our approach is suitable for diverse situations: (1) These four benchmarks provide a wide coverage of both scale and genres.",
"They vary from small scale to large scale (2) We use the different domains, which include news, science, and talk domain.",
"(3) We also conduct the experiments on different language pairs, including the German-English translation task, the English-German translation task, the Chinese-English translation task, and the Japanese-Chinese translation task.",
"In order to verify the effectiveness of our method, we conduct experiments on four benchmarks.",
"WMT14 and LDC datasets are from the news do-main.",
"IWSLT14 dataset is from TED talk.",
"ASPEC dataset is from a scientific paper excerpt corpus.",
"LDC Chinese-English We use a subset from LDC corpus 1 which has nearly 1.4M sentences originally.",
"The training set is selected from the LDC corpus that consists of 1.2M sentence pairs after dropping the low-quality sentence pairs of which the length is more than",
"2. We used the NIST 2006 dataset as the validation set for evaluating performance in the training procedure, and NIST 2003, 2005, 2008 and 2012 as test sets, which all have 4 English references for each Chinese sentence.",
"IWSLT14 German-English This dataset contains 16K training sequence pairs.",
"We randomly sample 5% of the training data as valid test.",
"Besides, we merge the multiple testsets dev2010, dev2012, tst2010, tst2011, tst2012 for testing.",
"WMT14 English-German The training data consists of 4.5M sentence pairs.",
"The validation set is devtest2014, and the test set is newstest2014.",
"ASPEC Japanese-Chinese We use 0.67M sentence pairs from ASPEC Japanese-Chinese corpus (Nakazawa et al., 2016) 2 .",
"We use the devtest as the development data, which contains 2090 sentences, and the test data contains 2107 sentences with a single reference per source sentence.",
"LDC Chinese-English The base Transformer model is used for this task, which includes 6 layers, each layer of which has the hidden dimensions of 512, feedforward dimensions of 2048 , and 8 attention heads.",
"We use Moses (Koehn et al., 2007) to tokenize English sentences and our in-house tool to tokenize Chinese sentences.",
"We use Byte Pair Encoding (BPE) (Sennrich et al., 2016) to encode 1 LDC2002E17, LDC2002E18, LDC2003E07, LDC2003E14, LDC2005E83, LDC2005T06, LDC2005T10, LDC2006E17, LDC2006E26, LDC2006E34, LDC2006E85, LDC2006E92, LDC2006T06, LDC2004T08, LDC2005T10 2 http://orchid.kuee.kyoto-u.ac.jp/ASPEC/ sentences using a shared vocabulary of 40K symbols.",
"IWSLT14 German-English We adopt the small setup of the Transformer model.",
"The model has 6 layers with the embedding size of 512, a feedforward size of 1024, and 4 attention heads.",
"In order to prevent overfitting, we use a dropout of 0.3, a l 2 weight decay of 10 4 , and a label smoothing of 0.1.",
"We use BPE to encode sentences with a shared vocabulary of 10K symbols.",
"WMT14 English-German We use the big setting of Transformer (Vaswani et al., 2017), in which both the encoder and the decoder have 6 layers, with the embedding size of 1024, feedforward size of 4096, and 16 attention heads.",
"The dropout rate is fixed as 0.3.",
"We adopt Adam (Kingma and Ba, 2015) optimizer with a learning rate 0 .",
"1 of the similar learning rate schedule as Transformer (Vaswani et al., 2017).",
"We set the batch size as 6000 and the update frequency as 16 on 8 GPUs for updating parameters (Ott et al., 2018) to imitate 128 GPUs.",
"The datasets are encoded by BPE with a shared vocabulary (Sennrich et al., 2016) of 40K symbols.",
"ASPEC Japanese-Chinese We use the base setting of Transformer the same to the Chinese-English translation task.",
"Following the similar learning rate schedule (Vaswani et al., 2017), we set the learning rate as 0.1.",
"Chinese and Japanese sentences are tokenized with our in-house tools and encoded by BPE with a shared vocabulary of 10K symbols.",
"We evaluate the performance of the translation results.",
"The evaluation metric is BLEU (Papineni et al., 2002).",
"For the Chinese-English and German-English translation tasks, we use case-insensitive tokenized BLEU scores.",
"For the English-German translation task, we use case-sensitive tokenized BLEU scores for evaluation.",
"All the experiments last for 150 epochs and use Stanford parser to generate templates (Manning et al., 2014).",
"For all translation tasks, we use the checkpoint, which has the best valid performance on the valid set.",
"For different test sets, we adapt the beam size and the length penalty to get better performance.",
"In order to avoid the difference of the tokenizer for Chinese translation result evaluation, we adopt the character-level BLEU for testing.",
"Checkpoint averaging is not used, except notification.",
"One-pass Baselines: ConvS2S (Gehring et al., 2017) is a strong CNN-based baseline.",
"We report the results referring to the paper of convolutional sequence to sequence model (ConvS2S).",
"RNMT+ (Chen et al., 2018) is a state-of-the-art RNN-based NMT model.",
"GNMT (Wu et al., 2016) is the typical encoder-decoder framework.",
"We use the similar setting 3 for all experiments.",
"Transformer (Vaswani et al., 2017) is a strong baseline which has the state-of-the-art performance.",
"We reimplement this baseline 4 .",
"LightConv and DynamicConv (Wu et al., 2019) are simpler but effective baselines.",
"We directly report the results in the paper.",
"Multi-pass Baselines: Deliberation network (Xia et al., 2017) and SoftPrototype (Wang et al., 2019b) generates and polishes the raw text by a two-pass manner.",
"SB-NMT (Zhou et al., 2019a) is a synchronous bidirectional neural machine translation which predicts its outputs using two direction simultaneously.",
"ABD-NMT (Zhang et al., 2018) is an encoder-decoder NMT framework with the forward and backward decoder.",
"By considering the agreement of both directions left-to-right (L2R) and right-to-left (R2L), Rerank-NMT (Liu et al., 2016) rescores all candidates.",
"SBSG (Zhou et al., 2019b) is a synchronous bidirectional sequence generation model which predicts its translation from both sides to the middle simultaneously.",
"Insertion Transformer (Stern et al., 2019) is a non-monotonic method which predicts the translation 3 https://github.com/NVIDIA/DeepLearningExamples/tree/ master/PyTorch/Translation/GNMT 4 https://github.com/pytorch/fairseq De En BLEU GNMT (Wu et al., 2016) 31.44 RNMT+ (Chen et al., 2018) 34.51 ConvS2S (Gehring et al., 2017) 30.41 LightConv (Wu et al., 2019) 34.80 DynamicConv (Wu et al., 2019) 35.20 Rerank-NMT (Liu et al., 2016) 34.82 Transformer (our implementation) 34.43 ST-NMT (our proposed) 35.24 Table 2: BLEU-4 scores (%) on IWSLT14 De En task.",
"For the IWSLT14 German-English machine translation task, we present the results of the ST-NMT and other strong baselines in Table",
"2. We compare our method with other various methods, including GNMT, RNMT+, convS2S, LightConv, DynamicConv, and the Transformer model with the small setting.",
"The Rerank-NMT model gets 34.82 BLEU by using the two-pass results, including left-to-right (L2R) and right-to-left (R2L), and selects the best candidates.",
"As shown in Table 2, our model also significantly outperforms others and gains an improvement of 0.81 BLEU points than a strong Transformer baseline model.",
"Moreover, our method outperforms the GNMT by 3.80 BLEU points, ConvS2S by 4.83 BLEU, LightConv by 0.44 BLEU, Dynamic by 0.04 BLEU and Rerank-NMT by 0.42 BLEU.",
"We secondly evaluate our method on the LDC Chinese-English translation task.",
"The evaluation results on all NIST test sets against baselines are listed in Table",
"1. Our ST-NMT beats the other En De BLEU GNMT (Wu et al., 2016) 24.61 ConvS2S (Gehring et al., 2017) 25.16 Transformer (Vaswani et al., 2017) 28.40 RNMT+ (Chen et al., 2018) 28.49 Rerank-NMT (Liu et al., 2016) 27.81 ABD-NMT (Liu et al., 2016) 28.22 Deliberation Network (Xia et al., 2017) 29.11 SoftPrototype (Wang et al., 2019b) 29.46 SB-NMT (Zhou et al., 2019a) 29.21 SBSG (Zhou et al., 2019b) 27.45 Insertion Transformer (Stern et al., 2019) 27.41 Transformer (our implementation) 29.25 ST-NMT (our proposed) 29.68 Table 3: BLEU-4 scores (%) on WMT14 En De task.",
"baselines and outperforms the Transformer baseline by 1.14 BLEU point on average, which shows that the template could effectively improve the performance.",
"More specifically, our model outperforms the Transformer model by 0.76 BLEU on NIST2003, 1.52 BLEU on NIST 2005, 0.91 BLEU on NIST 2008, and 1.39 BLEU on NIST 2012.",
"We further demonstrate the effectiveness of our model on WMT14 English-German translation tasks, and we also compare our model with other competitive models, including ABD-NMT (Zhang et al., 2018), Deliberation Network (Xia et al., 2017), SoftPrototype (Wang et al., 2019b), SB-NMT (Zhou et al., 2019a) and SBSG (Zhou et al., 2019b).",
"As shown in Table 3, our model also significantly outperforms others and gets an improvement of 0.43 BLEU points than a strong Transformer model.",
"To investigate the effect of our approach on the different language pairs, we also evaluate 1 2 3 4 5 6 7 8 The number of templates 29.2 29.3 29.4 29.5 29.6 29.7 BLEU 29.68 29.54 29.44 29.48 29.62 29.55 29.34 29.22 ST-NMT Figure 4: The effect of the multiple templates.",
"our model on the Japanese-Chinese translation task.",
"According to Table 4, ST-NMT outperforms GNMT by 3.72 BLEU points, ConvS2S by 2.52 BLEU points, and the Transformer model by 0.82 BLEU points, which demonstrates that the soft template extracted by constituency-based parse tree can also bring strong positive effects.",
"Because of the diversity of the templates, we investigate the performance with the different num-bers of the templates.",
"On top of the original parallel training data D = { ( x ( i ) , y ( i ) ) } Ni =1 , we construct the training data from the source text to the soft target template DX T = { ( x ( i ) , t ( i ) ) } Ni =1 , by the model P X T .",
"Through this construction procedure, we could use the top-K results of the beam search as multiple templates by model P X T .",
"We could expand the training data of the source text to the target template as DX T = { ( x (1) , t (1) top 1 ) , . . . , ( x (1) , t (1) top K ) , . . . , ( x ( N ) , t ( N ) top 1 ) , . . . , ( x ( N ) , t ( N ) top K ) } .",
"As shown in Figure 4, our model gains the best performance only using the single template.",
"When the number of templates is 8, our model gains the worst BLEU score of 29.22.",
"We can summarize that our model can be more robust but maybe get worse performance with the number of templates rising.",
"Besides, in order to further improve the stability of our model, we expand the dataset by selecting random templates for the source sentence.",
"The different templates confuse our model, although it can make our model more robust.",
"With the value rising, the contribution of template information gradually decreases.",
"We study the influence of the ratio .",
"To investigate the effect of this hyper-parameter, we set the discrete value = { 10% , 20% , 30% , 40% , 50% , 60% , 70% , 80% , 90% , 100% } .",
"According to Figure 5, when the switches from 0.4 to 0.9, our model can get the better performance which is greater than or equal to 29.3 BLEU.",
"The results show that we can set the hyper-parameter in a reasonable interval ( 0 . 4 0 . 9 ) to keep the balance between source text and template.",
"Considering that the template derived from the specific depth can lead to the divergent performance, our model is examined with the different depth.",
"The effect of the template extraction which is described as Section 3 is decided by the sub-tree which is controlled by the depth of sub-tree.",
"For the same constituency-based parse tree, the different sub-tree can be obtained based on the different chosen depth d .",
"When we get the sub-tree, the template could be derived from it.",
"The depth of the constituency-based parse tree is decided by a simple but effective strategy as formula: d = min(max( L , 1 ) , 2 ) (8) where L is the length of the input sentence, 1 is the lower bound, 2 is the upper bound depth MT03 MT05 MT08 MT12 0.10 45.92 45.01 36.55 35.34 0.15 46.56 46.04 37.53 35.99 0.20 46.02 45.20 37.08 35.82 0.25 46.27 44.83 36.88 35.64 0.30 46.08 45.02 36.72 35.54 0.35 46.22 44.92 36.84 35.51 0.40 46.32 45.40 36.94 35.61 Table 5: The results of the different depth on NIST2003, NIST2005, NIST2008 and NIST2012.",
"of the sub-tree and is the ratio of the length of source sentence.",
"When the approximates 1 .",
"0 , the template contains more target tokens and less tags.",
"In addition, we tune the depth on the LDC training data and list the results.",
"According to the Table 5, the soft templates of the specific depth provide helpful information to the translation procedure when the = 0 .",
"15 in the LDC dataset.",
"To measure contribution of the predicted soft target template for final translation, we calculate the overlapping words between the template and the translation.",
"Table 6 gives the specific overlapping words ratio on the different test sets including NIST2003, NIST2005, NIST2008 and NIST2012.",
"The overlapping ratio is calculated by the following formula: ratio = (cid:80) w T min ( Count y ( w ) , Count t ( w )) (cid:80) w T Count t ( w ) (9) where Count y ( ) and Count t ( ) denote the number of w in the target translation Y and the template T , and w is the words in the target language.",
"The overlapping ratio represents the correlation between the predicted template T and the target translation Y .",
"According to Table 6, the correlation between the template T and the translation Y is highly relevant which demonstrates the contribution of our template to the final translation.",
"To further illustrate which aspects of NMT are improved by the target soft template, we provide a Chinese-English translation example shown in 7.",
"Templates provide the structural and grammatical information of the target sentence.",
"For instance, Chinese source sentence , , , our model first predicts the target template on the other hand , if NP VP , we will VP , and then generate the final translation on the other hand , if we react too much, we will be hit by them.",
"Our target template provides the sentence pattern If sb. do sth, sb. will be done.",
"Our method introduces the constituency-based parse tree and utilizes the constituency grammar to distinguish terminal and non-terminal nodes.",
"Therefore, our model can automatically learn sentence patterns, including grammatical and structural information.",
"Many types of encoder-decoder architecture (Bah-danau et al., 2015; Wu et al., 2016; Gehring et al., 2017; Vaswani et al., 2017; Chen et al., 2018) have been proposed in the past few years.",
"Furthermore, Transformer enhances the capability of NMT in capturing long-distance dependencies based on these backbone models, including CNN-based, RNN-based, and Transformer based architecture.",
"To improve the quality of the translation, many authors have endeavored to adopt multi-pass generation decoding method, their models first predict the rough translation and then generate the final translation based on the previous draft (Niehues et al., 2016; Chatterjee et al., 2016; Junczys-Dowmunt and Grundkiewicz, 2017; Xia et al., 2017; Geng et al., 2018; Wang et al., 2019b).",
"Besides, some works (Liu et al., 2016; Zhang et al., 2018; Zhou et al., 2019b,a) use the right-to-left (R2L) and left-to-right (L2R) to improve the quality of machine translation.",
"Non-Autoregressive decoding (Ghazvininejad et al., 2019) first predicts the target tokens and masked tokens, which will be filled in the next iterations.",
"Then, the model predicts the unmasked tokens on top of the source text and a mixed translation consisting of the masked and unmasked tokens.",
"Semi-autoregressive also (Akoury et al., 2019) predicts chunked fragments or the unmasked tokens based on the tree structure before the final translation.",
"In addition, there are many existing works (Eriguchi et al., 2016; Aharoni and Goldberg, 2017; Wu et al., 2017; Wang et al., 2018; Dong and Lapata, 2018; Wang et al., 2018; Gu et al., 2018) which incorporate syntax information or the tree structure into NMT to improve the quality of translation results.",
"In this work, we propose a novel approach that utilizes source text and additional soft templates.",
"More specifically, our approach can extract the templates from the sub-tree, which derives from the specific depth of the constituency-based parse tree.",
"Then, we use a Transformer model to predict the soft target templates conditioned on the source text.",
"On top of soft templates and source text, we incorporate the template information to guide the translation procedure.",
"We compare our soft-template neural machine translation (ST-NMT) with other baselines on four benchmarks and multiple language pairs.",
"Experimental results show that our ST-NMT significantly improves performance on these datasets.",
"This work was supported in part by the National Natural Science Foundation of China (Grant Nos.U1636211, 61672081,61370126), the Beijing Advanced Innovation Center for Imaging Technology (Grant No.BAICIT2016001), and the Fund of the State Key Laboratory of Software Development Environment (Grant No.SKLSDE2019ZX-17)."
] | [
"abstain",
"objective",
"method",
"method",
"objective",
"abstain",
"other",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"other",
"abstain",
"result",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"method",
"abstain",
"method",
"result",
"other"
] |
[
"We propose PIGLeT : a model that learns physical commonsense knowledge through interaction, and then uses this knowledge to ground language.",
"We factorize PIGLeT into a physical dynamics model, and a separate language model.",
"Our dynamics model learns not just what objects are but also what they do : glass cups break when thrown, plastic ones don't.",
"We then use it as the interface to our language model, giving us a unified model of linguistic form and grounded meaning.",
"PIGLeT can read a sentence, simulate neurally what might happen next, and then communicate that result through a literal symbolic representation, or natural language.",
"Experimental results show that our model effectively learns world dynamics, along with how to communicate them.",
"It is able to correctly forecast what happens next given an English sentence over 80% of the time, outperforming a 100x larger, text-to-text approach by over 10%.",
"Likewise, its natural language summaries of physical interactions are also judged by humans as more accurate than LM alternatives.",
"We present comprehensive analysis showing room for future work.",
"As humans, our use of language is linked to the physical world.",
"To process a sentence like the robot turns on the stove, with a pan on it (Figure",
"1) we might imagine a physical Pan object.",
"This meaning representation in our heads can be seen as a part of our commonsense world knowledge, about what a Pan is and does.",
"We might reasonably predict that the Pan will become Hot and if there's an Egg on it, it would become cooked .",
"As humans, we learn such a commonsense world model through interaction.",
"Young children learn to reason physically about basic objects by manipulating them: observing the properties they have, Language Model PIGLeT t t+1 The robot turns on the stove, with a pan on it.",
"and how they change if an action is applied on them (Smith and Gasser, 2005).",
"This process is hypothesized to be crucial to how children learn language: the names of these elementary objects become their first real words upon which other language is scaffolded (Yu and Smith, 2012).",
"In contrast, the dominant paradigm today is to train large language or vision models on static data , such as language and photos from the web.",
"Yet such a setting is fundamentally limiting, as suggested empirically by psychologists' failed attempts to get kittens to learn passively (Held and Hein, 1963).",
"More recently, though large Transformers have made initial progress on benchmarks, they also have frequently revealed biases in those same datasets, suggesting they might not be solving underlying tasks (Zellers et al., 2019b).",
"This has been argued philosophically by a flurry of reThe robot throws the vase onto the coffee table.",
"cent work arguing that no amount of language form could ever specify language meaning (McClelland et al., 2019; Bender and Koller, 2020; Bisk et al., 2020); connecting back to the Symbol Grounding Problem of Harnad (1990).",
"In this paper, we investigate an alternate strategy for learning physical commonsense through interaction, and then transferring that into language.",
"We introduce a model named PIGLeT , short for P hysical I nteraction as G rounding for L anguag e T ransformers.",
"We factorize an embodied agent into an explicit model of world dynamics, and a model of language form.",
"We learn the dynamics model through interaction .",
"Given an action heatUp applied to the Pan in Figure 1, the model learns that the Egg on the pan becomes Hot and Cooked , and that other attributes do not change.",
"We integrate our dynamics model with a pretrained language model, giving us a joint model of linguistic form and meaning .",
"The combined PIGLeT can then reason about the physical dynamics implied by English sentences describing actions, predicting literally what might happen next.",
"It can then communicate that result either symbolically or through natural language, generating a sentence like The egg becomes hot and cooked.",
"Our separation between physical dynamics and language allows the model to learn about physical commonsense from the physical world itself, while also avoiding recurring problems of artifacts and biases that arise when we try to model physical world understanding solely through language. We study this through a new environment and evaluation setup called PIGPeN , short for P hysical I nteraction G rounding P air e d with N atural Language. In PIGPeN , a model is given unlimited access to an environment for pretraining, but only 500 examples with paired English annotations. Models in our setup must additionally generalize to novel unseen' objects for which we intentionally do not provide paired language-environment supervision.",
"We build this on top of the THOR environment (Kolve et al., 2017), a physics engine that enables agents to perform contextual interactions (Fig",
"2) on everyday objects.",
"Experiments confirm that PIGLeT performs well at grounding language with meaning.",
"Given a sentence describing an action, our model predicts the resulting object states correctly over 80% of the time, outperforming even a 100x larger model (T5-11B) by over 10%.",
"Likewise, its generated natural language is rated by humans as being more correct than equivalently-sized language models.",
"Last, it can generalize in a zero-shot' way to objects that it has never read about before in language.",
"In summary, we make three key contributions.",
"First , we introduce PIGLeT , a model decoupling physical and linguistic reasoning.",
"Second , we introduce PIGPeN , to learn and evaluate the transfer of physical knowledge to the world of language.",
"Third , we perform experiments and analysis suggesting promising avenues for future work.",
"We introduce PIGPeN as a setting for learning and evaluating physically grounded language understanding.",
"An overview is shown in Figure 2.",
"The idea is that an agent gets access to an interactive 3D environment, where it can learn about the world through interaction for example, that objects such as a Vase can become Broken if thrown.",
"The goal for a model is to learn natural language meaning grounded in these interactions.",
"Task definition.",
"Through interaction, an agent observes the interplay between objects o 2 O (rep-resented by their attributes) and actions a 2 A through the following transition: { o 1 , . . . , o N } | {z } ~ o , state pre-action a !",
"To encourage learning from interaction, and not just language, an agent is given a small number of natural language annotations of transitions.",
"We denote these sentences as s ~ o , describing the state pre-action, s a the action, and s ~ o 0 the state post-action respectively.",
"During evaluation, an agent will sometimes encounter new objects o that were not part of the paired training data.",
"We evaluate the model's transfer in two ways: a .",
"PIGPeN -NLU.",
"A model is given object states ~ o , and an English sentence s a describing an action.",
"It must predict the grounded object states ~ o 0 that result after the action is taken.",
"b .",
"PIGPeN -NLG.",
"A model is given object states ~ o and a literal action a .",
"It must generate a sentence s ~ o 0 describing the state post-action.",
"We next describe our environment, feature representation, and language annotation process.",
"We use AI2-THOR as an environment for this task (Kolve et al., 2017).",
"In THOR, a robotic agent can navigate around and perform rich contextual interactions with objects in a house.",
"For instance, it can grab an Apple , slice it, put it in a Fridge , drop it, and so on.",
"The state of the Apple , such as whether it is sliced or cold, changes accordingly; this is not possible in many other environments.",
"In this work, we use the underlying THOR simulator as a proxy for grounded meaning.",
"Within THOR, it can be seen as a complete' meaning representation (Artzi et al., 2013), as it fully specifies the kind of grounding a model can expect in its perception within THOR.",
"Objects.",
"The underlying THOR representation of each object o is in terms of 42 attributes; we provide a list in Appendix B. We treat these attributes as words specific to an attribute-level dictionary; for example, the temperature Hot is one of three possible values for an object's temperature; the others being Cold and RoomTemp .",
"Actions.",
"An action a in THOR is a function that takes up to two objects as arguments.",
"Actions are highly contextual, affecting not only the arguments but potentially other objects in the scene (Figure 2).",
"We also treat action names as words in a dictionary.",
"Filtering out background objects.",
"Most actions change the state of only a few objects, yet there can be many objects in a scene.",
"We keep annotation and computation tractable by having models predict (and humans annotate) possible changes of at most two key objects in the scene.",
"As knowing when an object doesn't change is also important, we include non-changing objects if fewer than two change.",
"Exploration.",
"Any way of exploring the environment is valid for our task, however, we found that exploring intentionally was needed to yield good coverage of interesting states.",
"Similar to prior work for instruction following (Shridhar et al., 2020), we designed an oracle to collect diverse and interesting trajectories { ~ o , a , ~ o 0 } .",
"Our oracle randomly selects one of ten high level tasks, see Appendix B for the list.",
"These in turn require randomly choosing objects in the scene; e.g. a Vase and a Laptop in Figure 2.",
"We randomize the manner in which the oracle performs the task to discover diverse situations.",
"In total, we sampled 20k trajectories.",
"From these we extracted 280k transitions (Eqn 1's) where at least one object changes state, for training.",
"We select 2k action state-changes from trajectories held out from the training set.",
"We select them while also balancing the distribution of action types to ensure broad coverage in the final dataset.",
"We are also interested in a model's ability to generalize to new object categories beyond what it has read about, or observed in a training set.",
"We thus select 30 objects to be unseen, and exclude these from paired environment-language training data.",
"We sample 500 state transitions, containing only seen objects to be the training set; we use 500 for validation and 1000 for testing.",
"Workers on Mechanical Turk were shown an environment in THOR before and after a given action a .",
"Each view contains the THOR attributes of the two key objects.",
"Workers then wrote three English sentences, corresponding to s ~ o , s a , and s ~ o 0 respectively.",
"Workers were instructed to write at a particular level of detail: enough so that a reader could infer what happens next from s ~ o and s a , yet without mentioning redundant attributes.We provide more details in Appendix C. 3 Modeling PIGLeT In this section, we describe our PIGLeT model.",
"First , we learn a neural physical dynamics model The robot is holding a glass vase.",
"from interactions, and second , integrate with a pretrained model of language form.",
"We take a neural, auto-encoder style approach to model world dynamics.",
"An object o gets encoded as a vector h o 2 R d o .",
"The model likewise encodes an action a as a vector h a 2 R d a , using it to manipulate the hidden states of all objects.",
"The model can then decode any object hidden representation back into a symbolic form.",
"We use a Transformer (Vaswani et al., 2017) to encode objects into vectors o 2 R d o , and then another to decode from this representation.",
"Encoder.",
"Objects o are provided to the encoder as a set of attributes, with categories c 1 ,..., c n .",
"Each attribute c has its own vocabulary and embedding E c .",
"For each object o , we first embed all the attributes separately and feed the result into a Transformer encoder T enc .",
"This gives us (with position embeddings omitted for clarity): h o = T enc E 1 ( o 1 ) , . . . , E c n ( o c n ) (2) Decoder.",
"We can then convert back into the original symbolic representation through a left-to-right Transformer decoder, which predicts attributes one-by-one from c 1 to c n .",
"This captures the inherent correlation between attributes, while making no in-dependence assumptions, we discuss our ordering in Appendix A.2.",
"The probability of predicting the next attribute o c i +1 is then given by: p ( o c i +1 | h o , o : c i )= T dec h o , E 1 ( o 1 ) ,..., E c i ( o c i ) (3) 3.1.2 Modeling actions as functions We treat actions a as functions that transform the state of all objects in the scene.",
"Actions in our environment take at most two arguments, so we embed the action a and the names of its arguments, concatenate them, and pass the result through a multilayer perceptron; yielding a vector representation h a .",
"Applying Actions.",
"We use the encoded action h a to transform all objects in the scene, obtaining updated representations h o 0 for each one.",
"We take a global approach, jointly transforming all objects.",
"This takes into account that interactions are contextual: turning on a Faucet might fill up a Cup if and only if there is one beneath it.",
"Letting the observed objects in the interaction be o 1 and o 2 , with encodings h o 1 and h o 2 respectively, we model the transformation via the following multilayer perceptron: [ h o 0 1 , h o 0 2 ] = MLP apply h a , h o 1 , h o 2 .",
"(4) The result can be decoded into symbolic form using the object decoder (Equation 3).",
"We train our dynamics model on ( ~ o , a , ~ o 0 ) transitions.",
"The model primarily learns by running ~ o , a through the model, predicting the updated output state h o 0 , and minimizing the cross-entropy of generating attributes of the real changed object ~ o 0 .",
"We also regularize the model by encoding objects ~ o , ~ o 0 and having the model learn to reconstruct them.",
"We weight all these cross-entropy losses equally.",
"We discuss our architecture in Appendix A.1; it uses 3-layer Transformers, totalling 17M parameters.",
"After pretraining our physical dynamics model, we integrate it with a Transformer Language Model (LM).",
"In our framework, the role of the LM will be to both encode natural language sentences of actions into a hidden state approximating h a , as well as summarizing the result of an interaction ( ~ o , a , ~ o 0 ) in natural language.",
"Choice of LM.",
"Our framework is compatible with any language model.",
"However, to explore the impact of pretraining data on grounding later in this paper, we pretrain our own with an identical architecture to the smallest GPT2 (Radford et al. (2019); 117M).",
"To handle both classification and generation well, we mask only part of the attention weights out, allowing the model to encode a prefix bidirectionally; it generates subsequent tokens left-to-right (Dong et al., 2019).",
"We pretrain the model on Wikipedia and books; details in Appendix D. We next discuss architectural details of performing the language transfer, along with optimization.",
"English actions to vector form.",
"Given a natural language description s a of an action a , like The robot throws the vase, for PIGPeN -NLU, our model will learn to parse this sentence into a neural representation h a , so the dynamics model can simulate the result.",
"We do this by encoding s a through our language model, TLM , with a learned linear transformation over the resulting (bidirectional) encoding.",
"The resulting vector h s a can then be used by Equation 4.",
"Summarizing the result of an action.",
"For PIGPeN -NLG, our model simulates the result of an action a neurally, resulting in a predicted hidden state h o for each object in the scene o .",
"To write an English summary describing what changed, we first learn a lightweight fused representation of the transition, aggregating the initial and final states, along with the action, through a multilayer perceptron.",
"For each object o i we have: h \u0000 o i = MLP \u0000 ([ h o i , h o 0 i , h a ]) .",
"We then use the sequence [ h \u0000 o 1 , h \u0000 o 2 ] as bidirectional context for our our LM to decode from.",
"Additionally, since our test set includes novel objects not seen in training, we provide the names of the objects as additional context for the LM generator (e.g. Vase, Laptop'); this allows a LM to copy those names over rather than hallucinate wrong ones.",
"Importantly we only provide the surface-form names, not underlying information about these objects or their usage as with few-shot scenarios in the recent GPT-3 experiments (Brown et al., 2020) necessitating that PIGLeT learns what these names mean through interaction.",
"Modeling text generation allows us to incorporate a new loss function, that of minimizing the log-likelihood of generating each s ~ o 0 given previous words and the result of Equation 5: p ( s post i +1 | s ~ o 0 , 1: i ) = TLM ( h \u0000 o 1 , h \u0000 o 2 , s ~ o 0 , 1: i ) .",
"using h o i as the corresponding hidden states.",
"For PIGPeN -NLU, where no generation is needed, optimizing Equation 5 is not strictly necessary.",
"However, as we will show later, it helps provide additional signal to the model, improving overall accuracy by several percentage points.",
"We test our model's ability to encode language into a grounded form ( PIGPeN -NLU), and decode that grounded form into language ( PIGPeN -NLG).",
"We first evaluate models by their performance on PIGPeN -NLU: given objects ~ o , and a sentence s a describing an action, a model must predict the resulting state of objects ~ o 0 .",
"We primarily evaluate models by accuracy; scoring how many objects for which they got all attributes correct.",
"We compare with the following strong baselines: a .",
"No Change: this baseline copies the initial state of all objects ~ o as the final state ~ o 0 .",
"b .",
"GPT3-175B (Brown et al., 2020), a very large language model for few-shot' learning using a prompt.",
"For GPT3, and other text-to-text models, we encode and decode the symbolic object states in a JSON-style dictionary format, discussed in Appendix A.4.",
"c .",
"T5 (Raffel et al., 2019).",
"With this model, we use the same text-to-text' format, however here we train it on the paired data from PIGPeN .",
"We consider varying sizes of T5, from T5-Small the closest in size to PIGLeT , up until T5-11B, roughly 100x the size.",
"d .",
"(Alberti et al., 2019)-style.",
"This paper originally proposed a model for VCR (Zellers et al., Model Accuracy (%) Val Test Overall Seen Unseen No Change 27.4 25.5 29.9 24.0 t e x t t o t e x t GPT3-175B (Brown et al., 2020) 23.8 22.4 22.4 21.4 T5-11B (Raffel et al., 2019) 68.5 64.2 79.5 59.1 T5-3B 66.6 63.3 77.1 58.7 T5-Large 56.5 54.1 69.2 49.1 T5-Base 56.0 53.9 69.2 48.8 T5-Small 39.9 36.2 57.0 38.0 BERT s t y l e Alberti et al.2019, Pretrained Dynamics 61.3 53.9 71.4 48.1 Alberti et al.2019 9.7 6.8 16.2 3.7 G&D2019, Pretrained Dynamics 43.8 35.3 60.9 26.9 G&D2019 15.1 11.3 23.1 7.3 PIGLeT 81.8 81.1 83.8 80.2 Attribute-level accuracy (Test-Overall,%) size distance mass Temperature isBroken 8-way 8-way 8-way 3-way boolean 83.2 84.1 96.3 86.0 94.8 73.7 77.0 89.5 84.2 94.7 83.9 88.9 94.3 95.4 98.1 81.6 90.0 94.0 95.6 98.4 81.8 84.6 94.3 96.3 95.8 81.1 87.5 93.6 96.1 96.5 82.2 84.9 93.8 89.6 93.5 87.7 87.6 97.5 93.4 97.5 53.4 43.6 84.0 88.1 95.1 83.0 86.9 94.0 93.7 97.4 68.6 47.3 82.2 88.3 95.8 92.3 91.9 99.2 99.8 99.0 Table 1: Overall results .",
"2019a), where grounded visual information is fed into a BERT model as tokens; the transformer performs the grounded reasoning.",
"We adapt it for our task by using our base LM and feeding in object representations from our pretrained object encoder, also as tokens.",
"Our object decoder predicts the object, given the LM's pooled hidden state.",
"This is pretrained dynamics, we also consider a version without a randomly initialized dynamics model.",
"e .",
"(Gupta and Durrett, 2019)-style.",
"Thiso paper proposes using Transformers to model physical state, for tasks like entity tracking in recipes.",
"Here, the authors propose decoding a physical state attribute (like isCooked ) by feeding the model a label-specific [CLS] token, and then mapping the result through a hidden layer.",
"We do this and use a similar object encoder as our (Alberti et al., 2019)-style baseline.",
"We discuss hyperparameters in Appendix A.3.",
"Results.",
"From the results (Table 1), we can draw several patterns.",
"Our model, PIGLeT performs best at getting all attributes correct; doing so over 80% on both validation and test sets, even for novel objects not seen during training.",
"The next closest model is T5-11B, which scores 68% on validation.",
"Though when evaluated on objects seen' during training it gets 77%, that number drops by over 18% for unseen objects.",
"On the other hand, PIGLeT has a modest gap of 3%.",
"This suggests that our approach is particularly effective at connecting unpaired language and world representations.",
"At Model Accuracy (val;%) PIGLeT , No Pretraining 10.4 PIGLeT , Non-global MLP apply 72.0 PIGLeT , Global MLP apply 78.5 PIGLeT , Global MLP apply , Gen. loss (6) 81.8 PIGLeT , Symbols Only (Upper Bound) 89.3 Table 2: Ablation study on PIGPeN -NLU's validation set.",
"the other extreme, GPT3 does poorly in its few-shot' setting, suggesting that size is no replacement for grounded supervision.",
"PIGLeT also outperforms BERT style' approaches that control for the same language model architecture, but perform the physical reasoning inside the language transformer rather than as a separate model.",
"Performance drops when the physical decoder must be learned from few paired examples (as in Gupta and Durrett (2019)); it drops even further when neither model is given access to our pretrained dynamics model, with both baselines then underperforming No Change.' This suggests that our approach of having a physical reasoning model outside of an LM is a good inductive bias.",
"In Table 2 we present an ablation study of PIGLeT 's components.",
"Of note, by using a global representation of objects in the world (Equation 4), we get over 6% improvement over a local representation where objects are manipulated independently.",
"We get another 3% boost by adding a generation loss, suggesting that learning to generate summaries helps the model better connect the world to language.",
"Last, we benchmark how much headroom there is on PIGPeN -NLU by evaluating model performance on a symbols only' version of the task, where the symbolic action a is given explicitly to our dynamics model.",
"This upper bound is roughly 7% higher than PIGLeT , suggesting space for future work.",
"Next, we turn to PIGPeN -NLG: given objects ~ o and the literal next action a , a model must generate a sentence s ~ o 0 describing what will change in the scene.",
"We compare with the following baselines: a .",
"T5.",
"We use a T5 model that is given a JSON-style dictionary representation of both ~ o and a , it is finetuned to generate summaries s ~ o 0 .",
"b .",
"LM Baseline.",
"We feed our LM hidden states h o from our pretrained encoder, along with its representation of a .",
"The key difference between it and PIGLeT is that we do not allow it to simulate neurally what might happen next MLP apply is never used here.",
"Size matters.",
"Arguably the most important factor controlling the fluency of a language generator is its size (Kaplan et al., 2020).",
"Since our LM could also be scaled up to arbitrary size, we control for size in our experiments and only consider models the size of GPT2-base (117M) or smaller; we thus compare against T5-small as T5-Base has 220M parameters.",
"We discuss optimization and sampling hyperparameters in Appendix A.3.",
"Evaluation metrics.",
"We evaluate models over the validation and test sets.",
"We consider three main evaluation metrics: BLEU (Papineni et al., 2002) with two references, the recently proposed BERTScore (Zhang et al., 2020), and conduct a human evaluation.",
"Humans rate both the fluency of post-action text, as well as its faithfulness to true action result, on a scale from \u0000 1 to 1 .",
"Results.",
"We show our results in Table 3.",
"Of note, PIGLeT is competitive with T5 and significantly outperforms the pure LM baseline, which uses a pretrained encoder for object states, yet has the physical simulation piece MLP apply removed.",
"This suggests that simulating world dynamics not only allows the model to predict what might happen Model BLEU BERTScore Human (test; [ 9 1 , 1] ) Val Test Val Test Fluency Faithfulness T5 46.6 43.4 82.2 81.0 0.82 0.15 LM Baseline 44.6 39.7 81.6 78.8 0.91 -0.13 PIGLeT 49.0 43.9 83.6 81.3 0.92 0.22 Human 44.5 45.6 82.6 83.3 0.94 0.71 Table 3: Text generation results on PIGPeN -NLG, showing models of roughly equivalent size (up to 117M parameters).",
"next, it leads to more faithful generation as well.",
"We show two qualitative examples in Figure 4, covering both PIGPeN -NLU as well as PIGPeN -NLG.",
"In the first row, the robot empties a held Mug that is filled with water.",
"PIGLeT gets the state, and generates a faithful sentence summarizing that the mug becomes empty.",
"T5 struggles somewhat, emptying the water from both the Mug and the (irrelevant) Sink .",
"It also generates text saying that the Sink becomes empty, instead of the Mug.",
"In the second row, PIGLeT correctly predicts the next object states, but its generated text is incomplete it should also write that the mug becomes filled wtih Coffee.",
"T5 makes the same mistake in generation, and it also underpredicts the state changes, omitting all changes to the Mug .",
"We suspect that T5 struggles here in part because Mug is an unseen object.",
"T5 only experiences it through language-only pretraining, but this might not be enough for a fully grounded representation.",
"The language models that perform best today are trained on massive datasets of text.",
"However, this has unintended consequences (Bender et al., 2021) and it is unlike how children learn language, with children learning novel words from experience (Carey and Bartlett, 1978).",
"The large scale of our pretraining datasets might allow models to learn to perform physical-commonsense like tasks for wrong reasons, overfitting to surface patterns rather than learning meaningful grounding.",
"We investigate the extent of this by training a zero-shot' version of our backbone LM on Wikipedia and books the only difference is that The sink is now empty.",
"In this setting, not only must PIGLeT learn to ground words like mug,' it must do so without having seen the word mug' during pretraining.",
"This is signifi-cant because we count over 20k instances of Mug' words (including morphology) in our dataset.",
"We show results in Figure 5.",
"A version of PIGLeT with the zero-shot LM does surprisingly well achieving 80% accuracy at predicting the state changes for Mug despite never having been pretrained on one before.",
"This even outperforms T5 at the overall task.",
"Nevertheless, PIGLeT outperforms it by roughly 7% at unseen objects, with notable gains of over 10% on highly dynamic objects like Toaster s and Sink s.",
"work, we study language grounding and commonsense",
"commonsense reasoning at the representation and concept level.",
"The aim is to train models that learn to acquire concepts more like humans, rather than performing well on a downstream task that (for humans) requires commonsense reasoning.",
"Thus, this work is somewhat different versus other 3D embodied tasks like QA (Gordon et al., 2018; Das et al., 2018), along with past work for measuring such grounded commonsense reasoning, like SWAG, HellaSWAG, and VCR (Zellers et al., 2018, 2019b,a).",
"The knowledge covered is different, as it is self-contained within THOR.",
"While VCR, for instance, includes lots of visual situations about what people are doing, this paper focuses on learning the physical properties of objects.",
"Zero-shot generalization .",
"There has been a lot of past work involved with learning zero-shot': often learning about the grounded world in language, and transferring that knowledge to vision.",
"Techniques for this include looking at word embeddings (Frome et al., 2013) and dictionary defini-tions (Zellers and Choi, 2017).",
"In this work, we propose the inverse.",
"This approach was used to learn better word embeddings (Gupta et al., 2019) or semantic tuples (Yatskar et al., 2016), but we consider learning a component to be plugged into a deep Transformer language model.",
"Past work evaluating these types of zero-shot generalization have also looked into how well models can compose concepts in language together (Lake and Baroni, 2018; Ruis et al., 2020).",
"Our work considers elements of compositional-ity through grounded transfer.",
"For example, in PIGPeN -NLG, models must generate sentences about the equivalent of dropping a dax', despite never having seen one before.",
"However, our work is also contextual, in that the outcome of dropping a dax' might depend on external attributes (like how high we're dropping it from).",
"Structured Models for Attributes and Objects .",
"The idea of modeling actions as functions that transform objects has been explored in the computer vision space (Wang et al., 2016).",
"Past work has also built formal structured models for connecting vision and language (Matuszek et al., 2012; Krishnamurthy and Kollar, 2013), we take a neural approach and connect today's best models of language form to similarly neural models of a simulated environment.",
"In this paper, we presented an approach PIGLeT for jointly modeling language form and meaning.",
"We presented a testbed PIGPeN for evaluating our model, which performs well at grounding language to the (simulated) world.",
"We thank the reviewers for their helpful feedback, and the Mechanical Turk workers for doing a great job in annotating our data.",
"Thanks also to Zak Stone and the Google Cloud TPU team for help with the computing infrastructure.",
"This work was supported by the DARPA CwC program through ARO (W911NF-15-1-0543), the DARPA MCS program through NIWC Pacific (N66001-19-2-4031), and the Allen Institute for AI."
] | [
"objective",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"objective",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"method",
"other",
"method",
"other",
"other",
"other",
"objective",
"method",
"other",
"method",
"other",
"abstain",
"other",
"other",
"method",
"method",
"method",
"other",
"other",
"other"
] |
[
"Transfer learning approaches for Neural Machine Translation (NMT) trains a NMT model on an assisting language-target language pair (parent model) which is later fine-tuned for the source language-target language pair of interest (child model), with the target language being the same.",
"In many cases, the assisting language has a different word order from the source language.",
"We show that divergent word order adversely limits the benefits from transfer learning when little to no parallel corpus between the source and target language is available.",
"To bridge this divergence, we propose to pre-order the assisting language sentences to match the word order of the source language and train the parent model.",
"Our experiments on many language pairs show that bridging the word order gap leads to major improvements in the translation quality in extremely low-resource scenarios.",
"Transfer learning for multilingual Neural Machine Translation (NMT) (Zoph et al., 2016; Dabre et al., 2017; Nguyen and Chiang, 2017) attempts to improve the NMT performance on the source to target language pair (child task) using an assisting source language (assisting to target language translation is the parent task).",
"Here, the parent model is trained on the assisting and target language parallel corpus and the trained weights are used to initialize the child model.",
"If source-target language pair parallel corpus is available, the child model can further be fine-tuned.",
"The weight initialization reduces the requirement on the training data for the source-target language pair by transferring knowledge from the parent task, thereby improving the performance on the child task.",
"However, the divergence between the source and the assisting language can adversely impact the benefits obtained from transfer learning.",
"Multiple studies have shown that transfer learning works best when the languages are related (Zoph et al., 2016; Nguyen and Chiang, 2017; Dabre et al., 2017).",
"Zoph et al. (2016) studied the influence of language divergence between languages chosen for training the parent and the child model, and showed that choosing similar languages for training the parent and the child model leads to better improvements from transfer learning.",
"Several studies have tried to address the lexical divergence between the source and the target languages either by using Byte Pair Encoding (BPE) as basic input representation units (Nguyen and Chiang, 2017) or character-level NMT system (Lee et al., 2017) or bilingual embeddings (Gu et al., 2018).",
"However, the effect of word order divergence and its mitigation has not been explored.",
"In a practical setting, it is not uncommon to have source and assisting languages with different word order.",
"For instance, it is possible to find parallel corpora between English (SVO word order) and some Indian (SOV word order) languages, but very little parallel corpora between Indian languages.",
"Hence, it is natural to use English as an assisting language for inter-Indian language translation.",
"To address the word order divergence, we propose to pre-order the assisting language sentences (SVO) to match the word order of the source language (SOV).",
"We consider an extremely resource-constrained scenario, where there is no parallel corpus for the child task.",
"From our experiments, we show that there is a significant increase in the translation accuracy for the unseen source-target language pair.",
"To the best of our knowledge, no work has addressed word order divergence in transfer learning",
"for multilingual NMT.",
"However, some work exists for other NLP tasks in a multilingual setting.",
"For Named Entity Recognition (NER), Xie et al. (2018) use a self-attention layer after the Bi-LSTM layer to address word-order divergence for Named Entity Recognition (NER) task.",
"The approach does not show any significant improvements, possibly because the divergence has to be addressed before/during construction of the contextual embeddings in the Bi-LSTM layer.",
"Joty et al. (2017) use adversarial training for cross-lingual question-question similarity ranking.",
"The adversarial training tries to force the sentence representation generated by the encoder of similar sentences from different input languages to have similar representations.",
"Pre-ordering the source language sentences to match the target language word order has been found useful in addressing word-order divergence for Phrase-Based SMT ( Collins et al., 2005; Ra-manathan et al., 2008; Navratil et al., 2012; Chatterjee et al., 2014).",
"For NMT, Ponti et al. (2018) and Kawara et al. (2018) have explored preordering.",
"Ponti et al. (2018) demonstrated that by reducing the syntactic divergence between the source and the target languages, consistent improvements in NMT performance can be obtained.",
"On the contrary, Kawara et al. (2018) reported drop in NMT performance due to pre-ordering.",
"Note that these works address source-target divergence, not divergence between source languages in multilingual NMT scenario.",
"Consider the task of translating for an extremely low-resource language pair.",
"The parallel corpus between the two languages, if available may be too small to train an NMT model.",
"Similar to Zoph et al. (2016), we use transfer learning to overcome data sparsity between the source and the target languages.",
"We choose English as the assisting language in all our experiments.",
"In our resource-scarce scenario, we have no parallel corpus for training the child model.",
"Hence, at test time, the source language sentence is translated using the parent model after performing a word-by-word translation from source to the assisting language using a bilingual dictionary.",
"Since the source language and the assisting language (English) have different word order, we hypothesize that it leads to inconsistencies in the con-Before Reordering After Reordering S NP 0 VP V NP 1 S NP 0 VP NP 1 VS NP NNP Anurag VP MD will VP VB meet NP NNP Thakur S NP NNP Anurag VP NP NNP Thakur VP MD will VP VB meet Table 1: Example showing transitive verb before and after reordering (Adapted from Chatterjee et al. (2014)) textual representations generated by the encoder for the two languages.",
"Specifically, given an English sentence (SVO word order) and its translation in the source language (SOV word order), the encoder representations for words in the two sentences will be different due to different contexts of synonymous words.",
"This could lead to the attention and the decoder layers generating different translations from the same (parallel) sentence in the source or assisting language.",
"This is undesirable as we want the knowledge to be transferred from the parent model (assisting source ! target) to the child model (source ! target).",
"In this paper, we propose to pre-order English sentences (assisting language sentences) to match the source language word-order and train the parent model on the pre-ordered corpus.",
"Table 1 shows one of the pre-ordering rules (Ramanathan et al., 2008) used along with an example sentence illustrating the effect of pre-ordering.",
"This will ensure that context of words in the parallel source and assisting language sentences are similar, leading to consistent contextual representations across the source languages.",
"Pre-ordering may also be beneficial for other word order divergence scenarios ( e.g., SOV to SVO), but we leave verification of these additional scenarios for future work.",
"In this section, we describe the languages experimented with, datasets used, the network hyper-parameters used in our experiments.",
"Languages : We experimented with English !",
"Hindi translation as the parent task.",
"English is the assisting source language.",
"Bengali, Gujarati, Marathi, Malayalam and Tamil are the source languages, and translation from these to Hindi constitute the child tasks.",
"Hindi, Bengali, Gujarati and Marathi are Indo-Aryan languages, while Malayalam and Tamil are Dravidian languages.",
"All these languages have a canonical SOV word order.",
"Datasets : For training English-Hindi NMT sys-tems, we use the IITB English-Hindi parallel corpus (Kunchukuttan et al., 2018) ( 1 : 46 M sentences from the training set) and the ILCI English-Hindi parallel corpus ( 44 : 7 K sentences).",
"The ILCI (Indian Language Corpora Initiative) multilingual parallel corpus (Jha, 2010) 1 spans multiple Indian languages from the health and tourism domains.",
"We use the 520 -sentence dev-set of the IITB parallel corpus for validation.",
"For each child task, we use 2 K sentences from ILCI corpus as test set.",
"Network : We use OpenNMT-Torch (Klein et al., 2018) to train the NMT system.",
"We use the standard encoder-attention-decoder architecture (Bah-danau et al., 2015) with input-feeding approach (Luong et al., 2015).",
"The encoder has two layers of bidirectional LSTMs with 500 neurons each and the decoder contains two LSTM layers with 500 neurons each.",
"We use a mini-batch of size 50 and a dropout layer.",
"We begin with an initial learning rate of 1 : 0 and continue training with exponential decay till the learning rate falls below 0 : 001 .",
"The English input is initialized with pre-trained fastText embeddings (Grave et al., 2018) 2 .",
"English and Hindi vocabularies consists of 0 : 27 M and 50 K tokens appearing at least 2 and 5 times in the English and Hindi training corpus respectively.",
"For representing English and other source languages into a common space, we translate each word in the source language into English using a bilingual dictionary (we used Google Translate to get single word translations).",
"In an end-to-end solution, it would be ideal to use bilingual embeddings or obtain word-by-word translations via bilingual embeddings (Xie et al., 2018).",
"However, publicly available bilingual embeddings for English-Indian languages are not good enough for obtaining good-quality, bilingual representations (Smith et al., 2017; Jawanpuria et al., 2019) and publicly available bilingual dictionaries have limited coverage.",
"The focus of our study is the in-1 The corpus is available on request from http://",
"fluence of word-order divergence on Multilingual NMT.",
"We do not want bilingual embeddings quality or bilingual dictionary coverage to influence the experiments, rendering our conclusions unreliable.",
"Hence, we use the above mentioned large-coverage bilingual dictionary.",
"Pre-ordering : We use CFILT-preorder 3 for pre-reordering English sentences.",
"It contains two preordering configurations: (1) generic rules (G) that apply to all Indian languages (Ramanathan et al., 2008), and (2) hindi-tuned rules (HT) which improves generic rules by incorporating improvements found through error analysis of English-Hindi reordering (Patel et al., 2013).",
"The Hindi-tuned rules improve translation for other English to Indian language pairs too (Kunchukuttan et al., 2014).",
"We experiment with two scenarios:",
"(a) an extremely resource scarce scenario with no parallel corpus for child tasks,",
"(b) varying amounts of parallel corpora available for child task.",
"The results from our experiments are presented in the Table 2.",
"We report BLEU scores and LeBLEU 4 3 https://github.com/anoopkunchukuttan/ cfilt_preorder 4 LeBLEU (Levenshtein Edit BLEU) is a variant of BLEU that does a soft-match of reference and output words based English the treatment of migraine is done in two ways Gujarati (Original) (cid:230) .",
"scores.",
"We observe that both the pre-ordering models significantly improve the translation quality over the no-preordering models for all the language pairs.",
"The results support our hypothesis that word-order divergence can limit the benefits of multilingual translation.",
"Thus, reducing the word order divergence improves translation in extremely low-resource scenarios.",
"An analysis of the outputs revealed that preordering significantly reduced the number of UNK tokens (placeholder for unknown words) in the test output (Table 3).",
"We hypothesize that due to word order divergence between English and Indian languages, the encoder representation generated is not consistent leading to decoder generating unknown words.",
"However, the pre-ordered models generate better encoder representations leading to lesser number of UNK tokens and better translation, which is also reflected in the BLEU scores and Table",
"4. 5.2 Parallel Corpus for Child Task We study the impact of child task parallel corpus on pre-ordering.",
"To this end, we fine-tune the parent task model with the child task parallel corpus.",
"Table 5 shows the results for Bengali-Hindi , Gujarati-Hindi , Marathi-Hindi , Malayalam-Hindi , and Tamil-Hindi translation.",
"We observe that pre-ordering is beneficial when almost no child task corpus is available.",
"As the child task corpus increases, the model learns the on edit distance, hence it can handle morphological variations and cognates (Virpioja and Grnroos, 2015).",
"word order of the source language; hence, the non pre-ordering models perform almost as good as or sometimes better than the pre-ordered ones.",
"The non pre-ordering model is able to forget the word-order of English and learn the word order of Indian languages.",
"We attribute this behavior of the non pre-ordered model to the phenomenon of catastrophic forgetting (McCloskey and Cohen, 1989; French, 1999) which enables the model to learn the word-order of the source language when sufficient child task parallel corpus is available.",
"We also compare the performance of the fine-tuned model with the model trained only on the available source-target parallel corpus with randomly initialized weights (No Transfer Learning).",
"Transfer learning, with and without pre-ordering, is better compared to training only on the small source-target parallel corpus.",
"In this paper, we show that handling word-order divergence between the source and assisting languages is crucial for the success of multilingual NMT in an extremely low-resource setting.",
"We show that pre-ordering the assisting language to match the word order of the source language significantly improves translation quality in an extremely low-resource setting.",
"If pre-ordering is not possible, fine-tuning on a small source-target parallel corpus is sufficient to overcome word order divergence.",
"While the current work focused on Indian languages, we would like to validate the hypothesis on a more diverse set of languages.",
"We CorpusSize No TransferLearning No Pre-Order Pre-Ordered HT G Bengali -6.72 8.83 9.19 500 0.0 11.40 11.49 11.00 1000 0.0 13.71 13.84 13.62 2000 0.0 16.41 16.79 16.01 3000 0.0 17.44 18.42 y 17.82 4000 0.0 18.86 19.17 18.66 5000 0.07 19.58 20.15 y 19.82 10000 1.87 22.50 22.92 22.53 Gujarati -9.81 14.34 13.90 500 0.0 17.27 17.11 17.75 1000 0.0 21.68 22.12 21.45 2000 0.0 25.34 25.73 25.63 3000 0.29 27.48 27.77 27.83 4000 0.82 29.20 29.49 29.51 5000 0.0 29.87 31.09 y 30.58 y 10000 1.52 33.97 34.25 34.08 Marathi -8.77 10.18 10.30 500 0.0 12.84 13.61 y 12.97 1000 0.0 15.62 15.75 16.10 y 2000 0.0 18.59 19.10 18.67 3000 0.0 20.51 20.76 20.29 4000 0.24 21.78 21.77 21.39 5000 0.29 22.21 22.41 22.73 y 10000 7.90 25.16 25.88 25.36 Malayalam -5.73 6.49 6.95 500 0.0 5.40 5.54 6.17 y 1000 0.0 7.34 7.36 7.63 2000 0.0 8.24 8.66 y 8.31 3000 0.0 9.11 9.30 9.31 4000 0.0 9.65 9.91 9.87 5000 0.03 10.26 10.47 10.28 10000 0.0 11.96 11.85 11.63 Tamil -4.86 6.04 6.00 500 0.0 5.49 5.85 y 5.59 1000 0.0 7.04 7.23 7.44 y 2000 0.0 8.83 8.84 9.24 3000 0.0 9.80 10.04 9.56 4000 0.0 9.69 10.59 y 10.25 y 5000 0.03 10.84 10.93 10.69 10000 0.0 12.71 13.05 12.69 Table 5: Transfer learning results (BLEU) for Indian Language Hindi pair, fine-tuned with varying number of Indian Language Hindi parallel sentences.",
"require expensive parsing of the assisting language corpus.",
"Further, use of pre-ordering to address word-order divergence for multilingual training of other NLP tasks can be explored.",
"We would like to thank Raj Dabre for his helpful suggestions and comments."
] | [
"abstain",
"abstain",
"result",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"method",
"result",
"abstain",
"abstain",
"other"
] |
[
"With advances in neural language models, the focus of linguistic steganography has shifted from edit-based approaches to generation-based ones.",
"While the latter's payload capacity is impressive, generating genuine-looking texts remains challenging.",
"In this paper, we revisit edit-based linguistic steganography, with the idea that a masked language model offers an off-the-shelf solution.",
"The proposed method eliminates painstaking rule construction and has a high payload capacity for an edit-based model.",
"It is also shown to be more secure against automatic detection than a generation-based method while offering better control of the security/payload capacity tradeoff.",
"Steganography is the practice of concealing a message in some cover data such that an eavesdropper is not even aware of the existence of the secret message (Simmons, 1984; Anderson and Petitco-las, 1998).",
"While images, videos, and audio have been dominant cover media (Fridrich, 2009), natural language is a promising choice, thanks to the omnipresence of text (Bennett, 2004).",
"Formally, the goal of linguistic steganography is to create a steganographic system ( stegosystem ) with which the sender Alice encodes a secret message, usually in the form of a bit sequence, into a text and the receiver Bob decodes the message, with the requirement that the text is so natural that even if transmitted in a public channel, it does not arouse the suspicion of the eavesdropper Eve .",
"For a stegosystem that creates the text through transformation, we refer to the original text as the cover text and the modified text as the stego text .",
"A stegosystem has two objectives, security and payload capacity .",
"Security is the degree of how unsuspicious the stego text is while payload capacity is the size of the secret message relative to the size of the stego text.",
"The two objectives generally exhibit a trade-off relationship (Chang and Clark, 2014).",
"Edit-based approaches used to dominate the research on linguistic steganography.",
"Arguably, the most effective approach was synonym substitution (Chapman et al., 2001; Bolshakov, 2005; Taski-ran et al., 2006; Chang and Clark, 2014; Wilson and Ker, 2016), where a bit chunk was assigned to each member of a synonym group, for example, 0' to marry and 1' to wed .",
"The cover text She will marry him was then modified to the stego text She will wed him such that the latter carried the secret bit sequence 1'.",
"This conceptual simplicity was, however, overshadowed by the complexity of linguistic phenomena such as part-of-speech ambiguity, polysemy, and context sensitivity.",
"For this reason, edit-based approaches were characterized by the painstaking construction of synonym substitution rules, which were tightly coupled with acceptability checking mechanisms (see Chang and Clark (2014) for a review and their own elaborate method).",
"With all these efforts, edit-based stegosystems suffered from low payload capacity, for example, 2 bits per sentence (Chang and Clark, 2014).",
"With advances in neural language models (LMs), edit-based approaches have been replaced by generation-based ones (Fang et al., 2017; Yang et al., 2019; Dai and Cai, 2019; Ziegler et al., 2019; Shen et al., 2020).",
"In these approaches, bit chunks are directly assigned to the conditional probability distribution over the next word estimated by the LM, yielding impressive payload capacities of 15 bits per word (Shen et al., 2020).",
"However, it remains challenging for an LM to generate so genuine-looking texts that they fool both humans and machines (Ippolito et al., 2020) even if they do not encode secret messages.",
"It is also worth noting that generation-based stegosystems do not necessarily cut out the need for cover texts, as Ziegler et al. (2019) and Shen et al. (2020) Alice Cover text Masking strategy Secret message 1,0,1 Encoding strategy We completed the charitable task.",
"In this paper, we revisit edit-based linguistic steganography.",
"Our key idea is that a masked language model (masked LM), which was first introduced with BERT (Devlin et al., 2019), offers an off-the-shelf solution.",
"Usually treated as an intermediate model with no direct application, the masked LM drastically simplifies an edit-based stegosystem.",
"It eliminates painstaking rule construction because it readily offers a list of words applicable in the context.",
"As illustrated in Figure 1, all Alice and Bob have to share is the masking and encoding strategies in addition to the masked LM.",
"In our experiments, we showed that the proposed method had a high payload capacity for an edit-based model.",
"As expected, the amount was far smaller than those of generation-based models, but the proposed method offers better control of the se-curity/payload capacity trade-off.",
"We also demonstrated that it was more secure against automatic detection than a generation-based method although it was rated slightly lower by human adversaries.",
"Our code is available at https://github.com/ku-nlp/steganography-with-masked-lm.",
"The essential ingredient of the proposed edit-based stegosystem is a masked LM.",
"It was first introduced along with BERT (Devlin et al., 2019) as an effective pretraining strategy for the Transformer-based (Vaswani et al., 2017) neural net.",
"The pre-trained model is usually fine-tuned on downstream tasks, but for our purpose we keep it intact.",
"Given a text in which some tokens were replaced with the special token [MASK] , the masked LM is trained to recover the original tokens based only on their context.",
"As a result of the training, it provides a probability distribution over the vocabulary for each masked token according to the applicability in the given context.",
"Note that high probability items are not necessarily synonymous with the original tokens but nevertheless fit into the context.",
"Our key insight is that we can use these probability distributions to encode a secret message in the form of a bit sequence.",
"As shown in Figure 1, Alice and Bob share some encoding strategy with which bit chunks are assigned to some high probability items.",
"Alice creates a stego text by choosing items that correspond to the secret message.",
"Bob in turn decodes the secret message by selecting bit chunks that correspond to each token in the stego text.",
"The only remaining requirement for Alice is to share some masking strategy with Bob in advance so that Bob can correctly identify the tokens to be masked.",
"We have various design choices for masking and encoding strategies, which affect both security and payload capacity.",
"For masked LM training, BERT randomly masked about 15% of tokens in the input, but we need to ensure that both Alice and Bob mask the same tokens.",
"In this paper, we present a simple strategy.",
"As a general rule, we mask every one in f tokens in the input, but we skip tokens if they match any of the following criteria:",
"1. A punctuation or number.",
"2. A stopword.",
"3. A non-initial subword, which BERT's standard tokenizer marks with the initial ##.",
"Editing subwords is dangerous because there is no 100 percent guarantee that Bob's subword tokeniza-tion reproduces Alice's original segmentation.",
"For example, if ##break in the word un ##break ##able is replaced with #us, the subword tokenizer would segment the new word into un ##us-able, distorting the masking positions.",
"We will revisit this problem in Section 3.4.",
"The hyperparameter f is expected to control the security/payload capacity trade-off.",
"A large f lowers the payload capacity but is likely to increase the difficulty of detection.",
"We also anticipate that since the tokens we decide to skip do not have many good alternatives, not masking them is good for the stego text quality.",
"We use block encoding for simplicity.",
"For each masked token, we select and sort items whose probabilities are greater than p .",
"To avoid distorting masking positions, we drop items that are to be skipped in the masking phase.",
"Let n be the largest integer that satisfies 2 n c , where c is the number of the remaining items.",
"Each item is given a unique bit chunk of size n .",
"Coding is an active research topic (Dai and Cai, 2019; Ziegler et al., 2019; Shen et al., 2020) and is orthogonal to our core proposal.",
"We tested the proposed method with several config-urations and compared it with a generation-based method.",
"To assess security, we employed automatic discriminators and human adversaries.",
"BERT For the proposed edit-based method, we used BERT (Devlin et al., 2019) as the masked LM.",
"Specifically, we used Google's BERT Base, Cased model and Hugging Face's transformers package (Wolf et al., 2020) with default settings.",
"Given a random bit sequence as the secret message and a paragraph as the cover text, the model encoded bit chunks on a sentence-by-sentence basis.",
"When the bit chunks reached the end of the secret message, the process was terminated, discarding the remaining sentences in the given paragraph.",
"The last bit chunk usually exceeded the limit, and the remainder was filled with zeros.",
"GPT-2 Ziegler et al. (2019) built a state-of-the-art generation-based model on top of the GPT-2 neural LM (Radford et al., 2019).",
"We used their original implementation 1 to encode random bit sequences.",
"We set the option finish_sent to true to avoid terminating generation at the middle of a sentence.",
"We tested the temperature parameter = { 0 .",
"4 , 0 .",
"7 , 1 .",
"0 } .",
"Since the generation was conditioned on context sentences, we supplied the first three sentences of a paragraph.",
"Data We extracted paragraphs from the English part of the CC-100 dataset (Wenzek et al., 2020) and used them as the cover texts for BERT and as the contexts for GPT-2.",
"2 For each stegosystem, we also extracted texts that were comparable to the corresponding stego texts in terms of length.",
"We refer to them as real texts .",
"We trained discriminators to distinguish stego texts from real texts.",
"This corresponds to a situation unusually favorable to Eve as she has access to labeled data, though not to secret messages.",
"A practical reason for this is that after all, we cannot build discriminators without training data.",
"Besides, a stegosystem's performance is deemed satisfactory if it manages to fool the discriminator even under such disadvantageous conditions.",
"For each stegosystem, we fine-tuned the same BERT Base, Cased model on the binary classification task.",
"The details are explained in Appendix A. 3.3 Human Evaluation We asked Amazon Mechanical Turk 3 workers to give 5-point scale ratings on the stego and real 1 https://github.com/harvardnlp/NeuralSteganography 2 Ziegler et al. (2019) used the CNN/Dailymail (Hermann et al., 2015; Nallapati et al., 2016) as the contexts.",
"We found, however, that the resulting stego texts were excessively easy for automatic discriminators to distinguish from real news articles, presumably due to domain mismatch with a web corpus on which GPT-2 had been trained.",
"That is why we chose CC-100, a web corpus, in our experiments.",
"Note that this setting may have worked slightly against the proposed method because BERT was mainly trained on Wikipedia.",
"3 https://www.mturk.com/ Model Parameters Bits/word Acc BERT f = 3 p = 0 .",
"texts according to naturality and correctness.",
"Since we found a consistent bias toward shorter texts, we tuned each stegosystem's hyperparameters to generate stego texts with comparable length.",
"The details are explained in Appendix B. 3.4 Results Table 1 shows the result of automatic detection.",
"As expected, the proposed method, BERT, had a much lower payload capacity than the generation-based GPT-2 although it was high for an edit-based method.",
"In practical situations, however, security is given priority over payload capacity.",
"In this respect, BERT's performance was remarkable as its stego texts were nearly indistinguishable from real texts.",
"By contrast, GTP-2's stego texts were easily detectable for the discriminator even though they were much shorter than BERT's.",
"Figure 2 shows the effect of the masking interval parameter f , with p = 0 .",
"02 .",
"We can observe a clear trade-off between the two objectives.",
"Figure 3 indicates the effectiveness of the mask skipping heuristics explained in Section 2.2.",
"With f = 4 and p = 0 .",
"02 , masking stopwords and subwords not only raised detection accuracy but also lowered payload capacities.",
"Because these tokens did not have many good alternatives, they consumed only small bit chunks and simply damaged the stego text quality.",
"As we briefly discussed in Section 2.2, editing subwords may cause distortion in mask positions, leading to decoding failures.",
"We quantified the risk, with the hyperparameter settings of p = 0 .",
"02 and f = 3 .",
"We found that 1.41% of the masked tokens 0.00 0.05 0.10 0.15 0.20 0.25 Bits/word 0.50 0.55 0.60 0.65 0.70 A cc u r a c y Base +stopwords +stopwords,subwords Figure 3: The effect of mask skipping heuristics.",
"had substitution candidates that did not reproduce the original segmentations.",
"Although this danger applies equally to generation-based steganography built on top of subword LMs (Dai and Cai, 2019; Ziegler et al., 2019; Shen et al., 2020), to our knowledge, we are the first to point it out.",
"Figure 4 shows the effect of the probability threshold p .",
"Lowering the threshold increases the payload capacity because the number of alternative tokens increases.",
"It did sacrifice detection accuracy, but not as much as we expected.",
"As for human evaluation, Table 3 summarizes the results with average ratings.",
"Overall, both methods achieved high average ratings, almost equal to that of the real texts.",
"However, BERT slightly un-derperformed GPT-2.",
"We conjecture that the quality of the cover texts affected the edit-based method more directly than the generation-based method.",
"Following Ziegler et al. (2019), we initially used news articles for cover/real texts but switched to web texts because we noticed that the discriminator appeared to exploit the domain mismatch with a web corpus on which GPT-2 had been trained.",
"Considering the massive quality improvement efforts given to GPT-2's training data, however, there seems to be much room to improve the quality of CC-100 (Wenzek et al., 2020).",
"Table 2 shows good and bad stego texts produced by the BERT-based method.",
"In the first example, BERT successfully suggested context-aware words, Cover text Stego text Rating Switzerland also has an amazing scientific community that includes Geneva University and CERN, which is one of the top research institutes in the world and is home to the world 's largest particle physics laboratory .",
"e.g. Basel for a university in Switzerland.",
"In the second example, a single mistake, the unnatural repetition of negative , had a critical impact on human raters.",
"Finally, we confirmed that the current sentence-wise encoding created a risk of discrepancies between the first and second sentences.",
"Editing proper nouns like Geneva is prone to factual errors.",
"One may feel tempted to apply a part-of-speech tagger or a named entity tagger to skip proper nouns.",
"Just like subword substitution, however, a nave application of automatic analysis does not guarantee the sameness of the masking positions.",
"A good compromise with a guarantee of success in decoding is to skip words with capitalized letters.",
"Solving this problem at its source is an interesting direction for future research.",
"In this paper, we demonstrated that the masked language model could revolutionize edit-based linguistic steganography.",
"The proposed method is drastically simpler than existing edit-based methods, has a high payload capacity, and allows easy control of the security/payload capacity trade-off.",
"The masked language model is a general framework adopted by many BERT-like models, of which attempts to handle longer texts (Beltagy et al., 2020; Wang et al., 2020) are particularly relevant to steganography.",
"Tailoring the training procedure to steganography is also an interesting research direction.",
"This paper works on steganography.",
"Unlike cryptography, steganography conceals the fact that a secret message is being transmitted as well as its contents.",
"Steganography can be just fun, but it usually involves a conflict of interest between two parties: those who want to censor media and those who want to evade detection.",
"Depending on value judgments, either one or both can be evil.",
"Steganography is an effective tool to counter censorship in countries where encryption is illegal and visibly encrypted messages may be incriminating.",
"However, it can also be used to transfer malicious data.",
"As such, steganography can be seen as a dual-use technology."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"The main goal of machine translation has been to convey the correct content.",
"Stylistic considerations have been at best secondary.",
"We show that as a consequence, the output of three commercial machine translation systems (Bing, DeepL, Google) make demographically diverse samples from five languages sound older and more male than the original.",
"Our findings suggest that translation models reflect demographic bias in the training data.",
"These results open up interesting new research avenues in machine translation to take stylistic considerations into account.",
"Translating what is being said is arguably the most important aspect of machine translation, and has been the main focus of all its efforts so far.",
"However, how something is said also has an impact on how the final translation is perceived.",
"Mirkin et al. (2015) have pointed out that demographic aspects of language do play a role in translation, and could help in personalization.",
"As Vanmassenhove et al. (2018) have shown, gendered inflections like Sono stanco/a ( Italian I am tired) are an important aspect of correct translations.",
"In many cases, capturing the style of a document is equally important as its content: translating a lover's greeting as I am entirely pleased to see you might be semantically correct, but seems out of place.",
"Demographic factors (age, gender, etc.) all manifest in language, and therefore influ-ence style: we do not expect a 6-year old to sound like an adult, and would not translate a person to seem differently gendered.",
"However, in this paper, we show such a change is essentially what happens in machine translation: authors sound on average older and more male.",
"Prior work (Rabinovich et al., 2017) has shown that translation weakens the signal for gender prediction.",
"We substantially extend this analysis in terms of languages, demographic factors, and types of models, controlling for demographically representative samples.",
"We show the direction in which the predicted demographic factors differ in the translations, and find that there are consistent biases towards older and more male profiles.",
"Our findings suggest a severe case of overexposure to writings from these demographics (Hovy and Spruit, 2016), which creates a self-reinforcing loop.",
"In this paper, we use demographically-representative author samples from five languages (Dutch, English, French, German, Italian), and translate them with three commercially available machine translation systems (Google, Bing, and DeepL).",
"We compare the true demographics with the predicted demographics of each translation (as well as a control predictor trained on the same language).",
"Without making any judgment on the translation of the content, we find",
"a) that there are substantial discrepancies in the perceived demographics, and",
"b) that translations tend to make the writers appear older and considerably more male than they are.",
"Contributions We empirically show how translations affect the demographic profile of a text.",
"We release our data set at https://github.com/ MilaNLProc/translation_bias .",
"Our findings contribute to a growing literature on biases in NLP (see Shah et al. (2020) for a recent overview).",
"We use the Trustpilot data set from Hovy et al. (2015), which provides reviews in different languages, and includes information about age and gender.",
"We use only English, German, Italian, French, and Dutch reviews, based on two criteria:",
"1) availability of the language in translation models, and",
"2) sufficient data for representative samples (see below) in the corpus.",
"For the English data, we use US reviews, rather than UK reviews, based on a general prevalence of this variety in translation engines.",
"For each language, we restrict ourselves to reviews written in the respective language (accord-ing to langid 1 (Lui and Baldwin, 2012)) that have both age and gender information.",
"We use the CIA factbook 2 data on age pyramids to sample 200 each male and female.",
"We use the age groups given on the factbook, i.e., 1524, 2554, 5564, and 65+.",
"Based on data sparsity in the Trustpilot data, we do not include the under-15 age group.",
"This sampling procedure results in five test sets of about 400 instances each (the exact numbers vary slightly according to rounding and the proportions in the CIA factbook data), balanced for binary gender.",
"The exception is Italian, where the original data is so heavily skewed towards male reviews that even with downsampling, we only achieve a 48:52 gender ratio.",
"We then translate all non-English test sets into English, and the English test set into all other languages, using three commercially available machine translation tools: Bing, DeepL, and Google Translate.",
"We use all instances that are not part of any test set to create training data for the respective age and gender classifiers (see next section).",
"Since we want to compare across languages fairly, the training data sets need to be of comparable size.",
"We are therefore bounded by the size of the smallest available subset (Italian).",
"We sample about 2500 instances per gender, according to the respective age distributions.",
"This sampling results in about 5000 instances per language (again, the exact number varies slightly based on the availability of samples for each group and rounding).",
"We again subsample to approximate the actual age and gender distribution, since, according to Hovy et al. (2015), the data skews strongly male, while otherwise closely matching the official age distributions.",
"To assess the demographic profile of a text, we train separate age and gender classifiers for each language.",
"These classifiers allow us to compare the predicted profiles in the original language with the predicted profiles of the translation, and compare both to the actual demographics of the test data.",
"We use simple Logistic Regression models with L 2 regularization over 2-6 character-grams, and regularization optimized via 3-fold cross-validation.",
"3 The numbers in Table 1 indicate that both age and gender can be inferred reasonably well across all of the languages.",
"We use these classifiers in the following analyses.",
"For each non-English sample, we predict the age and gender of the author in both the original language and in each of the three English translations (Google, Bing, and DeepL).",
"I.e., we use the respective language's classifier described above (e.g., a classifier trained on German to predict German test data), and the English classifier described above for the translations.",
"E.g., we use the age and gender classifier trained on English data to predict the translations of the German test set.",
"For the English data, we first translate the texts into each of the other languages, using each of the three translation systems.",
"Then we again predict the author demographics in the original English test set (using the classifier trained on En-glish), as well as in each of the translated versions (using the classifier trained on the respective lan-guage).",
"E.g., we create a German, French, Italian, and Dutch translation with each Google, Bing, and DeepL, and classify both the original English and the translation.",
"We can then compare the distribution of age groups and genders in the predictions with the actual distributions.",
"If there is classifier bias , both 3 We also experimented with a convolutional neural network with attention, as well as with BERT-based input representations, but did not see significantly better results, presumably due to the higher number of parameters in each case.",
"the predictions based on the original language and the predictions based on the translations should be skewed in the same direction.",
"We can measure this difference by computing the Kullback-Leibler (KL) divergence of the predicted distribution from the true sample distribution.",
"In order to see whether the predictions differ statistically significantly from the original, we use a use a 2 contingency test and report significance at p < = 0 .",
"05 and p < = 0 .",
"01 .",
"If instead there is a translation bias , then the translated predictions should exhibit a stronger skew than the predictions based on the original language.",
"By using both translations from and into English, we can further tease apart the direction of this effect.",
"Translating into English Table 2 shows the results when translating into English.",
"It shows for each language the test gender ratio, the predicted ratio from classifiers trained in the same language, as well as their KL divergence from the ratio in the test set, and the ratio predictions and KL divergence on predictions of an English classifier on the translations from three MT systems.",
"For most languages, there exists a male bias in predictions of the original language.",
"The translated English versions create an even stronger skew.",
"The notable exception is French, which most translation engines render in a demographically faithful manner.",
"Dutch is slightly worse, followed by Italian (note, though, that the Italian data was so heavily imbalanced that we could not sample an even distribution for the test data).",
"Somewhat surprisingly, the gender skew is strongest for German, swinging by as much as 15 percentage points.",
"Translating from English Table 3 shows the results when translating from English into the various languages.",
"The format is the same as for Table 2.",
"Again we see large swings, normally exacerbating the balance towards men.",
"However, translating into German with all systems produces estimates that are a lot more female than the original data.",
"This result could be the inverse effect of what we observed above.",
"Again, there is little change for French, though we also see some female bias in two MT systems.",
"Figure 1 shows the kernel density plots for the four age groups in each language (rows) in the same language prediction, and in the English translation.",
"In all cases, the distributions are reasonably close, but in all cases, the predictions overestimate the most prevalent class.",
"To delve a bit deeper into this age mismatch, we also split up the sample by decade (i.e., seven classes: 10s, 20s, etc., up to 70s+).",
"Figure 2 shows the results.",
"The caveat here is that the overall performance is lower, due to the higher number of classes.",
"We also can not guarantee that the distribution still follows the true demographics, since we are subsampling within the larger classes given by the CIA factbook.",
"However, the results still strongly suggest that the observed mismatch is driven predominantly by overprediction of the 50s decade.",
"Because this decade often contributed strongly to the most frequent age category (2554), predictions did not differ as much from gold in the previous test.",
"It gold org.",
"also explains the situation of the Italian predictor.",
"In essence, English translations of all these languages, irrespective of the MT system, sound much older than they are.",
"All three tested commercial MT systems are close together in terms of performance.",
"However, they also seem to show the same systematic translation biases.",
"The most likely reason is the use of biased training data.",
"The fact that translations into English are perceived as older and more male than translations into other languages could indicate that there is a larger collection of unevenly selected data in English than for other languages.",
"The work by Rabinovich et al. (2017) is most similar to ours, in that they investigated the effect of translation on gender.",
"However, it differs in a few key points: they show that translation weakens the predictive power, but do not investigate the direction of false predictions.",
"We show that there is a definitive bias.",
"In addition, we extend the analysis to include age.",
"We also use various commercially available MT tools, rather than research systems.",
"Recent research has suggested that machine translation systems reflect cultural and societal biases (Stanovsky et al., 2019; Escude Font and Costa-juss`a, 2019), though mostly focusing on data selection and embeddings as sources.",
"Work by Mirkin et al. (2015); Mirkin and Meunier (2015) has set the stage for considering the impact of demographic variation (Hovy et al., 2015) and its integration in MT more general.",
"There is a growing literature on various types of bias in NLP.",
"For a recent overview, see Shah et al. (2020).",
"We test what demographic profiles author attribute tools predict for the translations from various commercially available machine translation tools.",
"We find that independent of the MT system and the translation quality, the predicted demographics differ systematically when translating into English.",
"On average, translations make the author seem substantially older and more male.",
"Translating from English into any of the other languages shows more mixed results, but similar tendencies.",
"The authors would like to thank Pietro Lesci, Serena Pugliese, and Debora Nozza, as well as the anonymous reviewers, for their kind suggestions.",
"The authors are members of the Bocconi Institute Figure 2: Density distribution and KL for decade prediction in various languages and different systems in original and when translated into English.",
"for Data Science and Analytics (BIDSA) and the Data and Marketing Insights (DMI) unit."
] | [
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"objective",
"method",
"method",
"method",
"result",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"objective",
"method",
"other",
"other",
"other",
"other",
"method",
"objective",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Adapter-based tuning has recently arisen as an alternative to fine-tuning.",
"It works by adding light-weight adapter modules to a pretrained language model (PrLM) and only updating the parameters of adapter modules when learning on a downstream task.",
"As such, it adds only a few trainable parameters per new task, allowing a high degree of parameter sharing.",
"Prior studies have shown that adapter-based tuning often achieves comparable results to fine-tuning.",
"However, existing work only focuses on the parameter-efficient aspect of adapter-based tuning while lacking further investigation on its effectiveness.",
"In this paper, we study the latter.",
"We first show that adapter-based tuning better mitigates forgetting issues than fine-tuning since it yields representations with less deviation from those generated by the initial PrLM.",
"We then empirically compare the two tuning methods on several downstream NLP tasks and settings.",
"We demonstrate that 1) adapter-based tuning outperforms fine-tuning on low-resource and cross-lingual tasks; 2) it is more robust to overfitting and less sensitive to changes in learning rates.",
"Large scale pretrained language models (PrLMs) (Devlin et al., 2019; Liu et al., 2019; Conneau et al., 2020a; Brown et al., 2020) have achieved state-of-the-art results on most natural language processing (NLP) tasks, where fine-tuning has become a dominant approach to utilize PrLMs.",
"A standard fine-tuning process copies weights from a PrLM and tunes them on a downstream task, which requires a new set of weights for each task.",
"Adapter-based tuning (Houlsby et al., 2019; Bapna and Firat, 2019) has been proposed as a Equally Contributed Linlin, Qingyu, Bosheng, Liying, and Jia-wei are under the Joint PhD Program between Alibaba and their corresponding universities.",
"more parameter-efficient alternative.",
"For NLP, adapters are usually light-weight modules inserted between transformer layers (Vaswani et al., 2017).",
"During model tuning on a downstream task, only the parameters of adapters are updated while the weights of the original PrLM are frozen.",
"Hence, adapter-based tuning adds only a small amount of parameters for each task, allowing a high degree of parameter-sharing.",
"Though using much less trainable parameters, adapter-based tuning has demonstrated comparable performance with full PrLM fine-tuning (Houlsby et al., 2019; Bapna and Firat, 2019; Stickland and Murray, 2019).",
"Existing work mostly focuses on the parameter-efficient aspect of adapters and attempt to derive useful applications from that, which is still the case in most recent works: Ruckle et al. (2020) explore methods to further improve the parameter and computation efficiency of adapters; Pfeiffer et al. (2020a) combine knowledge from multiple adapters to improve the performance on downstream tasks; Artetxe et al. (2020) and Pfeiffer et al. (2020c) leverage the modular architecture of adapters for parameter-efficient transfer to new languages or tasks, and Wang et al. (2020) utilize the same property for knowledge injection.",
"Besides parameter-efficiency, the unique characteristic of adapter-based tuning, with alternating frozen and learnable layers, might be directly useful for improving model performances.",
"However, this has not yet been discussed in the prior work.",
"In this paper, we first empirically demonstrate that adapter-based tuning better regularizes training than fine-tuning by mitigating the issue of forgetting.",
"We show that it yields representations with less deviation from those generated by the original PrLM.",
"Next, to see what this property of adapters will help when adapting PrLMs, we compare the performance of fine-tuning and adapter-based tuning on a wide range of datasets Self-attention Feed-forward Adapter Feed-forward Adapter + + Layer Norm + Transformer Layer Adapter Layer Norm Figure 1: The structure of the adapter adopted from Houlsby et al. (2019).",
"and NLP tasks.",
"Extensive experiments and analysis are conducted in different settings, including low-resource and high-resource, monolingual and cross-lingual.",
"Our main findings can be summarized as follows: For monolingual adaptation, adapter-based tuning yields better results in low-resource settings , especially when the task is more domain-specific.",
"With increasing training samples, the performance gain over fine-tuning is less significant ( 3).",
"Adapter-based tuning tends to outperform fine-tuning on zero-shot cross-lingual tasks under different amounts of training data ( 4).",
"Adapter-based tuning demonstrates higher stability and better generalization ability.",
"It is less sensitive to learning rates compared to fine-tuning ( 5).",
"When adapting a pretrained language model (PrLM), adapter-based tuning inserts light-weight neural networks (adapters) between the transformer layers of the PrLM, and only updates the parameters of the adapters on a downstream task, but keeps the ones of the PrLM frozen.",
"Unlike fine-tuning which introduces an entire new model for every task, one great advantage of adapter-based tuning is generating a compact 2 4 6 8 10 12 BERT Layer 0.0 0.2 0.4 0.6 0.8 1.0 RSAS i m il a r i t y Representation Space Comparison (SST-2) Fine-tune vs Base Adapter vs Base Figure 2: Comparison of the representations obtained at each layer before ( Base ) and after adapter-based tuning or fine-tuning on BERT-base using Representational Similarity Analysis (RSA).",
"Houlsby et al. (2019) have extensively studied the choices of adapter architectures and where they should be inserted into PrLMs.",
"They find that a stack of downand up-scale neural networks works well which only introduces a small amount of extra parameters to the network.",
"This design inspires most of the following work (Pfeiffer et al., 2020a,c; Bapna and Firat, 2019).",
"As shown in Figure 1, the adapter maps an input hidden vector h from dimension d to dimension m where m < d , and then re-maps it to dimension d .",
"We refer m as the hidden size of the adapter.",
"A skip-connection is employed inside the adapter network such that if the parameters of the projection layers are near zeros, the adapter module approximates an identity function.",
"Formally, given the input hidden vector h , the output vector h (cid:48) is calculated as: h (cid:48) = f 2 (tanh f 1 (h)) + h (1) in which f 1 ( ) and f 2 ( ) are the downand up-projection layers.",
"At each transformer layer, two adapters are inserted right after the self-attention and the feed-forward layers respectively.",
"During adapter tuning, only the parameters of the adapters, the normalization layers, and the final classification layer are updated.",
"We use the above described adapter configuration in all of our experiments, since it is adopted in most prior work with few modifications.",
"Fine-tuning large-scale PrLMs on downstream tasks can suffer from overfitting and bad generalization issues (Dodge et al., 2020; Phang et al., 2018).",
"Recently, Lee et al. (2020) propose Mixout to regularize the fine-tuning of PrLMs.",
"They show that Mixout avoids catastrophic forgetting and stabilizes the fine-tuning process by encouraging the weights of the updated model to stay close to the initial weights.",
"Since adapter-based tuning does not update the weights of PrLMs at all, we suspect that it has a similar effect of alleviating the issue of catastrophic forgetting.",
"Since the weights of the PrLM are the same before and after adapter-based tuning, to verify this, we use Representational Similarity Analysis (RSA) (Laakso and Cottrell, 2000) to assess the similarity of tuned representations to those without tuning at each transformer layer.",
"RSA has been widely used to analyze the similarity between two neural network outputs (Abnar et al., 2019; Chrupaa and Alishahi, 2019; Merchant et al., 2020), which works by creating two comparable sets of representations by inputting a same set of n samples to the two models.",
"For each set of representations, a n n pairwise similarity 1 matrix is calculated.",
"The final RSA similarity score between the two representation space is computed as the Pearson correlation between the flattened upper triangulars of the two similarity matrices.",
"We use a subset of GLUE tasks (Wang et al., 2018) for our analysis.",
"Given a task, we first perform adapter-based tuning and fine-tuning to adapt a BERT-base model ( M org ) to the target task, which yields models M adapt and M ft respectively (See Appendix A.2 for training details).",
"Then we pass sentences (or sentence-pairs depend on the task) from the development set to M org , M adapt , and M ft respectively.",
"We extract representations at each layer from the three models and select the corresponding representations of 5k randomly sampled tokens 2 ( n = 5000 ) for evaluation.",
"Note that the same set of tokens is used for all models.",
"Finally, we compare the representations obtained from M adapt or M ft to those from M org using RSA.",
"Figure 2 plots the results on STS-2, results of other tasks demonstrate a similar trend and can be found in Appendix A.3.",
"For both fine-tuning and adapter-based tuning, we observe that the repre-1 Cosine similarity is used 2 We skip [PAD], [CLS], [SEP] for token selection.",
"sentation change generally arises in the top layers of the network, which is consistent with previous findings that higher layers are more task relevant (Howard and Ruder, 2018).",
"It can be clearly observed that compared to fine-tuning, adapter-based tuning yields representations with less deviation from those of BERT-base at each layer, which verifies our claim that adapter-based tuning can better regularize the tuning process by mitigating the forgetting problem.",
"Apparently, this property of adapter tuning comes from that it freezes all the parameters of PrLMs.",
"And because of the skip-connection in the adapter, the hidden representation out of the adapter can mimic the input representation, in this way, some of the original knowledge of PrLMs (before injecting adapters) can be preserved.",
"Since we find that adapter-based tuning better regularizes the learning process, the next question is how this property will help to improve the performance when adapting PrLMs to downstream tasks.",
"We conduct extensive experiments to investigate this.",
"The remainder of this paper is organized as follows.",
"We compare fine-tuning and adapter-based tuning on monolingual text-level adaptation tasks in 3, followed by cross-lingual adaptation in",
"4. Further analysis about the training stability and generalization capabilities is shown in",
"5. 3 Monolingual Adaptation In this section, we first experiment with eight datasets as used in Gururangan et al. (2020) including both highand low-resource tasks ( 3.1).",
"We refer this set of tasks as Task Adaptation Evaluation ( TAE ).",
"We observe that adapter-based tuning consistently outperforms fine-tuning on low-resource tasks, while they perform similarly on high-resource tasks.",
"We further confirm the effectiveness of adapters in low-resource settings on the GLUE benchmark (Wang et al., 2018) ( 3.2).",
"TAE consists of four domains (biomedical, computer science, news text, and AMAZON reviews) and eight classification tasks (two in each domain), whose domain diversity makes it suitable to assess the adaptation effectiveness of different approaches.",
"Detailed data statistics are displayed in Appendix A.1.",
"We consider tasks with fewer than 5k training examples as low-resource tasks and the others as high-resource tasks.",
"Experimental Setup We perform supervised fine-tuning on RoBERTa-base as our baseline ( RoBa.-ft ).",
"For adapter-based tuning, we set the hidden size m of adapters to 256 ( RoBa.-adapter 256 ).",
"We also present the results of adding task-adaptive pretraining ( +TAPT ) (Gururangan et al., 2020).",
"In this setting, before fine-tuning or adapter-based tuning, the model was trained with a masked language modeling (MLM) objective on the training texts (without labels) of the task.",
"Note that in",
"RoBa.-adapter 256 +TAPT , we also use adapter-based tuning for TAPT where only the weights of adapters are updated at the TAPT stage.",
"This is to evaluate whether adapter-based tuning can work with unsupervised learning objectives.",
"We follow the experimental settings in Gururangan et al. (2020) for TAPT.",
"For fine-tuning and adapter-based tuning, we train models for 20 epochs to make sure they are sufficiently trained and save the checkpoint after each training epoch.",
"We select the checkpoint that achieves the best score on the validation set for evaluation on the test set.",
"The batch size is set to 16 for both methods.",
"The learning rate is set to 2e-5 for fine-tuning, and 1e-4 for adapter-based tuning.",
"See Appendix A.2 for the hyperparameter selection process and more training details.",
"Results Table 1 presents the comparison results.",
"We report the average result over 5 runs with different random seeds.",
"On four low-resource tasks, adapter-based tuning consistently outperforms fine-tuning and improves the average result by 1.9%.",
"Adapter-based tuning alone without TAPT even outperforms fine-tuning with TAPT.",
"Besides, adding TAPT before adapter-based tuning further improves the performance on 3 out of 4 low-resource tasks, which suggests that adapter-based tuning works with both supervised and unsupervised objectives.",
"Another finding is that when trained on high-resource tasks, both methods achieve similar results.",
"To verify the effects of training size, on high-resource tasks, we plot the performances with varying numbers of training examples in Figure 3.",
"The trend is consistent with our existing observations adapter-based tuning achieves better results when the training set is small while fine-tuning will gradually catch up with an increasing number of training examples.",
"To further validate that adapters tend to generalize better than fine-tuning under low-resource settings, we follow Zhang et al. (2021) to study low-resource adaptation using eight datasets from the GLUE benchmark (Wang et al., 2018) which covers four types of tasks: natural language inference (MNLI, QNLI, RTE), paraphrase detection (MRPC, QQP), sentiment classification (SST-2) and linguistic acceptability (CoLA).",
"Appendix A.1 provides detailed data statistics and descriptions.",
"we simulate two low-resource settings by randomly sampling 1k and 5k instances from the original training",
"data as the new training sets.",
"In each setting, we draw another 1k samples from the remaining training set as the validation set and instead use the original validation set as the test set, since the original GLUE test sets are not publicly available 3 .",
"We perform fine-tuning on BERT-base ( BERT-ft ) and RoBERTa-base ( RoBa.-ft ) respectively as our baselines.",
"We set the learning rate to 2e-5 and the batch size to 16 for BERT and RoBERTa fine-tuning experiments (See Appendix A.2 for details).",
"For adapters, we only tune its hidden sizes in { 64, 128, 256 } , setting the learning rate to 1e-4 and batch size to 16 as the same used in 3.1.",
"Results Table 2 presents the comparison results.",
"For adapter-based tuning, we report two results on each task.",
"One is obtained with the optimal hidden size which varies per dataset, and the other is obtained with the size of 64.",
"We observe that adapter-based tuning outperforms fine-tuning most of the time under both 1k and 5k settings.",
"In particular, the performance gain is more significant in 1k setting, where on average across all tasks, adapter-based tuning outperforms fine-tuning by 2.5% and 0.7% on BERT and RoBERTa respectively.",
"One consistent observation from 3.1 and 3.2 is that adapters tend to outperform fine-tuning on",
"3 Users are limited to a maximum of two submissions per day to obtain test results, which is inconvenient for a large number of runs",
"text-level classification tasks when the training set is small, but with more training samples, the ben-efit of adapters is less significant.",
"In low-resource setting, fine-tuning has more severe overfitting problem, since it has much more tunable parameters compared to adapter-tuning, so adapter-tuning works better than fine-tuning.",
"However, in high-resource setting, overfitting is not a big issue and model capacity counts more.",
"Obviously, the model capacity under fine-tuning is larger than that under adapter-tuning since fine-tuning can update much more model parameters.",
"When comparing the improvements of adapter tuning over fine-tuning on tasks from TAE ( 3.1) and GLUE ( 3.2), we find that the improvement is more significant on low-resource tasks from TAE on RoBERTa-base, the average improvement brought by adapters is 1.9% across four low-resource tasks from TAE, while the average improvement on GLUE is 0.7% and 0.4% in 1k and 5k settings respectively.",
"As indicated in Gururangan et al. (2020), the TAE dataset is more domain-specific and has less overlap with the corpus used for RoBERTa-base pretraining, one intuitive explanation for this observation is that fine-tuning has more severe forgetting and overfitting issues in domain adaptation where the target domain is dissimilar to the source domain in pretraining, thus adapter-based tuning is more preferable in this scenario.",
"In this section, we further compare fine-tuning and adapter-based tuning in the zero-shot cross-lingual transfer setting.",
"All experiments in this section are based on XLM-R-large (Conneau et al., 2020a), a recent SOTA multilingual PrLM covering 100 languages.",
"We conduct evaluations on a set of multilingual tasks from XTREME (Hu et al., 2020), including Universal Dependencies v2.5 tree banks (UD-POS) (Nivre et al., 2018), Wikiann NER (Pan et al., 2017), and cross-lingual natural language inference (XNLI) (Conneau et al., 2020b).",
"UD-POS contains 34 languages, Wikiann NER contains 40 languages, and XNLI contains 15 languages.",
"We refer the reader to Hu et al. (2020) for additional details about the datasets.",
"Experimental Setup On each task, we perform hyperparameter tuning on the English development set.",
"For both fine-tuning and adapter-based tuning, we use batch size 32, and tune the learning rates in { 1e-5, 2e-5, 3e-5, 4e-5, 5e-5 } .",
"For adapter-based tuning, we further tune the hidden sizes in { 64, 128, 256 } and find size 256 often performs the best.",
"We train and select models with the English training and development sets and then evaluate the tuned models on test sets of all languages.",
"See Appendix A.2 for hyperparameter and training details.",
"Results Table 3 summarizes the results.",
"To better compare cross-lingual transfer to different groups of languages, we present the average results of all languages ( All ), the target languages except English ( Target ), and the Non-Indo-European languages ( Distant ).",
"It can be observed that adapter-based tuning significantly outperforms fine-tuning Model TAE low GLUE 1 k XNLI full XNLI 5% finetune 78.52 69.86 78.64 75.09 Adapter 64 77.20 71.20 79.01 75.47 Adapter 128 79.29 71.09 79.24 75.83 Adapter 256 80.41 71.06 79.43 75.45 Table 5: Average test results with different adapter hidden sizes.",
"on all three settings for each task.",
"Specifically, adapter-based tuning outperforms the reported fine-tuning results (Hu et al., 2020) on Target and Distant by 2.06% and 3.71% on UD-POS, 1.08% and 0.8% on Wikiann NER, and 0.87% and 0.87% on XNLI.",
"See Appendix A.3 for detailed results on each language.",
"Note that UD-POS, Wikiann NER, and XNLI are all high-resource tasks, with 20k, 20k, and 400k training samples respectively.",
"Unlike monolingual tasks, adapters achieve consistent performance gains even under high-resource settings on cross-lingual tasks.",
"We suspect that the ability to mitigate forgetting is more useful in cross-lingual scenarios since the model knowledge of the target languages only comes from pretraining.",
"Adapter-based tuning can better maintain the knowledge.",
"We further investigate the effectiveness of adapter-based tuning on XNLI with smaller training sets.",
"Table 4 summarizes the results when trained on 5%, 10%, and 20% of the original training sets.",
"In all settings, adapters still demonstrate consistent improvements over fine-tuning.",
"Adapter Hidden Size The hidden size m 4 is the only adapter-specific hyperparameter.",
"As indicated in Houlsby et al. (2019), the hidden size provides a simple means to trade off performance with parameter efficiency.",
"Table 5 shows the performance with different hidden sizes, from which we find that increasing the hidden size may not always lead to performance gains.",
"For monolingual low-resource adaptation, TAE tasks prefer a larger hidden size, while the results on GLUE are similar across different hidden sizes.",
"We suspect that this is due to that TAE datasets are more dissimilar to the pretraining corpus, which requires relatively more trainable parameters to learn the domain-specific knowledge.",
"On XNLI, a larger hidden size helps improve the performance when the full data is used.",
"However, when only 5% training data is used, increasing the hidden size does not yield consistent improvements.",
"The results indicate that the optimal hidden size depends on both the domain and the training size of the task.",
"Learning Rate Robustness We compare the two tuning methods in terms of their stability w.r.t the learning rate.",
"Figure 4 shows the performance distributions on CoLA and MNLI under 1k and 5k settings.",
"The learning rates are varied in { 2e-5, 4e-5, 6e-5, 8e-5, 1e-4 } .",
"Each box in the plot is drawn from the results of 20 runs with different random seeds.",
"We observe that fine-tuning yields larger variances when increasing the learning rates.",
"It often collapses with learning rates larger than 4e-5 4 The fraction of adapter parameters w.r.t. BERT-base (110M parameters) is 2%, 4%, and 6% when m is set to 64, 128, and 256.",
"The fraction w.r.t. XLMR-large (550M parameters) is 1%, 2%, and 3%, respectively.",
"Results are based on BERT-base.",
"The original training and dev sets from GLUE are used for this analysis.",
"when RoBERTa-base is used.",
"Adapter-based tuning is more stable across a wider range of learning rates.",
"Overfitting and Generalization Here, we first study the robustness of adapter-based tuning to overfitting.",
"We use CoLA, MRPC, QNLI, and SST-2 with their original training and development sets for our analysis.",
"The CoLA and MRPC contain 8.5k and 3.7k training samples and are regarded as low-resource tasks.",
"The QNLI and SST-2 con-2 1 0 1 2 0 1 2 3 4 5 e v a l .",
"tain 104k and 67k training samples and are used as high-resource tasks.",
"We train the two low-resource tasks for 10k steps, and the high resource tasks for 60k steps with a batch size of 16.",
"We use BERT-base for all experiments.",
"Figure 5 plots the loss curves on dev sets w.r.t training steps.",
"We observe that models with fine-tuning can easily overfit on both lowand high-resource tasks.",
"Adapter-based tuning is more robust to overfitting.",
"Additional results on accuracy w.r.t. training steps and a similar analysis on XNLI are in Appendix A.3.",
"We also present the mean and best dev results across all evaluation steps in Table 6, where we perform an evaluation step every 20 training steps.",
"The mean results of adapter-based tuning consistently outperform those of fine-tuning.",
"The differences between the mean and the best values are also smaller with adapter-based tuning.",
"The results suggest that the performance of adapters is more stable over fine-tuning along the training process.",
"Training neural networks can be viewed as searching for a good minima in the non-convex landscape defined by the loss function.",
"Prior work (Hochreiter and Schmidhuber, 1997; Li et al., 2018) shows that the flatness of a local minima correlates with the generalization capability.",
"Thus, we further show the loss landscapes of the two tuning methods.",
"Following Hao et al. (2019), we plot the loss curve by linear interpolation between 0 and 1 with function f ( ) = L ( 0 + ( 1 0 )) , where 0 and 1 denote the model weights before and after tuning.",
"L ( ) is the loss function and is a scalar parameter.",
"In our experiments, we set the range of to [ 2 , 2] and uniformly sample 20 points.",
"Figure 6 shows the loss landscape curves on CoLA and SST based on BERT-base.",
"It shows that the minimas of adapter-based tuning are more wide and flat, which indicates that adapter-based tuning tends to generalize better.",
"efficient, when would adapter-based tuning be more effective than fine-tuning for PrLM adaptation?",
"Thus, we only use fine-tuning as our primary baseline in previous sections.",
"Here, for the sake of curiosity, we further compare adapter-based tuning to fine-tuning regularized by mixout (Lee et al., 2020) on a subset of GLUE tasks, since mixout similarly regularizes the learning process by mitigating the forgetting issue.",
"Specifically, it replaces all outgoing parameters from a randomly selected neuron to the corresponding parameters of the initial model without tuning, such that it reduces divergence from the initial model.",
"Following the suggestions in the paper, we conduct experiments by replacing all dropout modules in the network with mixout and set the mixout probability to 0 .",
"9 .",
"From the results in Table 7, we find that using adapter-based tuning alone yields the best results in most cases.",
"Applying mixout to fine-tuning improves the performance on CoLA and MRPC only.",
"However, applying it to adapters instead tends to degrade the performance.",
"We suspect that this is because the number of trainable parameters of adapters is very few to begin with.",
"Hence, further replacing a large percentage of them with their initial weights may weaken the learning ability.",
"Fine-tuning pretrained large scale language models has proven its effectiveness on a wide range of NLP tasks (Devlin et al., 2019; Liu et al., 2019; Conneau et al., 2020a; Brown et al., 2020).",
"However, fine-tuning requires a new set of weights for each task, which is parameter inefficient.",
"Adapter-based tuning is proposed to deal with this problem (Houlsby et al., 2019).",
"Most previous work has demonstrated that it achieves comparable performance to fine-tuning (Bapna and Firat, 2019; Pfeiffer et al., 2020b,a,c; Ruckle et al., 2020; Wang et al., 2020; Guo et al., 2020).",
"However, existing work mostly focuses on the parameter-efficient aspect while overlooks the effectiveness.",
"Fine-tuning PrLMs in a low-resource setting has been studied for a while (Dodge et al., 2020; Lee et al., 2020; Phang et al., 2018; Jiang et al., 2020; Zhang et al., 2021).",
"Previous work points out that with large-scale parameters, fine-tuning on a few samples can lead to overfitting and poor generalization, which makes the results unstable.",
"Phang et al. (2018) find that pretraining on an intermediate task can improve fine-tuning outcomes.",
"Jiang et al. (2020) improve the robustness of fine-tuning by controlling the model complexity and preventing aggressive updating.",
"On the other hand, catastrophic forgetting can appear when transferring a pretrained neural network (French, 1999; McCloskey and Cohen, 1989; Goodfellow et al., 2013), where the knowledge learned during pretraining is lost when adapting to downstream tasks.",
"This phenomenon often appears in NLP tasks (Mou et al., 2016; Arora et al., 2019).",
"To alleviate this problem when adapting pretrained language models, Howard and Ruder (2018) gradually unfreeze the layers starting from the last layer, and Sun et al. (2019) find that assigning a lower learning rate to the bottom layers can improve performance.",
"Lee et al. (2020) regularize learning by encouraging the weights of the updated model to stay close to the initial weights.",
"Aghajanyan et al. (2021) regularize fine-tuning by introducing noise to the input, which is similar to the adversarial training for fine-tuning studied in Zhu et al. (2020).",
"Mosbach et al. (2021) point out that the instability of fine-tuning lies in the optimizer and propose replacing the Adam optimizer with a de-biased version.",
"Chen et al. (2020) propose a mechanism to recall the knowledge from pretraining tasks.",
"Prior work often focuses on the parameter-efficiency aspect while overlooking the effectiveness of adapter-based tuning.",
"We empirically demonstrate that adapter-based tuning can better regularize the learning process.",
"We conduct extensive experiments to verify its effectiveness and conclude that 1) it tends to outperform fine-tuning on both low-resource and cross-lingual tasks; 2) it demonstrates higher stability under different learning rates compared to fine-tuning.",
"We hope our study will inspire more future work on PrLM adaptation based on adapters and other methods that only tune part of the PrLM parameters.",
"Linlin Liu would like to thank the support from Interdisciplinary Graduate School, Nanyang Technological University."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"result",
"result",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"objective",
"abstain",
"other"
] |
[
"Abstract",
"Compared to the general news domain, information extraction (IE) from biomedical text requires much broader domain knowledge.",
"However, many previous IE methods do not utilize any external knowledge during inference.",
"Due to the exponential growth of biomedical publications, models that do not go beyond their fixed set of parameters will likely fall behind.",
"Inspired by how humans look up relevant information to comprehend a scientific text, we present a novel framework that utilizes external knowledge for joint entity and relation extraction named KECI (Knowledge-Enhanced Collective Inference).",
"Given an input text, KECI first constructs an initial span graph representing its initial understanding of the text.",
"It then uses an entity linker to form a knowledge graph containing relevant background knowledge for the entity mentions in the text.",
"To make the final predictions, KECI fuses the initial span graph and the knowledge graph into a more refined graph using an attention mechanism.",
"KECI takes a collective approach to link mention spans to entities by integrating global relational information into local representations using graph convolutional networks.",
"Our experimental results show that the framework is highly effective, achieving new state-of-the-art results in two different benchmark datasets: BioRelEx (binding interaction detection) and ADE (adverse drug event extraction).",
"For example, KECI achieves absolute improvements of 4.59% and 4.91% in F1 scores over the state-of-the-art on the BioRelEx entity and relation extraction tasks 1 .",
"With the accelerating growth of biomedical publications, it has become increasingly challenging to manually keep up with all the latest articles.",
"As a result, developing methods for automatic extraction of biomedical entities and their relations has attracted much research attention recently (Li et al., 2017; Fei et al., 2020; Luo et al., 2020).",
"(The code is publicly available at https://github.com/laituan245/bio_relex. Figure 1 shows an example in the BioRelEx dataset.)",
"Many related tasks and datasets have been introduced, ranging from binding interaction detection (BioRelEx) (Khachatrian et al., 2019) to adverse drug event extraction (ADE) (Gurulingappa et al., 2012).",
"Many recent joint models for entity and relation extraction rely mainly on distributional representations and do not utilize any external knowledge source (Eberts and Ulges, 2020; Ji et al., 2020; Zhao et al., 2020).",
"However, different from the general news domain, information extraction for the biomedical domain typically requires much broader domain-specific knowledge.",
"Biomedical documents, either formal (e.g., scientific papers) or informal ones (e.g., clinical notes), are written for domain experts.",
"As such, they contain many highly specialized terms, acronyms, and abbreviations.",
"In the BioRelEx dataset, we find that about 65% of the annotated entity mentions are abbreviations of biological entities, and an example is shown in Figure 1.",
"(Figure 2: KECI operates in three main steps: (1) initial span graph construction, (2) background knowledge graph construction, and (3) fusion of these two graphs into a final span graph.)",
"These unique characteristics bring great challenges to general-domain systems and even to existing scientific language models that do not use any external knowledge base during inference (Beltagy et al., 2019; Lee et al., 2019).",
"For example, even though SciBERT (Beltagy et al., 2019) was pretrained on 1.14M scientific papers, our baseline SciBERT model still incorrectly predicts the type of the term UIM in Figure 1 to be DNA, which should be a Protein Motif instead.",
"Since the biomedical literature is expanding at an exponential rate, models that do not go beyond their fixed set of parameters will likely fall behind.",
"In this paper, we introduce KECI (Knowledge-Enhanced Collective Inference), a novel end-to-end framework that utilizes external domain knowledge for joint entity and relation extraction.",
"Inspired by how humans comprehend a complex piece of scientific text, the framework operates in three main steps (Figure 2).",
"KECI first reads the input text and constructs an initial span graph representing its initial understanding of the text.",
"In a span graph, each node represents a (predicted) entity mention, and each edge represents a (predicted) relation between two entity mentions.",
"KECI then uses an entity linker to form a background knowledge graph containing all potentially relevant biomedical entities from an external knowledge base (KB).",
"For each entity, we extract its semantic types, its definition sentence, and its relational information from the external KB.",
"Finally, KECI uses an attention mechanism to fuse the initial span graph and the background knowledge graph into a more refined graph representing the final output.",
"Different from previous methods that link mentions to entities based solely on local contexts (Li et al., 2020b), our framework takes a more collective approach to link multiple semantically related mentions simultaneously by leveraging global topical coherence.",
"Our hypothesis is that if multiple mentions co-occur in the same discourse and are likely to be semantically related, their reference entities should also be connected in the external KB.",
"KECI integrates global relational information into mention and entity representations using graph convolutional networks (GCNs) before linking.",
"The benefit of collective inference can be illustrated by the example shown in Figure 2.",
"The entity linker proposes two candidate entities for the mention FKBP12; one is of semantic type AA, Peptide, or Protein and the other is of semantic type Gene or Genome.",
"It can be tricky to select the correct candidate as FKBP12 is already tagged with the wrong type in the initial span graph (i.e., it is predicted to be a Chemical instead of a Protein).",
"However, because of the structural resemblance between the mention pair ⟨FK506, FKBP12⟩ and the pair ⟨Organic Chemical, AA, Peptide, or Protein⟩, KECI will link FKBP12 to the entity of semantic type AA, Peptide, or Protein.",
"As a result, the final predicted type of FKBP12 will also be corrected to Protein in the final span graph.",
"Our extensive experimental results show that the proposed framework is highly effective, achieving new state-of-the-art biomedical entity and relation extraction performance on two benchmark datasets: BioRelEx (Khachatrian et al., 2019) and ADE (Gurulingappa et al., 2012).",
"For example, KECI achieves absolute improvements of 4.59% and 4.91% in F1 scores over the state-of-the-art on the BioRelEx entity and relation extraction tasks.",
"Our analysis also shows that KECI can automatically learn to select relevant candidate entities without any explicit entity linking supervision during training.",
"Furthermore, because KECI considers text spans as the basic units for prediction, it can extract nested entity mentions.",
"KECI considers text spans as the basic units for feature extraction and prediction.",
"This design choice allows us to handle nested entity mentions (Sohrab and Miwa, 2018).",
"Also, joint entity and relation extraction can be naturally formulated as the task of extracting a span graph from an input document (Luan et al., 2019).",
"In a span graph, each node represents a (predicted) entity mention, and each edge represents a (predicted) relation between two entity mentions.",
"Given an input document D , KECI first enumerates all the spans (up to a certain length) and embeds them into feature vectors (Sec. 2.2).",
"With these feature vectors, KECI predicts an initial span graph and applies a GCN to integrate initial relational information into each span representation (Sec. 2.3).",
"KECI then uses an entity linker to build a background knowledge graph and applies another GCN to encode each node of the graph (Sec. 2.4).",
"Finally, KECI aligns the nodes of the initial span graph and the background knowledge graph to make the final predictions (Sec. 2.5).",
"We train KECI in an end-to-end manner without using any additional entity linking supervision (Sec. 2.6).",
"Overall, the design of KECI is partly inspired by previous research in educational psychology.",
"Students' background knowledge plays a vital role in guiding their understanding and comprehension of scientific texts (Alvermann et al., 1985; Braasch and Goldman, 2010).",
"Activating relevant and accurate prior knowledge will aid students' reading comprehension.",
"Our model first constructs a contextualized representation for each input token using SciBERT (Beltagy et al., 2019).",
"Let $\mathbf{X} = (\mathbf{x}_1, \ldots, \mathbf{x}_n)$ be the output of the token-level encoder, where $n$ denotes the number of tokens in $D$.",
"Then, for each span $s_i$ whose length is not more than $L$, we compute its span representation $\mathbf{s}_i \in \mathbb{R}^d$ as: $\mathbf{s}_i = \mathrm{FFNN}_g\big(\big[\mathbf{x}_{\mathrm{START}(i)}, \mathbf{x}_{\mathrm{END}(i)}, \hat{\mathbf{x}}_i, \phi(s_i)\big]\big) \quad (1)$ where $\mathrm{START}(i)$ and $\mathrm{END}(i)$ denote the start and end indices of $s_i$ respectively.",
"$\mathbf{x}_{\mathrm{START}(i)}$ and $\mathbf{x}_{\mathrm{END}(i)}$ are the boundary token representations.",
"$\hat{\mathbf{x}}_i$ is an attention-weighted sum of the token representations in the span (Lee et al., 2017).",
"$\phi(s_i)$ is a feature vector denoting the span length.",
"$\mathrm{FFNN}_g$ is a feedforward network with ReLU activations.",
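"A minimal sketch of this span representation (Eq. 1) in PyTorch; the module layout and names are illustrative assumptions, not KECI's actual code:",
```python
import torch
import torch.nn as nn

class SpanRepresentation(nn.Module):
    """Builds s_i = FFNN_g([x_start, x_end, x_hat, phi(s_i)]) as in Eq. 1."""
    def __init__(self, token_dim, width_dim, out_dim, max_width):
        super().__init__()
        self.attn_score = nn.Linear(token_dim, 1)          # head-finding attention
        self.width_emb = nn.Embedding(max_width, width_dim)
        self.ffnn_g = nn.Sequential(
            nn.Linear(3 * token_dim + width_dim, out_dim), nn.ReLU())

    def forward(self, x, start, end):
        # x: (n, token_dim) token encodings; start/end: span boundary indices
        tokens = x[start:end + 1]
        weights = torch.softmax(self.attn_score(tokens), dim=0)
        x_hat = (weights * tokens).sum(dim=0)              # attention-weighted sum
        phi = self.width_emb(torch.tensor(end - start))    # assumes width < max_width
        return self.ffnn_g(torch.cat([x[start], x[end], x_hat, phi]))
```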
"With the extracted span representations, we predict the type of each span and also the relation between each span pair jointly.",
"Let $E$ denote the set of entity types (including non-entity), and $R$ denote the set of relation types (including non-relation).",
"We first classify each span $s_i$: $\mathbf{e}_i = \mathrm{Softmax}\big(\mathrm{FFNN}_e(\mathbf{s}_i)\big) \quad (2)$ where $\mathrm{FFNN}_e$ is a feedforward network mapping from $\mathbb{R}^d \to \mathbb{R}^{|E|}$.",
"We then employ another network to classify the relation of each span pair $\langle s_i, s_j \rangle$: $\mathbf{r}_{ij} = \mathrm{Softmax}\big(\mathrm{FFNN}_r\big(\big[\mathbf{s}_i, \mathbf{s}_j, \mathbf{s}_i \circ \mathbf{s}_j\big]\big)\big) \quad (3)$ where $\circ$ denotes the element-wise multiplication and $\mathrm{FFNN}_r$ is a mapping from $\mathbb{R}^{3d} \to \mathbb{R}^{|R|}$.",
"We will use the notation $\mathbf{r}_{ij}[k]$ to refer to the predicted probability of $s_i$ and $s_j$ having the relation $k$.",
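"Eq. 2 and Eq. 3 can be sketched as follows (single linear layers stand in for the feedforward networks, and all dimensions are placeholders):",
```python
import torch
import torch.nn as nn

d, num_ent, num_rel = 256, 34, 4   # placeholder sizes; |E| and |R| include null labels

ffnn_e = nn.Linear(d, num_ent)     # span classifier (Eq. 2)
ffnn_r = nn.Linear(3 * d, num_rel) # relation classifier (Eq. 3)

def classify(s_i, s_j):
    e_i = torch.softmax(ffnn_e(s_i), dim=-1)               # entity type distribution
    pair = torch.cat([s_i, s_j, s_i * s_j], dim=-1)        # [s_i, s_j, s_i ∘ s_j]
    r_ij = torch.softmax(ffnn_r(pair), dim=-1)             # relation distribution
    return e_i, r_ij
```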
"At this point, one can already obtain a valid output for the task from the predicted entity and relation scores.",
"However, these predictions are based solely on the local document context, which can be difficult to understand without any external domain knowledge.",
"Therefore, our framework uses these predictions only to construct an initial span graph that will be refined later based on information extracted from an external knowledge source.",
"To maintain computational efficiency, we first prune out spans of text that are unlikely to be entity mentions.",
"We only keep up to $\lambda n$ spans with the lowest probability scores of being a non-entity.",
"The value of $\lambda$ is selected empirically and set to be 0.5.",
"Spans that pass the filter are represented as nodes in the initial span graph.",
"For every span pair (cid:104) s i , s j (cid:105) , we create | R | directed edges from the node representing s i to the node representing s j .",
"Each edge represents one relation type and is weighted by the corresponding probability score in r ij .",
"Let $G_s = \{V_s, E_s\}$ denote the initial span graph.",
"We use a bidirectional GCN (Marcheggiani and Titov, 2017; Fu et al., 2019) to recursively update each span representation: $\overrightarrow{\mathbf{h}}_i^l = \sum_{s_j \in V_s \setminus \{s_i\}} \sum_{k \in R} \mathbf{r}_{ij}[k] \big(\overrightarrow{\mathbf{W}}_k^{(l)} \mathbf{h}_j^l + \overrightarrow{\mathbf{b}}_k^{(l)}\big)$, $\overleftarrow{\mathbf{h}}_i^l = \sum_{s_j \in V_s \setminus \{s_i\}} \sum_{k \in R} \mathbf{r}_{ji}[k] \big(\overleftarrow{\mathbf{W}}_k^{(l)} \mathbf{h}_j^l + \overleftarrow{\mathbf{b}}_k^{(l)}\big)$, $\mathbf{h}_i^{l+1} = \mathbf{h}_i^l + \mathrm{FFNN}_a^{(l)}\big(\mathrm{ReLU}\big(\big[\overrightarrow{\mathbf{h}}_i^l, \overleftarrow{\mathbf{h}}_i^l\big]\big)\big) \quad (4)$ where $\mathbf{h}_i^l$ is the hidden feature vector of span $s_i$ at layer $l$.",
"We initialize $\mathbf{h}_i^0$ to be $\mathbf{s}_i$ (Eq. 1).",
"$\mathrm{FFNN}_a^{(l)}$ is a feedforward network whose output dimension is the same as the dimension of $\mathbf{h}_i^l$.",
"After multiple iterations of message passing, each span representation will contain the global relational information of $G_s$.",
"Let $\mathbf{h}_i$ denote the feature vector at the final layer of the GCN.",
"Note that the dimension of $\mathbf{h}_i$ is the same as the dimension of $\mathbf{s}_i$ (i.e., $\mathbf{h}_i \in \mathbb{R}^d$).",
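"A dense sketch of one layer of this bidirectional, relation-weighted update (Eq. 4); the real model may use sparse message passing, and the module names are assumptions:",
```python
import torch
import torch.nn as nn

class BiGCNLayer(nn.Module):
    """One layer of the bidirectional relation-weighted update in Eq. 4
    (dense sketch over all span pairs)."""
    def __init__(self, d, num_rel):
        super().__init__()
        self.w_fwd = nn.ModuleList(nn.Linear(d, d) for _ in range(num_rel))
        self.w_bwd = nn.ModuleList(nn.Linear(d, d) for _ in range(num_rel))
        self.ffnn_a = nn.Linear(2 * d, d)

    def forward(self, h, r):
        # h: (N, d) span features; r: (N, N, num_rel) predicted relation probs
        n = h.size(0)
        mask = 1.0 - torch.eye(n)                     # exclude self (s_j != s_i)
        fwd = sum((r[..., k] * mask) @ self.w_fwd[k](h) for k in range(r.size(-1)))
        bwd = sum((r[..., k].T * mask) @ self.w_bwd[k](h) for k in range(r.size(-1)))
        return h + self.ffnn_a(torch.relu(torch.cat([fwd, bwd], dim=-1)))
```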
"In this work, we utilize external knowledge from the Unified Medical Language System (UMLS) (Bodenreider, 2004).",
"UMLS consists of three main components: Metathesaurus, Semantic Network, and Specialist Lexicon and Lexical Tools.",
"The Metathesaurus provides information about millions of fine-grained biomedical concepts and relations between them.",
"To be consistent with the existing literature on knowledge graphs, we will refer to UMLS concepts as entities.",
"Each entity is annotated with one or more higher-level semantic types, such as Anatomical Structure , Cell , or Virus .",
"In addition to relations between entities, there are also semantic relations between semantic types.",
"For example, there is an affects relation from Acquired Abnormality to Physiologic Function .",
"This information is provided by the Semantic Network.",
"We extract candidate entities from the input text using MetaMap, a mapping tool for UMLS (Aronson and Lang, 2010).",
"We then construct a background knowledge graph (KG) from the extracted information.",
"More specifically, we first create a node for every extracted biomedical entity.",
"The semantic types of each entity node are also modeled as type nodes that are linked with associated entity nodes.",
"Finally, we create an edge for every relevant relation found in the Metathesaurus and the Semantic Network.",
"An example KG is in the grey shaded region of Figure 2.",
"Circles represent entity nodes, and rectangles represent nodes that correspond to semantic types.",
"Note that we simply run MetaMap with the default options and do not tune it.",
"In our experiments, we found that MetaMap typically returns many candidate entities unrelated to the input text.",
"However, as will be discussed in Section 3.4, KECI can learn to ignore the irrelevant entities.",
"Let $G_k = \{V_k, E_k\}$ denote the constructed background KG, where $V_k$ and $E_k$ are the node and edge sets, respectively.",
"We use a set of UMLS embeddings pretrained by Maldonado et al. (2019) to initialize the representation of each node in V k .",
"We also use SciBERT to encode the UMLS definition sentence of each node into a vector and concatenate it to the initial representation.",
"After that, since $G_k$ is a heterogeneous relational graph, we use a relational GCN (Schlichtkrull et al., 2018) to update the representation of each node $v_i$: $\mathbf{v}_i^{l+1} = \mathrm{ReLU}\Big(\mathbf{U}^{(l)} \mathbf{v}_i^l + \sum_{k \in R} \sum_{v_j \in N_i^k} \frac{1}{c_{i,k}} \mathbf{U}_k^{(l)} \mathbf{v}_j^l\Big) \quad (5)$ where $\mathbf{v}_i^l$ is the feature vector of $v_i$ at layer $l$.",
"$N_i^k$ is the set of neighbors of $v_i$ under relation $k \in R$.",
"$c_{i,k}$ is a normalization constant and set to be $|N_i^k|$.",
"After multiple iterations of message passing are performed, the global relational information of the KG will be integrated into each node's representation.",
"Let $\mathbf{v}_i$ denote the feature vector at the final layer of the relational GCN.",
"We further project each vector $\mathbf{v}_i$ to another vector $\mathbf{n}_i$ using a simple feedforward network, so that $\mathbf{n}_i$ has the same dimension as the span representations (i.e., $\mathbf{n}_i \in \mathbb{R}^d$).",
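"A sketch of one relational GCN layer (Eq. 5) over per-relation adjacency lists; names and data layout are illustrative:",
```python
import torch
import torch.nn as nn

class RelationalGCNLayer(nn.Module):
    """One R-GCN layer (Eq. 5), sketched with adjacency lists per relation."""
    def __init__(self, d, num_rel):
        super().__init__()
        self.u_self = nn.Linear(d, d, bias=False)
        self.u_rel = nn.ModuleList(nn.Linear(d, d, bias=False) for _ in range(num_rel))

    def forward(self, v, neighbors):
        # v: (num_nodes, d); neighbors[k][i]: iterable of neighbor ids of
        # node i under relation k
        out = self.u_self(v)
        for k, u_k in enumerate(self.u_rel):
            msg = u_k(v)
            for i, nbrs in enumerate(neighbors[k]):
                if nbrs:  # mean over neighbors normalizes by c_{i,k} = |N_i^k|
                    out[i] = out[i] + msg[list(nbrs)].mean(dim=0)
        return torch.relu(out)
```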
"At this point, we have two graphs: the initial span graph $G_s = \{V_s, E_s\}$ (Sec. 2.3) and the background knowledge graph $G_k = \{V_k, E_k\}$ (Sec. 2.4).",
"We have also obtained a structure-aware representation for each node in each graph (i.e., $\mathbf{h}_i$ for each span $s_i \in V_s$ and $\mathbf{n}_j$ for each entity $v_j \in V_k$). (Figure 3: An illustration of the attention mechanism.)",
"The next step is to soft-align the mentions and the candidate entities using an attention mechanism (Figure 3).",
"Let $C(s_i)$ denote the set of candidate entities for a span $s_i \in V_s$.",
"For example, in Figure 2, the mention FKBP12 has two candidate entities, while FK506 has only one candidate.",
"For each candidate entity $v_j \in C(s_i)$, we calculate a scalar score $\alpha_{ij}$ indicating how relevant $v_j$ is to $s_i$: $\alpha_{ij} = \mathrm{FFNN}_c\big(\big[\mathbf{h}_i, \mathbf{n}_j\big]\big) \quad (6)$ where $\mathrm{FFNN}_c$ is a feedforward network mapping from $\mathbb{R}^{2d} \to \mathbb{R}$.",
"Then we compute an additional sentinel vector $\mathbf{c}_i$ (Yang and Mitchell, 2017; He et al., 2020) and also compute a score $\alpha_i$ for it: $\mathbf{c}_i = \mathrm{FFNN}_s(\mathbf{h}_i)$, $\alpha_i = \mathrm{FFNN}_c\big(\big[\mathbf{h}_i, \mathbf{c}_i\big]\big) \quad (7)$ where $\mathrm{FFNN}_s$ is another feedforward network mapping from $\mathbb{R}^d \to \mathbb{R}^d$.",
"Intuitively, $\mathbf{c}_i$ records the information of the local context of $s_i$, and $\alpha_i$ measures the importance of such information.",
"After that, we compute a final knowledge-aware representation $\mathbf{f}_i$ for each span $s_i$ as follows: $Z = \exp(\alpha_i) + \sum_{v_z \in C(s_i)} \exp(\alpha_{iz})$, $\beta_i = \exp(\alpha_i)/Z$ and $\beta_{ij} = \exp(\alpha_{ij})/Z$, $\mathbf{f}_i = \beta_i \mathbf{c}_i + \sum_{v_j \in C(s_i)} \beta_{ij} \mathbf{n}_j \quad (8)$",
"The attention mechanism is illustrated in Figure 3.",
"With the extracted knowledge-aware span representations, we predict the final span graph in a way similar to Eq. 2 and Eq. 3: $\hat{\mathbf{e}}_i = \mathrm{Softmax}\big(\mathrm{FFNN}_{\hat{e}}(\mathbf{f}_i)\big)$, $\hat{\mathbf{r}}_{ij} = \mathrm{Softmax}\big(\mathrm{FFNN}_{\hat{r}}\big(\big[\mathbf{f}_i, \mathbf{f}_j, \mathbf{f}_i \circ \mathbf{f}_j\big]\big)\big) \quad (9)$ where $\mathrm{FFNN}_{\hat{e}}$ is a mapping from $\mathbb{R}^d \to \mathbb{R}^{|E|}$, and $\mathrm{FFNN}_{\hat{r}}$ is a mapping from $\mathbb{R}^{3d} \to \mathbb{R}^{|R|}$.",
"$\hat{\mathbf{e}}_i$ is the final predicted probability distribution over possible entity types for span $s_i$.",
"$\hat{\mathbf{r}}_{ij}$ is the final predicted probability distribution over possible relation types for span pair $\langle s_i, s_j \rangle$.",
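"The sentinel attention fusion (Eq. 6-8) can be sketched as follows (layer shapes and names are illustrative assumptions, with single linear layers standing in for the feedforward networks):",
```python
import torch
import torch.nn as nn

d = 256                        # placeholder dimension
ffnn_c = nn.Linear(2 * d, 1)   # relevance scorer (Eq. 6 and Eq. 7)
ffnn_s = nn.Linear(d, d)       # sentinel projection (Eq. 7)

def knowledge_fusion(h_i, cand_n):
    """Fuse span feature h_i (d,) with candidate entity features
    cand_n (num_cand, d) via sentinel attention (Eq. 6-8)."""
    c_i = ffnn_s(h_i)                                           # sentinel vector
    score_sent = ffnn_c(torch.cat([h_i, c_i]))                  # (1,)
    score_cand = ffnn_c(torch.cat(
        [h_i.expand(cand_n.size(0), -1), cand_n], dim=-1))      # (num_cand, 1)
    weights = torch.softmax(
        torch.cat([score_sent, score_cand.squeeze(-1)]), dim=0)  # beta_i, beta_ij
    return weights[0] * c_i + (weights[1:, None] * cand_n).sum(dim=0)
```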
"The overall training objective combines four loss terms, where $\mathcal{L}_{e*}$ denotes the cross-entropy loss of span classification.",
"$\mathcal{L}_{r*}$ denotes the binary cross-entropy loss of relation classification.",
"$\mathcal{L}_{e1}$ and $\mathcal{L}_{r1}$ are loss terms for the initial span graph prediction (Eq. 2 and Eq. 3 of Section 2.3).",
"$\mathcal{L}_{e2}$ and $\mathcal{L}_{r2}$ are loss terms for the final span graph prediction (Eq. 9 of Section 2.5).",
"We apply a larger weight to the loss terms $\mathcal{L}_{e2}$ and $\mathcal{L}_{r2}$.",
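"The exact combination is not given in the text above; one plausible form consistent with the description (an assumption, with a weight $\lambda > 1$ on the final-graph terms) is:",
```latex
% Assumed form of the overall loss; the exact weighting is not recoverable
% from the text above. \lambda > 1 up-weights the final span graph terms.
\mathcal{L} \;=\; \mathcal{L}_{e1} + \mathcal{L}_{r1} + \lambda \left( \mathcal{L}_{e2} + \mathcal{L}_{r2} \right)
```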
"We train the framework using only ground-truth labels of the entity and relation extraction tasks.",
"We do not make use of any entity linking supervision in this work.",
"Datasets and evaluation metrics We evaluate KECI on two benchmark datasets: BioRelEx and ADE.",
"The BioRelEx dataset (Khachatrian et al., 2019) consists of 2,010 sentences from biomedical literature that capture binding interactions between proteins and/or biomolecules.",
"BioRelEx has annotations for 33 types of entities and 3 types of relations for binding interactions.",
"The training, development, and test splits contain 1,405, 201, and 404 sentences, respectively.",
"The training and development sets are publicly available.",
"The test set is unreleased and can only be evaluated against using CodaLab (https://competitions.codalab.org/competitions/20468).",
"For BioRelEx, we report Micro-F1 scores.",
"The ADE dataset (Gurulingappa et al., 2012) consists of 4,272 sentences extracted from medical reports that describe drug-related adverse effects.",
"Two entity types ( Adverse-Effect and Drug ) and a single relation type ( Adverse-Effect ) are pre-defined.",
"Similar to previous work (Eberts and Ulges, 2020; Ji et al., 2020), we conduct 10-fold cross-validation and report averaged Macro-F1 scores.",
"Table 1: Overall results (%) on the development set of BioRelEx.",
"Model | Entity (Micro-F1) | Relation (Micro-F1)",
"SciIE (2018) | 77.90 | 49.60",
"DYGIEPP + ELMo (2020) | 81.10 | 55.60",
"DYGIEPP + BioELMo (2020) | 82.80 | 54.80",
"SentContextOnly | 83.98 | 63.90",
"FlatAttention | 84.32 | 64.23",
"KnowBertAttention | 85.69 | 65.13",
"Full Model (KECI) | 87.42 | 66.09",
"All the reported results take overlapping entities into consideration.",
"Implementation details We implement KECI using PyTorch (Paszke et al., 2019) and Hugging-face's Transformers (Wolf et al., 2020).",
"KECI uses SciBERT as the Transformer encoder (Beltagy et al., 2019).",
"All details about hyperparameters and reproducibility information are in the appendix.",
"In addition to comparing our method with state-of-the-art methods on the above two datasets, we implement the following baselines for further comparison and analysis:",
"1. SentContextOnly : This baseline does not use any external knowledge .",
"It uses only the local sentence context for prediction.",
"It extracts the final output directly from the predictions obtained using Eq. 2 and Eq. 3.",
"2. FlatAttention : This baseline does not rely on collective inference .",
"It does not integrate any global relational information into mention and entity representations.",
"Each $\mathbf{h}_i$ mentioned in Sec. 2.3 is set to be $\mathbf{s}_i$ (Eq. 1), and each $\mathbf{v}_i$ mentioned in Sec. 2.4 is set to be $\mathbf{v}_i^0$.",
"Then, the prediction of the final span graph is the same as described in Sec. 2.5.",
"3. KnowBertAttention: This baseline uses the Knowledge Attention and Recontextualization (KAR) mechanism of KnowBert (Peters et al., 2019), a state-of-the-art knowledge-enhanced language model.",
"Table 3: Overall results (%) on the ADE dataset.",
"Model | Entity (Macro-F1) | Relation (Macro-F1)",
"Relation-Metric (2019) | 87.11 | 77.29",
"SpERT (2020) | 89.28 | 78.84",
"SPAN Multi-Head (2020) | 90.59 | 80.73",
"SentContextOnly | 88.13 | 77.23",
"FlatAttention | 89.16 | 78.81",
"KnowBertAttention | 90.08 | 79.95",
"Full Model (KECI) | 90.67 | 81.74",
"The baseline first uses SciBERT to construct initial token-level representations.",
"It then uses the KAR mechanism to inject external knowledge from UMLS into the token-level vectors.",
"Finally, it embeds text spans into feature vectors (Eq. 1) and uses the span representations to extract entities and relations in one pass (similar to Eq. 9).",
"For fair comparison, all the baselines use SciBERT as the Transformer encoder.",
"A major difference between KECI and KnowBertAttention (Peters et al., 2019) is that KECI explicitly builds and extracts information from a multi-relational graph structure of the candidate entity mentions before the knowledge fusion process.",
"In contrast, KnowBertAttention only uses SciBERT to extract features from the candidate entity mentions.",
"Therefore, KnowBertAttention only takes advantage of the entity-entity co-occurrence information.",
"On the other hand, KECI integrates more fine-grained global relational information (e.g., the binding interactions shown in Figure 2) into the mention representations.",
"This difference makes KECI achieve better overall performance, as to be discussed next.",
"Table 1 and Table 2 show the overall results on the development and test sets of BioRelEx, respectively.",
"Compared to SentContextOnly, KECI achieves much higher performance.",
"This demonstrates the importance of incorporating external knowledge for biomedical information extraction.",
"KECI also outperforms the baseline FlatAttention by a large margin, which shows the benefit of collective inference.",
"In addition, we see that our model performs better than the baseline KnowBertAttention.",
"Finally, at the time of writing, KECI achieves the first position on the BioRelEx leaderboard (https://competitions.codalab.org/competitions/20468).",
"Table 3 shows the overall results on ADE.",
"KECI again outperforms all the baselines and state-of-the-art models such as SpERT (Eberts and Ulges, 2020) and SPAN Multi-Head (Ji et al., 2020).",
"This further confirms the effectiveness of our framework.",
"Overall, the two datasets used in this work focus on two very different subareas of the biomedical domain, and KECI was able to push the state-of-the-art results of both datasets.",
"This indicates that our proposed approach is highly generalizable.",
"Table 4 shows the results of ablation studies we did on the development set of the BioRelEx benchmark.",
"We compare our full model against several partial variants.",
"The variant [w/o external knowledge] is the same as the baseline SentContextOnly, and the variant [w/o collective inference] is the same as the baseline FlatAttention (Section 3.1).",
"For the variant [w/o the bidirectional GCN], we simply set each $\mathbf{h}_i$ mentioned in Section 2.3 to be $\mathbf{s}_i$.",
"Similarly, for the variant [w/o the relational GCN], we set each $\mathbf{v}_i$ in Section 2.4 to be $\mathbf{v}_i^0$.",
"The last two variants are related to the initialization of each vector $\mathbf{v}_i^0$.",
"We see that all the partial variants perform worse than our full model.",
"This shows that each component of KECI plays an important role.",
"There is no gold-standard set of correspondences between the entity mentions in the datasets and the UMLS entities.",
"Therefore, we cannot directly evaluate the entity linking performance of KECI.",
"However, for each UMLS semantic type, we compute the average attention weight that an entity of that type gets assigned (Table 5).",
"Overall, we see that KECI typically pays the most attention to the relevant informative entities while ignoring the irrelevant ones.",
"Table 6 shows some examples from the ADE dataset that illustrate how incorporating external knowledge can improve the performance of joint biomedical entity and relation extraction.",
"In the first example, initially, there is no edge between the node bleeding symptoms and the node warfarin, probably because of the distance between their corresponding spans in the original input sentence.",
"However, KECI can link the term warfarin to a UMLS entity (CUI: C0043031), and the definition in UMLS says that warfarin is a type of anticoagulant that prevents the formation of blood clots.",
"As the initial feature vector of each entity contains the representation of its definition (Sec. 2.4), KECI can recover the missing edge.",
"In the second example, the initial span graph is predicted to have three entities of type Adverse-Effect , which correspond to three different overlapping text spans.",
"Among these three, only retroperitoneal fibrosis can be linked to a UMLS entity.",
"It is also evident from the input sentence that one of these spans is related to methysergide.",
"As a result, KECI successfully removes the other two unlinked span nodes to create the final span graph.",
"In the third example, probably because of the phrase due to, the node endometriosis is initially predicted to be of type Drug , and the node acute abdomen is predicted to be its Adverse-Effect .",
"However, KECI can link the term endometriosis to a UMLS entity of semantic type Disease or Syndrome .",
"As a result, the system can correct the term's type and also predict the right edges for the final span graph.",
"Finally, we also examined the errors made by KECI.",
"One major issue is that MetaMap sometimes fails to return any candidate entity from UMLS for an entity mention.",
"We leave the extension of this work to using multiple KBs as future work.",
"Traditional pipelined methods typically treat entity extraction and relation extraction as two separate tasks (Zelenko et al., 2002; Zhou et al., 2005; Chan and Roth, 2011).",
"Such approaches ignore the close interaction between named entities and their relation information and typically suffer from the error propagation problem.",
"To overcome these limitations, many studies have proposed joint models that perform entity extraction and relation extraction simultaneously (Roth and Yih, 2007; Li and Ji, 2014; Li et al., 2017; Zheng et al., 2017; Bekoulis et al., 2018a,b; Wadden et al., 2019; Fu et al., 2019; Luan et al., 2019; Zhao et al., 2020; Wang and Lu, 2020; Li et al., 2020b; Lin et al., 2020).",
"Particularly, span-based joint extraction methods have gained much popularity lately because of their ability to detect overlapping entities.",
"For example, Eberts and Ulges (2020) propose SpERT, a simple but effective span-based model that utilizes BERT as its core.",
"The recent work of Ji et al. (2020) also closely follows the overall architecture of SpERT but differs in span-specific and contextual semantic representations.",
"Despite their impressive performance, these methods are not designed specifically for the biomedical domain, and they do not utilize any external knowledge base.",
"To the best of our knowledge, our work is the first span-based framework that utilizes external knowledge for joint entity and relation extraction from biomedical text.",
"Biomedical event extraction is a closely related task that has also received a lot of attention from the research community (Poon and Vanderwende, 2010; Kim et al., 2013; V S S Patchigolla et al., 2017; Rao et al., 2017; Espinosa et al., 2019; Li et al., 2019; Wang et al., 2020; Huang et al., 2020; Ramponi et al., 2020; Yadav et al., 2020).",
"Several studies have proposed to incorporate external knowledge from domain-specific KBs into neural models for biomedical event extraction.",
"For example, Li et al. (2019) incorporate entity information from Gene Ontology into tree-LSTM models.",
"However, their approach does not explicitly use any external relational information.",
"Recently, Huang et al. (2020) introduce a framework that uses a novel Graph Edge conditioned Attention Network (GEANet) to utilize domain knowledge from UMLS.",
"In the framework, a global KG for the entire corpus is first constructed, and then a sentence-level KG is created for each individual sentence in the corpus.",
"Our method of KG construction is more flexible as we directly create a KG for each input text.",
"Furthermore, the work of Huang et al. (2020) only deals with event extraction and assumes that gold-standard entity mentions are provided at inference time.",
"Some previous work has focused on integrating external knowledge into neural architectures for other tasks, such as reading comprehension (Mihaylov and Frank, 2018), question answering (Pan et al., 2019), natural language inference (Sharma et al., 2019), and conversational modeling (Parthasarathi and Pineau, 2018).",
"Different from these studies, our work explicitly emphasizes the benefit of collective inference using global relational information.",
"Many previous studies have also used GNNs for various IE tasks (Nguyen and Grishman, 2018; Liu et al., 2018; Subburathinam et al., 2019; Zeng et al., 2021; Zhang and Ji, 2021).",
"Many of these methods use a dependency parser or a semantic parser to construct a graph capturing global interactions between tokens/spans.",
"However, parsers for specialized biomedical domains are expensive to build.",
"KECI does not rely on such expensive resources.",
"In this work, we propose a novel span-based framework named KECI that utilizes external domain knowledge for joint entity and relation extraction from biomedical text.",
"Experimental results show that KECI is highly effective, achieving new state-of-the-art results on two datasets: BioRelEx and ADE.",
"Theoretically, KECI can take an entire document as input; however, the tested datasets are only sentence-level datasets.",
"In the future, we plan to evaluate our framework on more document-level datasets.",
"We also plan to explore a broader range of properties and information that can be extracted from external KBs to facilitate biomedical IE tasks.",
"Finally, we also plan to apply KECI to other information extraction tasks (Li et al., 2020a; Lai et al., 2021; Wen et al., 2021).",
"We thank the three reviewers and the Area Chair for their insightful comments and suggestions.",
"This research is based upon work supported by the Molecule Maker Lab Institute: An AI Research Institutes program supported by NSF under Award No. 2019897, NSF No. 2034562, U.S. DARPA KAIROS Program No. FA8750-19-2-1004, and the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract No. FA8650-17-C-9116.",
"Any opinions, findings and conclusions or recommendations expressed in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government.",
"The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"method",
"result",
"result",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"method",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"We demonstrate that current state-of-the-art approaches to Automated Essay Scoring (AES) are not well-suited to capturing adversarially crafted input of grammatical but incoherent sequences of sentences.",
"We develop a neural model of local coherence that can effectively learn connectedness features between sentences, and propose a framework for integrating and jointly training the local coherence model with a state-of-the-art AES model.",
"We evaluate our approach against a number of baselines and experimentally demonstrate its effectiveness on both the AES task and the task of flagging adversarial input, further contributing to the development of an approach that strengthens the validity of neural essay scoring models.",
"Automated Essay Scoring (AES) focuses on automatically analyzing the quality of writing and assigning a score to the text.",
"Typically, AES models exploit a wide range of manually-tuned shallow and deep linguistic features (Shermis and Hammer, 2012; Burstein et al., 2003; Rudner et al., 2006; Williamson et al., 2012; Andersen et al., 2013).",
"Recent advances in deep learning have shown that neural approaches to AES achieve state-of-the-art results (Alikaniotis et al., 2016; Taghipour and Ng, 2016) with the additional advantage of utilizing features that are automatically learned from the data.",
"In order to facilitate interpretability of neural models, a number of visualization techniques have been proposed to identify textual (superficial) features that contribute to model performance (Alikaniotis et al., 2016).",
"To the best of our knowledge, however, no prior work has investigated the robustness of neural AES systems to adversarially crafted input that is designed to trick the model into assigning desired missclassifications; for instance, a high score to a low quality text.",
"Examining and addressing such validity issues is critical and imperative for AES deployment.",
"Previous work has primarily focused on assessing the robustness of standard machine learning approaches that rely on manual feature engineering; for example, Powers et al. (2002); Yannakoudakis et al. (2011) have shown that such AES systems, unless explicitly designed to handle adversarial input, can be susceptible to subversion by writers who understand something of the systems' workings and can exploit this to maximize their score.",
"In this paper, we make the following contributions:",
"i. We examine the robustness of state-of-the-art neural AES models to adversarially crafted input, and specifically focus on input related to local coherence; that is, grammatical but incoherent sequences of sentences.",
"In addition to the superiority in performance of neural approaches against standard machine learning models (Alikaniotis et al., 2016; Taghipour and Ng, 2016), such a setup allows us to investigate their potential superiority / capacity in handling adversarial input without being explicitly designed to do so.",
"ii. We demonstrate that state-of-the-art neural AES is not well-suited to capturing adversarial input of grammatical but incoherent sequences of sentences, and develop a neural model of local coherence that can effectively learn connectedness features between sentences.",
"iii. A local coherence model is typically evaluated based on its ability to rank coherently ordered sequences of sentences higher than their incoherent / permuted counterparts (e.g., Barzilay and Lapata (2008)).",
"We focus on a stricter evaluation setting in which the model is tested on its ability to rank coherent sequences of sentences higher than any incoherent / permuted set of sentences, and not just its own permuted counterparts.",
"This supports a more rigorous evaluation that facilitates development of more robust models.",
"iv. We propose a framework for integrating and jointly training the local coherence model with a state-of-the-art AES model.",
"We evaluate our approach against a number of baselines and experimentally demonstrate its effectiveness on both the AES task and the task of flagging adversarial input, further contributing to the development of an approach that strengthens AES validity.",
"At the outset, our goal is to develop a framework that strengthens the validity of state-of-the-art neural AES approaches with respect to adversarial input related to local aspects of coherence.",
"For our experiments, we use the Automated Student Assessment Prize (ASAP) dataset (https://www.kaggle.com/c/asap-aes/), which contains essays written by students ranging from Grade 7 to Grade 10 in response to a number of different prompts (see Section 4).",
"AES Evaluation against Adversarial Input One of the earliest attempts at evaluating AES models against adversarial input was by Powers et al. (2002) who asked writing experts that had been briefed on how the e-Rater scoring system works to write essays to trick e-Rater (Burstein et al., 1998).",
"The participants managed to fool the system into assigning higher-than-deserved grades, most notably by simply repeating a few well-written paragraphs several times.",
"Yannakoudakis et al. (2011) and Yannakoudakis and Briscoe (2012) created and used an adversarial dataset of well-written texts and their random sentence permutations, which they released in the public domain, together with the grades assigned by a human expert to each piece of text.",
"Unfortunately, however, the dataset is quite small, consisting of 12 texts in total.",
"Neural AES Models Alikaniotis et al. (2016) developed a deep bidirectional Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) network, augmented with score-specific word embeddings that capture both contextual and usage information for words.",
"Their approach outperformed traditional feature-engineered AES models on the ASAP dataset.",
"Taghipour and Ng (2016) investigated various recurrent and convolutional architectures on the same dataset and found that an LSTM layer followed by a Mean over Time operation achieves state-of-the-art results.",
"Dong and Zhang (2016) showed that a two-layer Convolutional Neural Network (CNN) outperformed other baselines (e.g., Bayesian Linear Ridge Regression) on both in-domain and domain-adaptation experiments on the ASAP dataset.",
"Neural Coherence Models A number of approaches have investigated neural models of coherence on news data.",
"Li and Hovy (2014) used a window approach where a sliding kernel of weights was applied over neighboring sentence representations to extract local coherence features.",
"The sentence representations were constructed with recursive and recurrent neural methods.",
"Their approach outperformed previous methods on the task of selecting maximally coherent sentence orderings from sets of candidate permutations (Barzilay and Lapata, 2008).",
"Lin et al. (2015) developed a hierarchical Recurrent Neural Network (RNN) for document modeling.",
"Among others, they looked at capturing coherence between sentences using a sentence-level language model, and evaluated their approach on the sentence ordering task.",
"Tien Nguyen and Joty (2017) built a CNN over entity grid representations, and trained the network in a pairwise ranking fashion.",
"Their model outperformed other graph-based and distributed sentence models.",
"We note that our goal is not to identify the best model of local coherence on randomly permuted grammatical sentences in the domain of AES, but rather to propose a framework that strengthens the validity of AES approaches with respect to adversarial input related to local aspects of coherence.",
"Our local coherence model is inspired by the model of Li and Hovy (2014) which uses a window approach to evaluate coherence.",
"Figure 1 presents a visual representation of the network architecture, which is described below in detail.",
"Sentence Representation This part of the model composes the sentence representations that can be utilized to learn connectedness features between sentences.",
"Each word in the text is initialized with a k -dimensional vector w from a pre-trained word embedding space.",
"Unlike Li and Hovy (2014), we use an LSTM layer to capture sentence compositionality by mapping the words in a sentence $s = \{w_1, w_2, \ldots, w_n\}$ at each time step $t$ ($w_t$, where $t \le n$) onto a fixed-size vector $\mathbf{h}_t^{wrd} \in \mathbb{R}^{d_{lstm}}$ (where $d_{lstm}$ is a hyperparameter).",
"The sentence representation $\mathbf{h}^{snt}$ is then the representation of the last word in the sentence: $\mathbf{h}^{snt} = \mathbf{h}_n^{wrd} \quad (1)$ (We note that Li and Jurafsky (2017) also present an extended version of the work by Li and Hovy (2014), evaluated on different domains.)",
"Clique Representation Each window of sentences in a text represents a clique $q = \{s_1, \ldots, s_m\}$, where $m$ is a hyperparameter indicating the window size.",
"A clique is assigned a score of 1 if it is coherent (i.e., the sentences are not shuffled) and 0 if it is incoherent (i.e., the sentences are shuffled).",
"The clique embedding is created by concatenating the representations of the sentences it contains according to Equation 1.",
"A convolutional operation using a filter $\mathbf{W}^{clq} \in \mathbb{R}^{m \cdot d_{lstm} \times d_{cnn}}$, where $d_{cnn}$ denotes the convolutional output size, is then applied to the clique embedding, followed by a non-linearity, in order to extract the clique representation $\mathbf{h}^{clq} \in \mathbb{R}^{d_{cnn}}$: $\mathbf{h}_j^{clq} = \tanh\big(\big[\mathbf{h}_j^{snt}; \ldots; \mathbf{h}_{j+m-1}^{snt}\big] * \mathbf{W}^{clq}\big) \quad (2)$ Here, $j \in \{1, \ldots, N-m+1\}$, $N$ is the number of sentences in the text, and $*$ is the linear convolutional operation.",
"Scoring The cliques' predicted scores are calculated via a linear operation followed by a sigmoid function to project the predictions to a $[0, 1]$ probability space: $\hat{y}_j^{clq} = \mathrm{sigmoid}\big(\mathbf{h}_j^{clq} \cdot \mathbf{V}\big) \quad (3)$ where $\mathbf{V} \in \mathbb{R}^{d_{cnn}}$ is a learned weight.",
"The network optimizes its parameters to minimize the negative log-likelihood of the cliques' gold scores $y^{clq}$, given the network's predicted scores: $\mathcal{L}_{local} = -\frac{1}{T} \sum_{j=1}^{T} \big[\, y_j^{clq} \log(\hat{y}_j^{clq}) + (1 - y_j^{clq}) \log(1 - \hat{y}_j^{clq}) \,\big] \quad (4)$ where $T = N - m + 1$ (the number of cliques in the text).",
"(Figure 2: AES LSTM T&N model of Taghipour and Ng (2016).)",
"The final prediction of the text's coherence score is calculated as the average of all of its clique scores: $\hat{y}^{coh} = \frac{1}{T} \sum_{j=1}^{T} \hat{y}_j^{clq} \quad (5)$",
"This is in contrast to Li and Hovy (2014), who multiply all the estimated clique scores to generate the overall document score.",
"This means that if only one clique is misclassified as incoherent and assigned a score of 0, the whole document is regarded as incoherent.",
"We aim to soften this assumption and use the average instead to allow for a more fine-grained modeling of degrees of coherence.",
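"Putting Eq. 2, 3 and 5 together, the clique-scoring pipeline can be sketched in PyTorch (a minimal sketch with illustrative module names; a Conv1d over the sentence sequence is equivalent to the linear convolution over concatenated clique embeddings):",
```python
import torch
import torch.nn as nn

class LocalCoherenceScorer(nn.Module):
    """Sketch of the clique scoring pipeline (Eq. 2, 3 and 5): a convolution
    over windows of m sentence vectors, a sigmoid clique score, and the
    average over cliques as the text's coherence score."""
    def __init__(self, d_lstm=100, d_cnn=100, m=3):
        super().__init__()
        self.conv = nn.Conv1d(d_lstm, d_cnn, kernel_size=m)  # filter W^clq
        self.v = nn.Linear(d_cnn, 1)                          # scoring weight V

    def forward(self, sents):
        # sents: (N, d_lstm) sentence vectors from the LSTM (Eq. 1); N >= m
        h = torch.tanh(self.conv(sents.T.unsqueeze(0)))       # (1, d_cnn, N-m+1)
        y_clq = torch.sigmoid(self.v(h.squeeze(0).T))         # clique scores, Eq. 3
        return y_clq.mean()                                   # averaged score, Eq. 5
```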
"We train the LC model on synthetic data automatically generated by creating random permutations of highly-scored ASAP essays (Section 4).",
"We utilize the LSTM AES model of Taghipour and Ng (2016) shown in Figure 2 (LSTM T&N), which is trained on, and yields state-of-the-art results on, the ASAP dataset.",
"The model is a one-layer LSTM that encodes the sequence of words in an essay, followed by a Mean over Time operation that averages the word representations generated from the LSTM layer.",
"(Our experiments showed that using the multiplicative approach gives poor results, as presented in Section 6.)",
"(We note that the authors achieve slightly higher results when averaging ensemble results of their LSTM model together with CNN models. We use their main LSTM model, which, for the purposes of our experiments, does not affect our conclusions.)",
"Combined Models: We propose a framework for integrating the LSTM T&N model with the Local Coherence (LC) one.",
"Our goal is to have a robust AES system that is able to correctly flag adversarial input while maintaining a high performance on essay scoring.",
"The baseline model simply concatenates the output representations of the pre-prediction layers of the trained LSTM T&N and LC networks, and feeds the resulting vector to a machine learning algorithm (e.g., Support Vector Machines, SVMs) to predict the final overall score.",
"In the LSTM T&N model, the output representation (hereafter referred to as the essay representation) is the vector produced from the Mean Over Time operation; in the LC model, we use the generated clique representations (Figure 1) aggregated with a max operation (hereafter referred to as the clique representation).",
"Although the LC model is trained on permuted ASAP essays (Section 4) and the LSTM T&N model on the original ASAP data, essay and clique representations are generated for both the ASAP and the synthetic essays containing reordered sentences.",
"Instead of training the LSTM T&N and LC models separately and then concatenating their output representations, we propose a framework where both models are trained jointly, and where the final network has then the capacity to predict AES scores and flag adversarial input (Figure 3).",
"Specifically, the LSTM T&N and LC networks predict an essay and coherence score respectively (as described earlier), but now they both share the word embedding layer.",
"The training set is the aggregate of both the ASAP and permuted data to allow the final network to learn from both simultaneously.",
"Concretely, during training, for the ASAP essays, we assume that both the gold essay and coherence scores are the same and equal to the gold ASAP scores.",
"This is not too strict an assumption, as overall scores of writing competence tend to correlate highly with overall coherence.",
"For the synthetic essays, we set the gold coherence scores to zero, and the gold essay scores to those of their original non-permuted counterparts in the ASAP dataset.",
"The intuition is as follows: firstly, setting the gold essay scores of synthetic essays to zero would bias the model into over-predicting zeros; secondly, our approach reinforces the LSTM T&N 's inability to detect adversarial input, and forces the overall network to rely on the LC branch to identify such input.",
"9 The two sub-networks are trained together and the error gradients are back-propagated to the word embeddings.",
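"A structural sketch of the joint framework (shared embedding, an essay branch with Mean over Time, and a coherence branch with clique convolution); all module sizes and names are illustrative assumptions, not the authors' code:",
```python
import torch
import torch.nn as nn

class JointAESCoherence(nn.Module):
    """Sketch of the joint framework (Sec. 3.3.2): a shared word embedding
    feeds two branches, one predicting an essay score and one a coherence
    score. Module sizes are illustrative placeholders."""
    def __init__(self, vocab_size, emb_dim=50, d_lstm=100, d_cnn=100, m=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)     # shared layer
        self.essay_lstm = nn.LSTM(emb_dim, d_lstm, batch_first=True)
        self.essay_head = nn.Linear(d_lstm, 1)
        self.sent_lstm = nn.LSTM(emb_dim, d_lstm, batch_first=True)
        self.clique_conv = nn.Conv1d(d_lstm, d_cnn, kernel_size=m)
        self.clique_head = nn.Linear(d_cnn, 1)

    def forward(self, words, sents):
        # Essay branch: LSTM over all words, Mean over Time, then a score.
        out, _ = self.essay_lstm(self.embed(words).unsqueeze(0))
        y_esy = torch.sigmoid(self.essay_head(out.mean(dim=1))).squeeze()
        # Coherence branch: per-sentence LSTM states, clique conv, avg score.
        h_snt = torch.stack([self.sent_lstm(self.embed(s).unsqueeze(0))[0][0, -1]
                             for s in sents])              # (N, d_lstm)
        h_clq = torch.tanh(self.clique_conv(h_snt.T.unsqueeze(0)))
        y_coh = torch.sigmoid(self.clique_head(h_clq.squeeze(0).T)).mean()
        return y_esy, y_coh
```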
"To detect whether an essay is adversarial, we further augment the system with an adversarial text detection component that simply captures adversarial input based on the difference between the predicted essay and coherence scores.",
"Specifically, we use our development set to learn a threshold for this difference, and flag an essay as adversarial if the difference is larger than the threshold.",
"We experimentally demonstrate that this approach enables the model to perform well on both original ASAP and synthetic essays.",
"During model evaluation, the texts flagged as adversarial by the model are assigned a score of zero, while the rest are assigned the predicted essay score ( y esy in Figure 3).",
"We use the ASAP dataset, which contains 12,976 essays written by students ranging from Grade 7 to Grade 10 in response to 8 different prompts.",
"(We note that, during training, the scores are mapped to a range between 0 and 1 (similarly to Taghipour and Ng (2016)), and then scaled back to their original range during evaluation.)",
"We follow the ASAP data split by Taghipour and Ng (2016), and apply 5 -fold cross validation in all experiments using the same train/dev/test splits.",
"For each prompt, the fold predictions are aggregated and evaluated together.",
"In order to calculate the overall system performance, the results are averaged across the 8 prompts.",
"To create adversarial input, we select high scoring essays per prompt (given a pre-defined score threshold, Table 1) that are assumed coherent, and create 10 permutations per essay by randomly shuffling its sentences. (We note that this threshold is different than the one mentioned in Section 3.3.2.)",
"In the joint learning setup, we augment the original ASAP dataset with a subset of the synthetic essays.",
"Specifically, we randomly select 4 permutations per essay to include in the training set, but include all 10 permutations in the test set. (This is primarily done to keep the data balanced: initial experiments showed that training with all 10 permutations per essay harms AES performance, but has negligible effect on adversarial input detection.)",
"Table 1 presents the details of the datasets.",
"We test performance on the ASAP dataset using Quadratic Weighted Kappa (QWK), which was the official evaluation metric in the ASAP competition, while we test performance on the synthetic dataset using pairwise ranking accuracy (PRA) between an original non-permuted essay and its permuted counterparts.",
"PRA is typically used as an evaluation metric on coherence assessment tasks in other domains (Barzilay and Lapata, 2008), and is based on the fraction of correct pairwise rankings in the test data (i.e., a coherent essay should be ranked higher than its permuted counterpart).",
"Herein, we extend this metric and furthermore evaluate the models by comparing each original essay to all adversarial / permuted essays in the test data, and not just its own permuted counterparts; we refer to this metric as total pairwise ranking accuracy (TPRA).",
"Coherence models We train and test the LC model described in Section 3.1 on the synthetic dataset and evaluate it using PRA and TPRA.",
"During pre-processing, words are lowercased and initialized with pre-trained word embeddings (Zou et al., 2013).",
"Words that occur only once in the training set are mapped to a special UNK embedding.",
"All network weights are initialized to values drawn randomly from a uniform distribution with scale = 0 .",
"05 , and biases are initialized to zeros.",
"We apply a learning rate of 0 .",
"001 and RMSProp (Tieleman and Hinton, 2012) for optimization.",
"A size of 100 is chosen for the hidden layers ( d lstm and d cnn ), and the convolutional window size ( m ) is set to 3 .",
"Dropout (Srivastava et al., 2014) is applied for regularization to the output of the convolutional operation with probability 0 .",
"3 .",
"The network is trained for 60 epochs and performance is monitored on the development sets we select the model that yields the highest PRA value.",
"12 We use as a baseline the LC model that is based on the multiplication of the clique scores (simi-larly to Li and Hovy (2014)), and compare the results (LC mul ) to our averaged approach.",
"As another baseline, we use the entity grid (EGrid) (Barzilay and Lapata, 2008) that models transitions between sentences based on sequences of entity mentions labeled with their grammatical role.",
"EGrid has been shown to give competitive results on similar coherence tasks in other domains.",
"Using the Brown Coherence Toolkit (Eisner and Charniak, 2011), 13 we construct the entity transition probabilities with length = 3 and salience = 2 .",
"The transition probabilities are then used as features that are fed as input to an SVM classifier with an RBF kernel and penalty parameter C = 1 .",
"5 to predict a coherence score.",
"Combined models After training the LC and LSTM T&N models, we concatenate their output 12 Our implementation is available at https: //github.com/Youmna-H/Coherence_AES 13 https://bitbucket.org/melsner/browncoherence 14 https://github.com/nusnlp/nea vectors to build the Baseline: Vector Concatenation (VecConcat) model as described in Section 3.3.1, and train a Kernel Ridge Regression model.",
"15 The Joint Learning network is trained on both the ASAP and synthetic dataset as described in Section 3.3.2.",
"Adversarial input is detected based on an estimated threshold on the difference between the predicted essay and coherence scores (Figure 3).",
"The threshold value is empirically calculated on the development sets, and set to be the average difference between the predicted essay and coherence scores in the synthetic data: threshold = P Mi =1 y esyi y cohi M where M is the number of synthetic essays in the development set.",
"We furthermore evaluate a baseline where the joint model is trained without sharing the word embedding layer between the two sub-models, and report the effect on performance (Joint Learning no layer sharing ).",
"Finally, we evaluate a baseline where for the joint model we set the gold essay scores of synthetic data to zero (Joint Learning zero score ), as opposed to our proposed approach of setting them to be the same as the score of their original non-permuted counterpart in the ASAP dataset.",
"The state-of-the-art LSTM T&N model, as shown in Table 2, gives the highest performance on the ASAP data, but is not robust to adversarial input and therefore unable to capture aspects of local coherence, with performance on synthetic data that is less than 0 .",
"5 .",
"On the other hand, both 15 We use scikit-learn with the following parameters: alpha= 0 .",
"1 , coef0= 1 , degree= 3 , gamma= 0 .",
"1 , kernel=rbf'.",
"our LC model and the EGrid significantly outperform LSTM T&N on synthetic data.",
"While EGrid is slightly better in terms of TPRA compared to LC ( 0 . 706 vs. 0 . 689 ), LC is substantially better on PRA ( 0 . 946 vs. 0 . 718 ).",
"This could be attributed to the fact that LC is optimised using PRA on the development set.",
"The LC mul variation has a performance similar to LC in terms of PRA, but is significantly worse in terms of TPRA, which further supports the use of our proposed LC model.",
"Our Joint Learning model manages to exploit the best of both the LSTM T&N and LC approaches: performance on synthetic data is significantly better compared to LSTM T&N (and in particular gives the highest TPRA value on synthetic data compared to all models), while manages to maintain the high performance of LSTM T&N on ASAP data (performance slighly drops from 0 . 739 to 0 . 724 though not significantly).",
"When the Joint Learning model is compared against the VecConcat baseline, we can again confirm its superiority on both datasets, giving significant differences on synthetic data.",
"We furthermore evaluate the performance of the the Joint Learning model when trained using different parameters (Table 3).",
"When assigning gold essay scores of zero to adversarial essays (Joint Learning zero score ), AES performance on the ASAP data drops to 0 .",
"449 QWK, and the results are statistically significant.",
"16 This is partly ex-16 Note that we do not report performance of this model on synthetic data.",
"In this case, the thresholding technique cannot be applied as both sub-models are trained with the same gold scores and thus have very similar predictions on synthetic data.",
"plained by the fact that the model, given the training data gold scores, is biased towards predicting zeros.",
"The result, however, further supports our hypothesis that forcing the Joint Learning model to rely on the coherence branch for adversarial input detection further improves performance.",
"Importantly, we need something more than just training a state-of-the-art AES model (in our case, LSTM T&N ) on both original and synthetic data.",
"We also compare Joint Learning to Joint Learning no layer sharing in which the the two sub-models are trained separately without sharing the first layer of word representations.",
"While the difference in performance on the ASAP test data is small, the differences are much larger on synthetic data, and are significant in terms of TPRA.",
"By examining the false positives of both systems (i.e., the coherent essays that are misclassified as adver-sarial), we find that when the embeddings are not shared, the system is biased towards flagging long essays as adversarial, while interestingly, this bias is not present when the embeddings are shared.",
"For instance, the average number of words in the false positive cases of Joint Learning no layer sharing on the ASAP data is 426 , and the average number of sentences is 26 ; on the other hand, with the Joint Learning model, these numbers are 340 and 19 respectively.",
"17 A possible explanation for this is that training the words with more contextual information (in our case, via embeddings sharing), is advantageous for longer documents with a large number of sentences.",
"Ideally, no essays in the ASAP data should be flagged as adversarial as they were not designed to trick the system.",
"We calculate the number of ASAP texts incorrectly detected as adversarial, and find that the average error in the Joint Learning model is quite small ( 0 . 382% ).",
"This increases with Joint Learning no layer sharing ( 1% ), although still remains relatively small.",
"17 Adversarial texts in the synthetic dataset have an average number of 306 words and an average number of 18 sentences.",
"We further investigate the essay and coherence scores predicted by our best model, Joint Learning, for the permuted and original ASAP essays in the synthetic dataset (for which we assume that the selected, highly scored ASAP essays are coherent, Section 4), and present results for 3 randomly selected prompts in Figure 4.",
"The graphs show a large difference between predicted essay and coherence scores on permuted / adversarial data",
"((a),",
"(b) and",
"(c)), where the system predicts high essay scores for permuted texts (as a result of our training strategy), but low coherence scores (as predicted by the LC model).",
"For highly scored ASAP essays",
"((d),",
"(e) and",
"(f)), the system predictions are less varied and positively contributes to the performance of our proposed approach.",
"We evaluated the robustness of state-of-the-art neural AES approaches on adversarial input of grammatical but incoherent sequences of sentences, and demonstrated that they are not well-suited to capturing such cases.",
"We created a synthetic dataset of such adversarial examples and trained a neural local coherence model that is able to discriminate between such cases and their coherent counterparts.",
"We furthermore proposed a framework for jointly training the coherence model with a state-of-the-art neural AES model, and introduced an effective strategy for assigning gold scores to adversarial input during training.",
"When compared against a number of baselines, our joint model achieves better performance on randomly permuted sentences, while maintains a high performance on the AES task.",
"Among others, our results demonstrate that it is not enough to simply (re-)train neural AES models with adversarially crafted input, nor is it sufficient to rely on simple approaches that concatenate output representations from different neural models.",
"Finally, our framework strengthens the validity of neural AES approaches with respect to adversarial input designed to trick the system.",
"We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research.",
"We are also grateful to Cambridge Assessment for their support of the ALTA Institute.",
"Special thanks to Christopher Bryant and Marek Rei for their valuable feedback."
] | [
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"objective",
"method",
"abstain",
"objective",
"objective",
"objective",
"objective",
"method",
"objective",
"objective",
"objective",
"objective",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"result",
"objective",
"method",
"other",
"other",
"other"
] |
[
"How can we measure whether a natural language generation system produces both high quality and diverse outputs?",
"Human evaluation captures quality but not diversity, as it does not catch models that simply plagiarize from the training set.",
"On the other hand, statistical evaluation (i.e., perplexity) captures diversity but not quality, as models that occasionally emit low quality samples would be in-sufficiently penalized.",
"In this paper, we propose a unified framework which evaluates both diversity and quality, based on the optimal error rate of predicting whether a sentence is humanor machine-generated.",
"We demonstrate that this error rate can be efficiently estimated by combining human and statistical evaluation, using an evaluation metric which we call HUSE.",
"On summarization and chitchat dialogue, we show that",
"(i) HUSE detects diversity defects which fool pure human evaluation and that",
"(ii) techniques such as annealing for improving quality actually decrease HUSE due to decreased diversity.",
"Generating text is a core part of many NLP tasks such as image captioning (Lin et al., 2014), open-domain dialogue (Sordoni et al., 2015), story generation (Roemmele, 2016), and summarization (Nallapati et al., 2016).",
"However, proper evaluation of natural language generation has proven difficult (Liu et al., 2016; Novikova et al., 2017; Chaganty et al., 2018).",
"A good evaluation metric should not only capture the quality of generation, but also the diversity of generation, which is especially crucial for creative, open-ended tasks like dialogue or story generation.",
"Human evaluation , which is often viewed as the gold standard evaluation, captures quality but fails to capture diversity.",
"As an example, for language Reference Model Probability (p model ) Agassi bows out of Australian open Agassi withdraws from Australian open Sharon has stroke for stroke Cleared coach facing another grilling from British swim bosses Model Generations Reference H u m a n J u d g m e n t Figure 1: HUSE is twice the classification error of distinguishing reference and generated text based on human judgment scores and model probabilities.",
"modeling, a model that directly plagiarizes sentences from the training set would pass the human quality bar but would have zero generalization ability and thus have inadequate diversity.",
"On the other hand, statistical evaluation i.e., perplexity on a reference test setcaptures diversity, as it ensures a model must assign reasonable probability to novel sentences, but perplexity provides an inadequate measure of quality (Theis et al., 2015).",
"For example, modifying a perfect model by removing its ability to generate even a single test sentence results in infinite perplexity even though the model is still near-perfect.",
"Automatic metrics such as BLEU (Papineni et al., 2002) and ROUGE (Lin and Rey, 2004) capture quality better than perplexity but still correlate poorly with human evaluation and fail to capture diversity (Novikova et al., 2017; Chaganty et al., 2018).",
"Existing approaches to combining statistical and human evaluation have been ad-hoc, leading to misleading performance measures.",
"A common approach is to measure diversity through the perplexity of a probabilistic model and quality through human evaluation on beam-searched outputs.",
"This gives the illusion that a single model is high-quality and diverse, while the reality is that it shows we can have either a diverse model (when sampling from the distribution used to compute perplexity) or a high-quality model (when beam-searching).",
"In this paper, we define the idealized evaluation metric as twice the error of the optimal discriminator for classifying sentences as coming from the reference distribution or the model (Section 2).",
"If a model generates gibberish (low quality), the optimal discriminator can classify these accurately as coming from the model.",
"If the reference distribution contains sentences the model cannot generate (low diversity), the optimal discriminator can classify these accurately as coming from the reference.",
"Unfortunately, the optimal discriminator is unavailable.",
"Human discriminators cannot capture diversity effectively, and learned discriminators e.g., from a Generative Adversarial Network (Goodfellow et al., 2014) or one trained on human judgments (Lowe et al., 2017)are too unreliable to use for rigorous evaluation.",
"Our key result (Section 3) is based on the observation that the optimal classifier depends only on two numbers: the probability of a sentence under the model and the probability under the reference distribution.",
"The former can be computed directly from the model, and we show that the latter can be well-approximated by human judgment scores.",
"The resulting two-dimensional space is illustrated in Figure 1. We apply a simple k -nearest neighbor classifier in this space and define Human Unified with Statistical Evaluation (HUSE) as twice the leave-one-out error of this classifier.",
"We apply HUSE to four natural language generation tasks (Section 5): language modeling, chitchat dialogue, story generation, and summarization.",
"First, we show that human evaluation alone is insufficient to discriminate model generations from the references, leading to inflated estimates of model performance.",
"In contrast, HUSE is able to reveal deficiencies of current models.",
"We also show that common techniques for improving sample quality such as annealing actually increase distinguishability between the model and reference due to losses in diversity.",
"Consider a natural language generation task where the model is given a context x (e.g., a dialogue history)",
"history) drawn from some prior p ( x ) and must output a distribution over possible sentences p model ( y | x ) .",
"We define an idealized evaluation metric based on whether p model is close to a reference distribution p ref , which is generally human-generated.",
"1 Specifically, consider a random variable y drawn from either the reference or the model based on an indicator z Bernoulli (cid:0) 12 (cid:1) : y | x, z (cid:40) p ref ( y | x ) if z = 1 p model ( y | x ) if z = 0 .",
"Obstacles.",
"Unfortunately, L is unattainable because it requires computing the optimal discriminator.",
"In the spirit of the Turing Test, we could consider using the error rate of a human discriminator f hum instead, often considered the gold standard for evaluation.",
"However, while humans might have knowledge of p ref , they do not have full knowledge of p model and thus would have difficul-ties determining which sentences a model cannot generate.",
"As a concrete example, suppose p ref placed a uniform distribution over some set S .",
"Without knowledge of p model the most sensible discriminator is to predict z = 1 (reference) when y S .",
"This discriminator achieves the same classification error of 0 .",
"5 for both the perfect model p model = p ref and one which can only return a single y S .",
"We could try to reveal p model to humans by showing multiple samples simultaneously, but this is expensive and, as we will later see, unnecessary.",
"Another option is to learn f over an expressive class of functions such as neural networks on data 1 While some tasks only care about quality and thus only require p model to place mass on some high quality y , we demand that p model places mass on all high quality y as given by p ref .",
"This diversity is important for open-ended tasks such as dialogue or story generation.",
"Also note that p ref need not be the human distribution, or match the training distribution.",
"It can be defined as the distribution given by experts.",
"2 Note that L is a linear function of the total variational divergence: (cid:107) p model p ref (cid:107) TV def = (cid:80) x,y p ( x ) | p model ( y | x ) p ref ( y | x ) | = 1 L .",
"See Appendix A.1 for details.",
"sampled from p model and p ref .",
"This is analogous to learning the discriminator in a Generative Adversarial Network (GAN) (Goodfellow et al., 2014) or learning an evaluation metric from human judgments (Lowe et al., 2017).",
"However, as ( x, y ) are high-dimensional objects, training a good classifier is extremely difficult (and perhaps not significantly easier than solving the original generation problem).",
"Indeed, learned evaluation metrics do not generalize very well (Lowe et al., 2017; Chaganty et al., 2018).",
"Unlike these approaches which seek to replace human evaluation, our focus will instead be on combining human and automatic statistical evaluation to estimate the optimal classifier error.",
"Our key result is that the optimal discriminator depends on ( x, y ) only through a two-dimensional sufficient statistic (Section 3.1), motivating an approximation which we call HUSE (Section 3.2).",
"For any feature map that maps ( x, y ) to ( x, y ) R d , define the evaluation score L ( ) to be twice the error rate of the optimal discriminator that depends on ( x, y ) only through : L ( ) def = 2 inf f P [ f ( ( x, y )) (cid:54) = z ] .",
"Note that the evaluation score L ( ) given by a feature map optimizes over all functions that depend on (3).",
"Thus, the more information contains, the lower L ( ) is.",
"This has two implications: First, any feature map yields an (opti-mistic) upper bound on L (2), meaning that L ( ) might be able detect when a model is poor but cannot certify that it is good.",
"Second, adding features to can only improve this bound.",
"Proposition 1. The two-dimensional feature map opt achieves the optimal discriminator score: L ( opt ) = L .",
"Proof We compute the true posterior over z given x, y .",
"Since p ( z = 1) = p ( z = 0) = 12 , p ( y | x, z = 1) = p ref ( y | x ) and p ( y | x, z = 0) = p model ( y | x ) , by Bayes' rule: p ( z = 1 | x, y ) = p ref ( y | x ) p ref ( y | x ) + p model ( y | x ) .",
"The optimal discriminator simply predicts z = 1 if p ref ( y | x ) > p model ( y | x ) and z = 0 otherwise.",
"In other words, the decision boundary is given by opt ( x, y ) 1 > opt ( x, y ) 2 .",
"More generally, we can obtain this equality with a wider class of .",
"It will hold exactly for any invertible transformation of opt (Appendix Corollary 1), and approximately for any which has high mutual information with opt (Appendix Theorem 1).",
"This means that we can substitute p ref with noisy, possibly un-normalized estimates and still obtain accurate estimates of L .",
"While we can directly compute p model ( y | x ) for many probabilistic models, p ref ( y | x ) is unattainable, so L ( opt ) is not computable.",
"However, the wisdom of the crowds (Surowiecki, 2004; Ungar et al., 2012) suggests that pooling together the judgments of many humans can often produce surprisingly reliable estimates of real-world probabilities such as p ref ( y | x ) , even if no individual human is particularly reliable.",
"With this motivation, we ask Amazon Mechanical Turk workers to rate a sentence from 15 based on how typical it is as a way to estimate p ref ( y | x ) .",
"(see Appendix A.3 for more details).",
"We define HJ ( x, y ) to be the average response over 20 crowdworkers.",
"Figure 2 shows that for a language modeling task on the Reddit corpus, 3 HJ ( x, y ) strongly correlates with the actual log-frequency of y in the corpus.",
"The high correlation suggests that human judgments HJ ( x, y ) are a good surrogate for log p ref .",
"In addition, we found that rather than using the model probability p model ( y | x ) directly as a feature, normalizing by sentence length len ( y ) yielded lower (tighter) scores.",
"We therefore define the HUSE features as follows: huse ( x, y ) def = (cid:20) log p model ( y | x ) len ( y ) , HJ ( x, y ) (cid:21) , (5) 3 We used the Reddit corpus due to crowdworker familiarity, corpus size, and short average sentence length, which results in a wide range of sentence frequencies.",
"We now show that the HUSE score satisfies two nice properties:",
"(i) HUSE does at least as well as human evaluation and",
"(ii) a low HUSE score is sufficient to show that a model is far from the reference distribution.",
"To show",
"(i), consider a feature map that only includes human evaluation: hj ( x, y ) def = [ HJ ( x, y )] .",
"Because huse also incorporates human evaluation, L ( huse ) is always tighter (lower) than the human discriminator error L ( hj ) : Proposition 1 (Relationship between HUSE, human evaluation, and optimal scores) .",
"Furthermore, the main difference between L ( huse ) and L is that the former uses HJ ( x, y ) and the latter uses p ref .",
"But as we argued using Figure 2, HJ ( x, y ) is strongly correlated with p ref , and good approximations to p ref provide approximation guarantees for L ( huse ) (Appendix Theorem 1).",
"In this section, we show how we can estimate the error rate L ( ) from finite data (Section 4.1).",
"We then show how the HUSE estimate ( L ( huse )) can be decomposed into a score that measures quality (HUSE-Q) and a score that measures diversity (HUSE-D), which allows us to study quality-diversity tradeoffs (Section 4.2).",
"For any feature map , we show how to produce an estimate of L ( ) .",
"Fix n contexts x 1 , . . . , x n .",
"First, we draw n examples y 1 , . . . , y n from the reference distribution p ref ( y | x ) , which are usually human-generated sentences from a test set.",
"We also draw n examples y (cid:48) 1 , . . . , y (cid:48) n from the model p model ( y | x ) we wish to evaluate.",
"Next, for each of the 2 n examples ( x, y ) , we compute the feature map ( x, y ) , which might involve evaluating the model probability p model ( y | x ) as well as collecting human judgments HJ ( x, y ) from crowdworkers.",
"Finally, we compute the leave-one-out error of a classifier that tries to predict whether a given example ( x, y ) comes from the reference distribution ( z = 1 ) or the model ( z = 0 ).",
"The classification problems for HUSE are two-dimensional, which allows us to accurately estimate error rates using a k -nearest neighbors classifier.",
"We opt to use nearest neighbors classifiers as they are simple, require no training, and can asymptotically capture arbitrary continuous decision boundaries.",
"Specifically, we set k = 16 and define neighbors using L 2 distances over the feature vectors ( x, y ) scaled componentwise to have unit variance.",
"The overall procedure for computing the estimate L ( ) is formally defined in Algorithm 1. Algorithm 1 Estimating error rates under Require: Feature map , number of neighbors k Contexts x 1 , . . . , x n Reference outputs y 1 , . . . , y n Model outputs y (cid:48) 1 , . . . , y (cid:48) n 1: Construct dataset: D = n (cid:91) i =1 { ( ( x i , y i ) , 1) , ( ( x i , y (cid:48) i ) , 0) } 2: L ( ) def = leave-one-out error of k -NN on D 4.2 Quality-diversity decomposition We now define the (empirical) HUSE score using the feature map huse : HUSE def = L ( huse ) .",
"Since humans can detect quality defects in a model, any increase in error from removing p model must come from a model's lack of diversity.",
"Therefore, we define the diversity component (HUSE-D) as follows: HUSE-D def = 1 + HUSE HUSE-Q , (9) which implies the decomposition (1 HUSE-D )+ (1 HUSE-Q ) = 1 HUSE.",
"As long as the discriminators are non-degenerate (obtaining better performance than chance and HUSE > HUSE-Q), all scores are contained in [0 , 1] .",
"Here, HUSE-D = 1 implies that the model suffers no diversity defects, while HUSE-D = 0 indicates that the examples could be discriminated perfectly due to a lack of diversity.",
"We use HUSE to evaluate three different types of single-sentence natural language generation tasks:",
"(i) unconditional and high entropy (language mod-eling);",
"(ii) conditional and high entropy (story generation, chit-chat dialogue); and",
"(iii) conditional and low entropy (summarization).",
"We show that HUSE provides a direct and interpretable measure of diversity on high-entropy tasks, while also serving as a useful model diagnostic on low-entropy ones.",
"The four tasks along with the datasets and models are as follows: Summarization : Giganews story to headline dataset and the pre-trained model from Gehrmann et al. (2018).",
"The dataset consists of 3.8 million news story-headline pairs.",
"Examples from this dataset are shown in Table 2. Story generation : Last sentence generation for ROC stories (Mostafazadeh et al., 2016) consisting of 96,198 examples of partially written four-sentence stories as input, and a single sentence which completes the story as the target.",
"We use a standard OpenNMT model with global attention (Klein et al., 2017).",
"Language modeling : One billion word benchmark pre-trained language model from Jozefowicz et al. (2016).",
"The task consists of generating a single sentence from the one billion word newswire text distribution.",
"Chit-chat dialogue : Two-turn chit-chat dialogue dataset consisting of 37.3 million comment-response pairs from Reddit (Ap-pendix A.4).",
"Comments are generally short (515 tokens) and cover a single topic (e.g. given wow how did i not notice that, the response is you were focusing on other things its understandable).",
"We train a convolutional model using fairseq (Gehring et al., 2017).",
"For all the tasks, we train neural models and evaluate their diversity-quality tradeoffs as we change the decoding scheme for generation.",
"Our primary evaluation concerns diversity trade-offs involving temperature annealing which is a generation technique applicable to any probabilistic model that generates words sequentially.",
"In temperature annealed models, we sample a word w proportional to p 1 /t ( w ) where p is the model probability of w given previous words and t is the temperature parameter.",
"We excluded beam search since it qualitatively behaves similarly to temperature annealing with low temperatures and HUSE 0 due to beam search being extremely Score Summarization Story generation Chit-chat dialogue LM t = 1 .",
"As a non-neural baseline, we also consider retrieval based models based on Apache solr on a few tasks.",
"For this approach, we retrieve the single most relevant response from the training set using the BM25 similarity metric on inputs.",
"Such models are known to perform well in tasks with complex outputs such as program generation (Hayati et al., 2018; Hashimoto et al., 2018) and style transfer (Li et al., 2018).",
"For cost reasons, we did not measure certain combinations of task and generation mechanisms.",
"We did not measure retrieval for chit-chat dialogue, as we observed its outputs were lower quality than a low-temperature neural model.",
"We also did not anneal language models, as the generation quality from the language model was already high, and our goal was to show that they achieved high HUSE.",
"Our set of measurements, while not comprehensive, generally covers the available quality-diversity tradeoffs for conditional tasks.",
"Finally, we collect human judgments HJ ( x, y ) as per Section 4.1 where we query 20 Amazon Mechanical Turk crowdworkers for typicality ratings on 100 reference and 100 model sentences.",
"Since our models generate UNK (unknown and out-of-vocabulary) tokens, we instructed crowdworkers to treat UNK tokens as rare, but appropriate words for the context.",
"The HUSE scores across the four tasks vary widely.",
"Table 1 shows that single-sentence language models are nearly indistinguishable, with HUSE = 0 .",
"86 and implied discriminator error of 43% .",
"In contrast, both summarization and dialogue are highly distinguishable (HUSE 0 . 5 ) with relatively low quality when sampled from t = 1 .",
"0 .",
"Human evaluation alone (HUSE-Q) would suggest that using temperature annealing ( t = 0 . 7) to emphasize high-probability outputs substantially improves the model (HUSE-Q goes from 0 . 58 to 0 . 92 for summarization and 0 . 56 to 0 . 92 for dia-logue).",
"However, we find that this increase in sample quality comes at the cost of diversity (HUSE-D goes from 0 . 95 to 0 . 34 for summarization and 1 . 0 to 0 . 57 for dialogue).",
"Examining the achievable HUSE and diversity tradeoffs in Figure 3 shows that mechanisms such as annealing which improve sample quality actually degrade HUSE due to severe losses in diversity.",
"We find that all generation schemes and models are inadequate for story generation on ROC stories.",
"The original model ( t = 1 . 0 ) is very easily distinguishable by a human (HUSE-Q = 0 . 15 ), corresponding to a discriminator error of 7% .",
"The retrieval models can improve this to HUSE-Q = 0 .",
"47 , but this comes at the expense of diversity.",
"Finally, we observe that directly sampling from the model ( t = 1 . 0) is always diverse.",
"This suggests that human evaluation is an appropriate evaluation for generation systems that are directly sampled (rather than beam-searched).",
"Since HUSE is estimated from a two-dimensional classification problem, we can directly visualize the classification problem to understand defects in both model quality and diversity.",
"Figure 4 shows both reference points huse ( x i , y i ) (blue squares) and model points huse ( x i , y (cid:48) i ) (red circles) for the summarization task.",
"The shaded areas indicate the decision boundary of the 16 -nearest neighbor classifier.",
"At temperature t = 1 .",
"0 , we find that the classification boundary is mostly horizontal, implying that human judgment alone can distinguish model outputs from references.",
"There is a cluster of sentences with high HJ and high p model which are es-T=1.0 T=0.9 T=0.7 Figure 4: The two-dimensional classification problem in Algorithm 1 on the summarization task with different softmax temperatures (three panels).",
"sentially indistinguishable.",
"Examining the samples in this top-right region reveals that these are news stories with short headlines such as Nadal pulls out of Sydney International which can be reliably generated even at t = 1 .",
"0 .",
"However, the model frequently generates low quality samples that can easily be distinguished such as two new vaccines in the poor countries were effective against go-it-alone study says (Table 2).",
"At lower temperatures of t = 0 .",
"9 and t = 0 .",
"7 , the boundary shifts towards becoming diagonal.",
"Although the distribution is no longer directly separable on human judgment, the two distributions are clearly separable with the inclusion of p model .",
"Using Figure 4, we can identify individual examples which were correctly and incorrectly classified based on p model and HJ.",
"Table 2 shows examples of both quality failures and diversity failures identified by HUSE.",
"For example, the di-versity failure table shows that the summarization model ( t = 0 . 7 ) has an extremely low probability of generating some reference sentences (NFL's bills shake up front office) and is thus under-diverse.",
"Closer examination of the model shows that the probability of generating front office is low, since it is an unusual way to refer to the president and general manager.",
"Improving these models on the diversity failures will require that the model understand more subtle paraphrases.",
"We can also identify model successes, where the model outputs are indistinguishable from the reference in terms of quality (Agassi bows out of Australian Open after injury), and the model assigns high probability to the reference (Agassi withdraws from Australian Open).",
"Since HUSE depends on human crowdworker annotations, one might ask if it is possible to reduce either the number of annotated examples, or number of distinct crowdworkers for each example.",
"We show that for low-quality models, substantially fewer annotations are needed.",
"Figure 5 shows the result of subsampling our original data of 200 sentences and 20 crowdworkers and estimating HUSE.",
"First, we find that using 50 test set examples (Figure 5, left) is often sufficient to give accurate estimates of HUSE.",
"Next, we find that the necessary number of crowdworkers per example depends heavily on the task.",
"Easily distinguishable tasks (story generation), require only 10 crowdworkers, while less distinguishable tasks (summarization) require more than 20 crowdworkers to obtain accurate estimates.",
"The current state of NLG evaluation.",
"Existing approaches to NLG evaluation use a hodgepodge mix of quality and diversity measures.",
"Out of the 26 NLG papers at ACL 2018, six perform only hu-Quality failure log p model HJ Context: Two new vaccines have been shown effective against rotavirus, which is responsible for a half-million infant deaths in poor countries each year, research studies published Wednesday said.",
"man evaluation, fourteen measure human evaluation and a diversity metric such as perplexity or n-gram diversity, and six do not evaluate using human judgments.",
"While perplexity and n -gram counts can in principle evaluate diversity, their practical implementations suffer from serious drawbacks.",
"When human evaluation and perplexity are both evaluated, they are almost always done on separate modelshuman evaluations are done on beam-searched output, while perplexity is computed on the softmax outputs.",
"This makes it appear as if the models can simultaneously generate high quality outputs while also being diverse, when in fact they can only be one at a time based on whether they sample or run beam search.",
"On the other hand, n -gram diversity was proposed by Li et al. (2016) to identify models with the generic utterance problem where models repeat phrases such as I don't know'.",
"Unfortunately, n -gram diversity is computed across contexts by counting the number of unique n -grams generated, and so does not measure a model's ability to generate multiple valid utterances at any single context.",
"In particular, a model which only outputs a single memorized utterance per context (e.g., via memorization or retrieval) can still have high n -gram diversity as long as the memorized sentences differ across contexts.",
"Finally, all existing diversity measures are computed separately from human evaluation.",
"This results in two incomparable evaluation metrics, which prevent us from reasoning about tradeoffs between diversity and quality.",
"In contrast, HUSE allows us to make precise statements about the tradeoffs between model quality and diversity because it is a single metric which decomposes into diversity and quality terms.",
"Related evaluations of diversity.",
"The importance of diverse responses has previously been acknowledged for summarization (Nenkova et al., 2007) and information retrieval (Clarke et al., 2008).",
"Our work differs in considering a single evaluation measure that captures quality and diversity applicable to any generation task.",
"Automated metrics based on n -gram overlap such as BLEU, METEOR, ROUGE (Papineni et al., 2002; Lavie and Denkowski, 2009; Lin and Rey, 2004) work well for machine translation but do not generalize well to domains with a diverse spectrum of correct responses.",
"While variants (Sun and Zhou, 2012; Galley et al., 2015; Shima and Mitamura, 2011) have adapted such metrics to high entropy generative environments, they are still significantly inferior to the human judgments they attempt to mimic.",
"Caccia et al. (2018) recently examined the diversity and quality tradeoffs for different language model architectures on synthetic datasets.",
"However, as their approach relies on measuring log-likelihoods under both the model and reference distributions, it cannot be applied to real data where p ref is unavailable.",
"Our main conceptual contribution overcomes this by showing that HJ is an acceptable proxy for p ref .",
"Sajjadi et al. (2018) also examines diversity and quality (which they call precision and recall) in the context of generative image models.",
"However, they rely on assuming that p ref and p model can be estimated accurately using the Frechet Inception Distance (FID) (Heusel et al., 2017).",
"HUSE avoids such assumptions and instead directly leverages human judgments, resulting in a simple and reliable metric more suitable for use as a gold-standard.",
"Estimating optimal classification error.",
"Evaluating a model by estimating its optimal classification error has been considered by several earlier works (Olsson et al., 2018; Kannan and Vinyals, 2016; Li et al., 2017; Bruni and Fernandez, 2017; Bowman et al., 2016).",
"However, these methods have focused on classifying sentences directly, which is quite challenging to do reliably.",
"Existing adversarial evaluation methods do not yet reliably outperform human classification (Kannan and Vinyals, 2016; Bruni and Fernandez, 2017).",
"We propose the use of both human evaluation and model probabilities as part of the adversarial evaluation framework, and demonstrate that the resulting classifier reliably outperforms humans and captures both the sample quality and diversity of a model.",
"Distributional divergence estimation.",
"Our proposed evaluation metric is closely related to the total variation distance which has been studied extensively in the distribution testing literature.",
"It is known that total variation distance estimates have pessimistic minimax estimation rates in high dimensions (Balakrishnan and Wasserman, 2017).",
"Our work overcomes this by utilizing p model and an estimate of p ref .",
"Other approaches to distributional testing include the maximum mean discrepancy (MMD) and Wasserstein distances, but these approaches require knowledge of a ground truth metric or kernel space (Tolstikhin et al., 2016; Singh et al., 2018).",
"Although such divergences are easier to estimate than the total variation distance from samples, the implied convergence rates are still too slow to be practically useful.",
"In this paper, we demonstrate that the current gold standard of human evaluation does not penalize under-diverse models.",
"To remedy this, we propose HUSE, a general purpose evaluation strategy which can be applied to any model for which we can calculate a model's sampling probabilities.",
"HUSE is an upper bound on the optimal classification error of distinguishing reference and model-generated text, and never does worse than human classification.",
"HUSE leverages both model probabilities and human judgments, ensuring that models which do well on the metric are both high-quality and diverse.",
"Our work can be viewed as a superhuman ver-sion of the classic Turing Test (Turing, 1950).",
"Instead of relying on just a human classifier, we approximate the optimal classifier, which can utilize information about the model in addition to the reference.",
"We also modify the classification problem and seek to identify whether a sample comes from a (potentially superhuman) reference distribution, rather than the human distribution.",
"These two changes lead to tractable, rigorous estimators which can quantify tradeoffs between model quality and diversity on a wide range of generation tasks.",
"Acknowledgements.",
"We would like to thank Arun Chaganty, Robin Jia, and Peng Qi for extensive comments and feedback on the paper.",
"This work was funded by DARPA CwC program under ARO prime contract no.",
"W911NF-15-1-0462.",
"Reproducibility.",
"All code, data, and experiments are available on the CodaLab platform at https://worksheets.",
"codalab.org/worksheets/ 0x88644b5ee189402eb19d39d721d1005c ."
] | [
"method",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"method",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"objective",
"other",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain"
] |
[
"Controllable summarization aims to provide summaries that take into account user-specified aspects and preferences to better assist them with their information need, as opposed to the standard summarization setup which build a single generic summary of a document.",
"We introduce a human-annotated data set (ENTSUM) for controllable summarization with a focus on named entities as the aspects to control.",
"We conduct an extensive quantitative analysis to motivate the task of entity-centric summarization and show that existing methods for controllable summarization fail to generate entity-centric summaries.",
"We propose extensions to state-of-the-art summarization approaches that achieve substantially better results on our data set.",
"Our analysis and results show the challenging nature of this task and of the proposed data set.",
"12 1 Introduction Automatic summarization is a core NLP problem that aims to extract key information from a large document and present it to the user with the role of assisting them to digest the core information in the document faster and more easily.",
"However, each user may have a distinct information need and generating a single summary for a document is not suitable for all readers of the document.",
"Recently, various setups for summarization were proposed such that user preferences can be taken into account in the summarization process.",
"These include providing guidance signals such as summary length (Kikuchi et al., 2016), allowing users to provide terms of interest such as aspects (Amplayo et al., 2021) or entities (Fan et al., 2018) or providing * Equal Contribution Work done during an internship at Bloomberg 1 The data set is available at: https://zenodo.org/ record/6359875 2 The code is available at: https://github.com/ bloomberg/entsum Figure 1: Example of a generic summary (blue), with three entity-centric summaries from ENTSUM focusing on the entities in bold .",
"users the flexibility to interact with the summary and explore new facets of interest (Avinesh et al., 2018).",
"The development of such methods may be paramount in enabling the wide-spread usability of summarization technology.",
"Figure 1 shows an example of a document, its generic summary and summaries controlled through salient named entities in the original document.",
"High quality reference data sets are needed to fos-ter development and facilitate benchmarking.",
"Most summarization data sets are obtained using opportunistic methods such as using abstracts written by editors or librarians when indexing documents.",
"These are by default generic, thus not applicable to controllable summarization.",
"Initial research in this area used small scale human annotations to compare between controllable and generic summarization methods (Fan et al., 2018; He et al., 2020), but these can be prone to biases or qualitative issues, offer only relative quality measurement and do not allow for replicable comparisons between multiple methods or model tuning.",
"Thus, this paper introduces a new data set for controllable summarization focusing on entities as control aspects given these are usually key aspects in documents and their summaries.",
"The data set consists of 2,788 human-generated entity-centric summaries across 645 documents that are obtained 3355 using a strict quality control process mechanism involving several intermediate annotation steps which can be further used in modelling and analyses such as identifying sentences relevant to an entity.",
"The summaries are elicited largely to merge the most important content in a coherent way, while maintaining factuality during the summary creation process.",
"Our data set demonstrates the distinct nature of the entity-centric summarization as opposed to generic summarization and that methods proposed to date for controllable summarization fail at this task.",
"We propose adaptations of state-of-the-art extractive and abstractive summarization methods that significantly improve performance when compared to generic summaries.",
"Our contributions are: the first annotated data set for controllable summarization with entities as targets for control (ENTSUM Entity SUMmarization); quantitative data set analysis that highlights the challenges and distinctiveness of this task; evaluation of generic and also controllable summarization methods on the ENTSUM data set; adaptations of extractive and abstractive summarization methods for performing entity-centric summarization when trained with generic summaries only.",
"Controllable summarization was proposed with the goal of allowing users to define high-level attributes of summaries such as length, source-style or entities (Fan et al., 2018).",
"Methods relied on adapting existing summarization methods such as CNNs (Fan et al., 2018) or BART (He et al., 2020) by pre-pending the controls to the training data and presenting the target control only in inference.",
"However, these methods were only evaluated by comparison to generic summarization methods using human judgments, which can suffer from biases and qualitative issues.",
"Closely related to controllable summarization, guided summarization also uses an input guidance variable in addition to the document when generating the summary (Dou et al., 2021).",
"This is different to controllable summarization because the goal of the guidance signal is to generate an improved generic summary by using the guidance to increase faithfulness and quality.",
"Guidance signals explored in past research include summary length (Kikuchi et al., 2016; Liu et al., 2018b; Sarkhel et al., 2020), keywords (Li et al., 2018; Saito et al., 2020), relations (Jin et al., 2020) or highlighted sentences (Liu et al., 2018a).",
"Opinion summarization is the task of automatically generating summaries for a set of reviews about a specific target and usually involves inferring the aspects of interest, predicting sentiment towards them and generating a summary from the extracted sentences (Kim et al., 2011; Angelidis and Lapata, 2018).",
"Amplayo and Lapata (2021) studied zero-shot controllability to generate need-specific summaries for movie reviews and evaluated using human comparison judgments.",
"Contemporaneous to this work, controllable multi-document summarization for aspects in reviews was introduced (Angelidis et al., 2021; Amplayo et al., 2021).",
"This work created two data sets used for testing, one focusing on six aspects in hotel reviews (SPACE) and another focusing on 18 aspects for product reviews (OPOSUM+), both obtained using a multi-step annotation process related to the one we use in this paper.",
"Interactive Summarization is a technique which aims to provide to an interactive faceted summarization of a set of documents and help the user inquire for more information via suggested or free-text queries (Avinesh et al., 2018; Shapira et al., 2021; Hirsch et al., 2021).",
"This setup is focused on a multi-document scenario where relevant content to a target concept is retrieved, then fed to a generic abstractive summarization method.",
"Recently, Hsu and Tan (2021) proposed decision-focused summarization , where the goal is to summarize information across multiple documents with the goal of aiding a human to forecast an outcome.",
"This section details the collection and annotation process for data set creation.",
"We focus on entities as the aspect to control because named entities are central actors in most news articles and entities are key aspects that make good summaries, together with events and facts.",
"Initial work on controllable summarization considered entities as one of the target for controls (Fan et al., 2018; He et al., 2020).",
"Most large-scale summarization data sets were obtained opportunistically by mining existing sources of documents and their generic summaries expressed either as titles (Narayan et al., 2018), bullet points (Hermann et al., 2015) summaries created for indexing purposes (Sandhaus, 2008) or 3356 TL;DR's created by scientific paper authors (Ca-chola et al., 2020).",
"However, we could not identify any similar proxies for entity-centric summaries.",
"Thus, we created the ENTSUM data set through a manual annotation process.",
"Given a document and entity pair, where the entity is a named entity mentioned in the document, the goal of the annotation is to obtain a summary capturing important information about the entity in that document.",
"Our entity-centric summarization data set consists of news articles from the The New York Times Annotated Corpus (NYT) (Sandhaus, 2008), which consists of 1.8 million articles written between 1987 and 2007.",
"Around 650k articles in the corpus contain article summaries written by library scientists for indexing purposes.",
"We choose to annotate documents from the NYT data set to enable comparison to generic summaries.",
"We selected the NYT data set instead of other popular summarization data sets (e.g. CNN/DailyMail) because of the clarity of the data licensing terms on the NYT corpus for research purposes (Sandhaus, 2008).",
"We use the NYT test set as defined in (Kedzie et al., 2018) to sample the articles used in the ENTSUM data set, as we envision the data set will be used primarily for evaluation purposes.",
"We removed documents with over 1500 words, as we found the majority of these are opinion articles not involving many entities.",
"We split the rest of the documents into sentences and identified named entities using Flair, a high performing system for named entity recognition (Akbik et al., 2019) which iden-tifies Organizations, Person and Location entities.",
"We only select for annotation entities that are Organization and Persons because Locations are usually not salient to the document, thus do not play an active role in the article.",
"From this set, we randomly sampled 10,000 entities spanning 693 documents.",
"Summarization is a highly subjective task because the notion of salient information in a document is user-specific and task-dependent (Iskender et al., 2020).",
"There has been relatively little work on the topic of designing annotation guidelines.",
"The most common method to collect summaries is to ask annotators to summarize the document within a specific length limit (Harman and Over, 2004; Dang, 2006).",
"However, such methods are prone to subjective bias with a low human agreement about the content in the summary (Li et al., 2021).",
"Therefore, to ensure quality of the annotation process, we propose a multi-step approach to collect entity-centric summaries that has similarities to the collection method for opinion summarization (Angelidis and Lapata, 2018).",
"Splitting the tasks in multiple steps allows us to ensure quality of the data set through adjudication across multiple annotations at each step which reduces error propagation across tasks.",
"Figure 2 shows an overview of the four-step annotation process.",
"The first tasks judges if an automatically extracted entity is really a named entity and how salient it is to the source document (Gamon et al., 2013a,b; Dojchinovski et al., 2016; Trani et al., 2016).",
"We do this to keep only salient entities for generating summaries, as others are not important targets for entity-centric summaries and may not have enough related content to produce a summary.",
"Given an article and an entity in the article, we asked the annotators to rate the salience of the entity with respect to the article on a four point scale ranging from not salient (1), through low salience (2), medium salience (3) and high salience (4), similar to Trani et al. (2016).",
"We collected 2 independent annotations for each entity and increased redundancy up to 5 if there was disagreement.",
"We take the salience rating as the average of all individual ratings.",
"We observe that entities with an average rating < 1.5 are generally mentioned once in the document and, therefore can not have a meaningful summary.",
"We remove these entities, resulting in 3,846 entities.",
"We further grouped the entity mentions from each document using substring matching because multiple entity strings can refer to the same entity (e.g. Barack Obama Obama ).",
"After grouping, we obtain 2,788 entities to use in subsequent tasks.",
"The second task aims to identify all sentences in the article that are salient to the target entity.",
"To facilitate the process, we displayed all sentences in a document in a tabular format and premarked sentences that contain the given entity mention.",
"The annotators can add additional sentences or remove existing ones.",
"We also asked the annotators to keep 3357 Figure 2: Annotation pipeline of ENTSUM Metric Overall Entity Type Entity Salience PER ORG Medium High Number of Salient Entities ( Task 1 ) 2788 1741 1047 2100 688 Sentences with entity mentions 3.95 4.21 3.46 3.36 5.65 Entity Salient Sentences ( Task 2 ) 5.80 6.34 5.02 4.95 8.56 Entity-Centric Summary Sentences ( Task 3 ) 2.49 2.59 2.28 2.33 2.66 Summary word length ( Task 4 ) 81.7 84.9 76.1 78.6 88.2 Summary char length ( Task 4 ) 444.3 458.1 421.7 432.1 482.9 Table 1: Statistics for the output of each task in our entity-centric summary annotation pipeline, overall and across entity types and salience scores as annotated in Task 1.",
"the salient sentences as complete as possible by including the sentences that resolve any references in the initially selected sentences.",
"We collected three annotations for each document and entity pair resulting in three annotations for all sentence and entity pairs.",
"We assigned each sentence a binary label (salient to the entity or not) using majority vote across the three annotations.",
"number of premarked sentences (3.95), indicating this task resulted in an expansion from only using the sentences that explicitly mention the target entity.",
"The third task aims to identify the sentences in the article that are used to make up the entity-centric summary.",
"We display the sentences of the document in a tabular format with the salient sentences extracted from the previous task highlighted and allowed the annotators to select only from these sen-3358 tences.",
"We instructed the annotators to first select up to 3 sentences and add up to 3 more sentences if these are needed to provide context.",
"The final task is to write a coherent summary for the entity in the document of up to 150 words using the summary sentences selected previously.",
"This task was performed together with the third task, as they are tightly coupled, to limit cognitive load and to be able to control for quality by comparing selected summary sentences.",
"As this is a labor intensive task, we collected two annotations for a subset of the target entities (867 out of 2,788) to measure agreement.",
"We provide both summaries in the data set release in order to facilitate evaluation with multiple references.",
"The annotated summary sentences represent only 41.3% of all salient sentences across all the tasks.",
"Table 1 shows the annotation statistics.",
"We note the output of each task is released with the ENTSUM data set and can be used when training models, for separate tasks or as auxility tasks in a multi-task learning setup.",
"We devised multiple tasks to accomplish our goal of ensuring quality throughout the annotation process and to make the complex and subjective task of summarization easier for annotators.",
"We adjudicate annotations across multiple annotators to reduce error propagation, wherein if one task has wrong annotations, the subsequent tasks will have the error propagated.",
"We use our internal annotation platform for obtaining annotations.",
"The annotation was performed using a group of English-speaking vendors who were hired and trained for completing this task through training sessions and performed the task independently from each other.",
"We do not collect any private information from the annotators and do not release the identity of the annotators together with the data.",
"We conducted several training sessions and initial rounds with the annotators, the results of which were discarded, to ensure the annotators are proficient in the task.",
"The training rounds included 100 items for the first two tasks and 50 items for the latter two for all annotators.",
"We perform multiple annotations for the upstream tasks.",
"For the entity salience task which is a four-way classification task, we elicit 2 annotations for each item and, if these disagree, we increase redundancy to up to 5 annotations if there is no majority (2 annotations 6261 items; 3 annotations 3318 items; 5 annotations 421 items).",
"For the salient sentence extraction task, we elicit 3 annotations for each item and adjudicate annotations at the sentence level using majority vote.",
"We report inter-annotator agreement for each task.",
"For the 4-way ordinal entity salience task we observe 0.709 interval Krippendorf's Alpha (Krip-pendorff, 2011), which corresponds to substantial agreement (Artstein and Poesio, 2008).",
"The annotators agreed on a single annotation 62.6% of the time.",
"For the salient sentence selection task, we compute inter-annotator agreement using Krippendorf's Alpha between binary sentence-level judgments and obtain a value of 0.744 Krippendorf's Alpha, which again indicates substantial agreement.",
"All three annotators agreed on the same value for 88.4% of the sentences.",
"Selecting the summary sentences is a more subjective task, especially given that all sentences are salient to the target entity.",
"Despite this, the inter-annotator agreement is of 0.539 Krippendorf's Alpha, which is considered good agreement.",
"Finally, in the summary creation task, we compute ROUGE (Lin and Hovy, 2003) between the summaries and achieve the following values: ROUGE-1 = 71.7; ROUGE-2 = 62.6 and ROUGE-L = 69.0.",
"We release both summaries in our data set where available, as these could be used as multiple references when computing evaluation metrics.",
"Summary Statistics Table 2 presents summary statistics relevant to summarization data for the newly introduced ENTSUM data set, with the commonly used document generic summarization data sets CNN-DailyMail (CNNDM) and NYT.",
"We note that summaries in ENTSUM are shorter than their generic counterparts in the NYT corpus, but longer than those in CNNDM, except for the number of sentences, which is expected as the summaries in CNNDM undergo the most compression as demonstrated by the article compression ratio.",
"ENTSUM exhibits the lowest percentage of novel unigrams and bigrams, in line with how our annotation was set up to focus on integrating the original content in a coherent summary.",
"The entity-specific salient text is significantly shorter than the entire document and, as a result, the summary contains the relevant content without requiring dramatic paraphrasing or compression.",
"Comparison to Generic Summaries Our hypothesis is that a new data set for entity-centric summarization is needed as entity-centric summaries do not align well with generic summaries.",
"We compute ROUGE (Lin and Hovy, 2003) scores between the entity-centric summaries in ENTSUM and their corresponding generic summaries in the NYT corpus, with the following values: ROUGE-1 = 26.2, ROUGE-2 = 9.8 and ROUGE-L = 22.9.",
"Low scores show there is low lexical and content overlap between the entity-centric summaries and their corresponding document summaries, demonstrating the distinctiveness of the entity-centric summarization task.",
"Entity Type and Salience Table 1 shows the task-specific statistics of ENTSUM by entity type and salience level separately.",
"We note that the data set has more person entities than organizations and, on average, the related content and summaries associated to people is slightly longer.",
"There are significantly more entities with medium salience values when compared to highly salient entities, which are an average slightly more than one for each document.",
"We note that both sentences with entity mentions and salient sentences to the entities are substantially larger in number for highly salient entities, but there is just a small gap for the entity-centric summaries and sentences, which shows that more selection and compression was achieved for these highly salient entities.",
"Sentence Position Distribution Figure 3 shows the position distribution of entity salient and entity-centric summary sentences in the original document.",
"The figure highlights that both types of sentences are more likely to be distributed at the start of the document, which is expected given we are only considering salient entities to the document.",
"We see that sentences used for summaries are even more likely to be towards the start of the document.",
"However, the sentence distribution is not very skewed, with hundreds of summary sentences being present even in position 20 or higher in the original document.",
"This highlights the challenging nature of the data set.",
"For an initial modelling attempt for the ENTSUM data set, we evaluate all controllable summarization approaches proposed to date, generic summarization methods, strong heuristics for summarization and a couple of adaptations of state-of-the-art meth-Figure",
"ods for abstractive (Dou et al., 2021) and extractive summarization (Liu and Lapata, 2019) to the entity-centric summarization task.",
"Some of the methods described in this section involve detecting the entity mentions in documents unlabeled with entities in training and/or at inference time.",
"For this, we use a combination of standard methods for NER based on Flair (Akbik et al., 2018) and their coreferent mentions as identified through the SpanBERT coreference system (Joshi et al., 2020).",
"Abstractive summarization uses generation methods to express the content of the original document.",
"We denote through ConvNet the first method for controllable summarization proposed in Fan et al. (2018).",
"It adopts a CNN encoder-decoder model for summarization and is trained by replacing entities in the document with placeholders and prepending them to the document.",
"At inference time, only the target entity is prepended to the summary to generate the entity-centric summary (Fan et al., 2018).",
"CTRLSum (He et al., 2020) is a method based on BART (Lewis et al., 2020), a popular Transformer-based sequence-to-sequence model for summarization.",
"CTRLSum is fine-tuned by prepending keywords, in this case all detected entity mentions, to the input document to control the summary (He et al., 2020).",
"At inference time, only the target entity is prepended to the target document to generate the entity-centric summary.",
"GSum (Dou et al., 2021) is a document summarization framework that allows for using as input a guid-3360",
"ance signal (e.g. keywords, sentences) along with the source document with the goal of improving the generic document summarization task through improving faithfulness.",
"The model architecture consists of a Transformer (Vaswani et al., 2017) model initialized with BART (Lewis et al., 2020).",
"The model has two encoders: one to encode the source document and the other to encode the guidance signal.",
"The encoders share the embedding and the encoding layers except for the topmost layer.",
"The decoder first attends to the guidance signal to select the part of the document to focus on and then attends to the document with these guidance-aware representations.",
"The framework allows to include varied guidance signals and demonstrates improvements on generating generic summaries.",
"We adapt GSum to generate entity summaries by using the entity information as guidance signal.",
"However, the original GSum implementation used a single generic summary as output for each input document, which is not suitable for our setup in which the output is conditioned on both the input document and the guidance signal (i.e. entity).",
"In addition, we do not have access to gold entity mentions in training and inference and, because we only use ENTSUM in evaluation only, we do not have gold reference entity-centric summaries.",
"We create proxies as above for the input and output in training as follows: for each training and testing (document, entity) pair, we feed the full document and as guidance input either the mention string ( GSum ent name ) or the sentences that mention the given entity ( GSum ent sent ) as detected by our NER and coreference approach previously described; the output summary for each (document, entity) training pair is obtained from the reference entity-agnostic summary as follows:",
"(a) Select at most 3 sentences in the reference that mention the entity;",
"(b) If we obtain less than 3 sentences in the previous step, then select the remaining sentences from the lead 3 sentences that mention the given entity.",
"Selecting the top sentences in a document is a strong heuristic for the document summarization tasks (Nallapati et al., 2017).",
"We evaluate the following variants: Lead3 ovr is a generic summarization method that selects the first three sentences in the document irrespective of the target entity.",
"Lead3 ent is the entity-aware summarization variant which selects the first three sentences in the document that mention the given entity, as inferred by our NER and coreference resolution approach.",
"BERTSum obtains near state-of-the-art results for extractive summarization (Liu and Lapata, 2019).",
"The method uses the BERT (Devlin et al., 2019) encoder to generate representations for each sentence, then models the interactions between these sentences through a BERTSum summarization layer and then predicts the most important sentences from these as the sentences to be part of the generic summary.",
"We evaluate on both all and top 3 predicted sentences to make fair comparisons with Lead3 baselines.",
"We adapt BERTSum in the training phase by restricting the input only to all the sentences containing the entity string mention and its coreferent mentions, instead of the entire source document.",
"In training, the output entity-centric summary is constructed in a similar way to the GSum training procedure, where we use the generic summary to select top 3 sentences that mention the entity or otherwise up to 3 sentences that mention the entity.",
"Most previous approaches make the realistic as-sumption that gold entity mentions or other entity-related annotations are not available at inference time.",
"To explore the impact of these, we explore the following additional heuristics: 3361 Oracle Lead3 ent (salient) uses as summary the first three salient sentences selected by annotators during the second step of the annotation pipeline.",
"Oracle Lead3 ent (summary) uses as summary the first three sentences selected by annotators for writing the summary.",
"We train all non-entity-centric methods on the NYT corpus consisting of 44,382 training and 5,523 validation (document, summary) pairs as specified in Kedzie et al. (2018).",
"However, this data set size increases to 464,339 training and 58,991 validation pairs when training the adapted GSum and BERTSum as each document contains multiple entities resulting in multiple <document, summary> pairs for a single document.",
"We use the author's implementations for the following methods: CTRLSum, 3 BERTSum, 4 and GSum.",
"5 We reimplement the ConvNet method using the FairSeq library (Ott et al., 2019) as described in Fan et al. (2018).",
"For all our implementations, we first train on the CNN DailyMail data set and compared to published numbers to ensure we are able to reproduce the original results and then retrain on the NYT data set for reporting our results on ENTSUM.",
"We experiment with various hyperparameter settings for each of the architectures but we find that the original hyperparamters used for training each of the CNN DailyMail models seem to be the most stable and produce the best results.",
"We automatically evaluate the quality of the generated summaries using unigram and bigram overlap (ROUGE-1 and ROUGE-2), which are a proxy for assessing informativeness and use the longest common subsequence (ROUGE-L) to measure fluency (Lin and Hovy, 2003).",
"We also use BERTScore (Zhang et al., 2020) to compute a similarity score 3 https://github.com/salesforce/ ctrl-sum 4 https://github.com/nlpyang/BertSum 5 https://github.com/neulab/guided_ summarization for each token in the generated summaries with each token in the reference summaries using con-textualized word embeddings provided by BERT (Devlin et al., 2019).",
"BERTScore incorporates semantic information behind sentences, thus can provide better evaluations for cases where ROUGE score fails to account for meaning-preserving lexical and semantic diversity.",
"BERTScore showed to have better correlations with human judgments for natural language generation (Zhang et al., 2020).",
"For the samples in ENTSUM where we have multiple reference summaries, we take the maximum ROUGE or BERTScore scores.",
"We also report the average sentence and word lengths of the generated summaries to observe summary statistics for the behavior of the output, as automated metrics are sensitive to summary length.",
"We benchmark all methods described above on the newly proposed ENTSUM data set in order to establish baseline performance of both abstractive and extractive methods for this new task and data set.",
"Table 3 shows the automatic evaluation results.",
"The results show the following trends across all four evaluation metrics: Entity-centric summarization is very different to generic summarization given that methods that do not take entity information into account (Lead3 ovr , GSum ovr ) perform significantly lower than the best methods in the same class which use entity information.",
"Previously introduced methods (ConvNet, CTRLSum) for controllable summarization can not perform well on entity-centric summarization with their results being over 17 BERTScore and 29 ROUGE-L lower than the proposed adaptation for abstractive summarization on entity-centric summaries.",
"Further, these methods actually obtain lower results by 4.93 BERTScore and 7.43 ROUGE-L than the entity-agnostic GSum ovr method, which shows these methods are not effective at modelling entity-centric information through their training and inference process.",
"Our proposed adaptations to both abstractive and extractive methods perform well on entity-centric evaluation, despite they were trained on a data set that used proxies for entity-centric summaries.",
"For extractive summarization BERTSum ent top 3 performs better than 3362 ROUGE-1 ROUGE-2 ROUGE-L BERTScore Avg.",
"Len Sent.",
"BERTSum ovr by 34.23 ROUGE-L and by 19.65 on BERTScore, while for abstractive summarization GSum ent sent is better than GSum ovr by 21.07 ROUGE-L and 12.87 BERTScore.",
"We also see that the choice of guidance signal in the GSum framework is impactful, with using sentences with entities leading to 9.62 ROUGE-L and 5.76 BERTScore improvements over using the entity name.",
"Extractive approaches perform better than abstractive methods, which is expected due to the extractive nature of the ENTSUM data set, the gap between the best performing methods (BERTSum ent top 3 and GSum ent sent ) is clear, when using BERTScore (+2.02) which better estimates semantic similarity opposed to the n-gram matches used in ROUGE (+7.66 on ROUGE-2, +6.03 on ROUGE-L).",
"Lead3 ent is a very strong baseline as expected, because this is a strong baseline for document summarization in general and especially because ENTSUM is by design a more extractive summarization data set.",
"Lead3 using oracle selected sentences perform much better than Lead3 and shows the benefits of selecting salient sentences (+7.36 ROUGE-L, +5.16 BERTScore) and the benefits of selecting the most important sentences used in writing the summary (further +9.82 ROUGE-L, +6.26 BERTScore compared to top salient sentences).",
"The absolute results also show there is further room for improvement in entity-centric summarization approaches, given that performance of automated methods still lags behind Lead3 ent , whereas this is currently surpassed by automated methods in generic summarization.",
"We introduced the first annotated data set (ENTSUM) for controllable summarization where entities are targets for control.",
"We conducted a quantitative analysis of the newly created resource and highlighted how this is different to generic summarization methods.",
"We used the ENTSUM data set for benchmarking state-of-the-art generic abstractive and extractive summarization methods, as well as initial methods for controllable summarization.",
"Further, we proposed a new setup for learning entity-centric summaries from generic summarization data sets and, extending previous methods, demonstrated good performance on the newly proposed ENTSUM data set.",
"In the future, we aim to propose new methods for both extractive and abstractive summarization performance through modelling information about the document and the entity in a more complex way.",
"We also plan to create a data set for entity-centric summarization that is more abstractive in nature.",
"We would like to thank Chen-Tse Tsai, Umut Top-kara and the other members of the NLP team and the broader Bloomberg AI group, who provided invaluable feedback on the task framing and experiments.",
"We wish to thank Wei Xu for supporting the collaboration.",
"We are grateful to our annotators for their diligence in performing this annotation task."
] | [
"abstain",
"method",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"method",
"objective",
"objective",
"method",
"other",
"other",
"other"
] |
[
"Non-Parametric Few-Shot Learning for Word Sense Disambiguation Howard Chen, Mengzhou Xia, Danqi Chen Princeton University {howardchen, mengzhou, danqic}@cs.princeton.edu Abstract Word sense disambiguation (WSD) is a longstanding problem in natural language processing.",
"One significant challenge in supervised all-words WSD is to classify among senses for a majority of words that lie in the longtail distribution.",
"For instance, 84% of the annotated words have less than 10 examples in the SemCor training data.",
"This issue is more pronounced as the imbalance occurs in both word and sense distributions.",
"In this work, we propose MetricWSD, a non-parametric few-shot learning approach to mitigate this data imbalance issue.",
"By learning to compute distances among the senses of a given word through episodic training, MetricWSD transfers knowledge (a learned metric space) from high-frequency words to infrequent ones.",
"MetricWSD constructs the training episodes tailored to word frequencies and explicitly addresses the problem of the skewed distribution, as opposed to mixing all the words trained with parametric models in previous work.",
"Without resorting to any lexical resources, MetricWSD obtains strong performance against parametric alternatives, achieving a 75 .",
"1 F1 score on the unified WSD evaluation benchmark (Raganato et al., 2017b).",
"Our analysis further validates that infrequent words and senses enjoy significant improvement.",
"1 1 Introduction Word sense disambiguation (WSD) (Navigli, 2009) is a widely studied problem that aims to assign words in text to their correct senses.",
"Despite advances over the years, a major challenge remains to be the naturally present data imbalance issue.",
"Models suffer from extreme data imbalance, rendering learning the long-tail examples a major focus.",
"In the English all-words WSD task (Raganato et al., 2017b), 84% of the annotated words 2 have less 1 Our code is publicly available at: https://github.",
"than 10 occurrences in the training data and the most frequent sense (MFS) accounts for a large portion of the examples, resulting in a 65 .",
"2 test F1 score by simply predicting MFS (Figure 1).",
"Recent approaches tackle this problem by resorting to extra sense information such as gloss (sense definition) and semantic relations to mitigate the issue of rare words and senses (Luo et al., 2018b,a; Kumar et al., 2019; Huang et al., 2019; Blevins and Zettlemoyer, 2020; Bevilacqua and Navigli, 2020).",
"However, most work sticks to the parametric models that share parameters between words and adopts standard supervised learning mixing all the words of different frequencies.",
"We argue that this accustomed paradigm exposes a missing opportunity to explicitly address the data imbalance issue.",
"In this work, we propose MetricWSD, a simple non-parametric model coupled with episodic training to solve the long-tail problem, drawing inspiration from few-shot learning methods such as Prototypical Networks (Snell et al., 2017).",
"Given a word, the model represents its senses by encoding a sampled subset ( support set ) of the training data and learns a distance metric between these sense representations and the representations from the remaining subset ( query set ).",
"This lightens the load for a model by learning an effective metric space instead of learning a sense representation from scratch.",
"By sharing only the parameters in the text encoder, the model will trickle the knowledge of the learned metric space down from high-frequency words to infrequent ones.",
"We devise a sampling strategy that takes word and sense frequency into account and constructs support and query sets accordingly.",
"In combination, this non-parametric approach naturally fits in the imbalanced few-shot problems, which is a more realistic setting when learning from a skewed data distribution as in WSD.",
"We evaluate MetricWSD on the unified WSD evaluation benchmark (Raganato et al., 2017b), achieving a 75 .",
"1% test F1 and outperforming parametric baselines using only the annotated sense supervision.",
"A further breakdown analysis shows that the non-parametric model outperforms the parametric counterparts in low-frequency words and senses, validating the effectiveness of our approach.",
"Word sense disambiguation has been studied extensively as a core task in natural language processing.",
"Early work computes relatedness through concept-gloss lexical overlap without supervision (Lesk, 1986; Banerjee and Pedersen, 2003).",
"Later work designs features to build word-specific classifiers ( word expert ) (Zhong and Ng, 2010; Shen et al., 2013; Iacobacci et al., 2016).",
"All-words WSD uni-fies the datasets and training corpora by collecting large scale annotations (Raganato et al., 2017b), which becomes the standard testbed for the WSD task.",
"However, due to the naturally present longtail annotation, word expert approaches fall short in utilizing information across different words.",
"Recent supervised neural approaches prevail word-independent classifiers by more effective sentence feature extraction and achieve higher performance (Kgebck and Salomonsson, 2016; Raganato et al., 2017a).",
"Approaches that use large pre-trained language models (Peters et al., 2018; Devlin et al., 2019) further boost the performance (Hadi-winoto et al., 2019).",
"Recent work turns to incorporate gloss information (Luo et al., 2018b,a; Huang et al., 2019; Loureiro and Jorge, 2019; Blevins and Zettlemoyer, 2020).",
"Other work explores more lexical resources such as knowledge graph structures (Kumar et al., 2019; Bevilacqua and Navigli, 2020; Scarlini et al., 2020b,a).",
"All the above approaches mix words in the dataset and are trained under a standard supervised learning paradigm.",
"Another close work to ours is Holla et al. (2020), which converts WSD into an N -way, K -shot few-shot learning problem and explores a range of meta-learning algorithms.",
"This setup assumes disjoint sets of words between meta-training and meta-testing and deviates from the standard WSD setting.",
"Given an input sentence x = x 1 , x 2 , . . . , x n , the goal of the all-words WSD task is to assign a sense y i for every word x i , where y i S x i S for a given sense inventory such as the WordNet.",
"In practice, not all the words in a sentence are annotated, and only a subset of positions are identified I { 1 , 2 , . . . , n } to be disambiguated.",
"The goal is to predict y i for i I .",
"We regard all the instances of a word w W as a classification task T w , since only the instances of word w share the output label set S w .",
"We de-fine input x = ( x, t ) where x is an input sentence, and 1 t n is the position of the target word and the output is y t for x t .",
"A WSD system is a function f such that y = f ( x ) .",
"Our method groups the training instances by word w : A ( w ) = { ( x ( i ) , y ( i ) ) : x ( i ) t ( i ) = w } N ( w ) i =1 where N ( w ) is the number of training instances for T w .",
"It allows for word-based sampling as opposed to mixing all words in standard supervised training.",
"We construct episodes by words with a tailored sampling strategy to account for the data imbalance issue.",
"In each episode, all examples A ( w ) of a word w are split into a support set S ( w ) containing J distinct senses and a query set Q ( w ) by a predefined ratio r (splitting r % into the support set).",
"When the support set is smaller than a predefined size K , we use the sets as they are.",
"This split maintains the original sense distribution of the infrequent words as they will be used fully as support instances during inference.",
"On the other hand, frequent words normally have abundant examples to form the support set.",
"To mimic the few-shot behavior, we sample a balanced number of examples per sense in the support set for frequent words (referred to as the P b strategy).",
"We also compare to the strategy where the examples of all senses of Algorithm 1 Episodic Sampling 1: K : maximum sample number for support set 2: r : support to query splitting ratio 3: P : sampling strategy { P b , P u } 4: Initialize empty dataset D = 5: for all w W do 6: Retrieve A ( w ) and randomly split A ( w ) into S ( w ) and Q ( w ) with a ratio r .",
"the word are uniformly sampled (referred to as the P u strategy).",
"We present the complete sampling strategy in Algorithm",
"1. 3.3 Learning Distance Metric We use BERT-base (uncased) (Devlin et al., 2019) as the context encoder.",
"We follow Blevins and Zettlemoyer (2020) closely and denote context encoding as f ( x ) = BERT ( x )[ t ] where the context encoder is parameterized by .",
"If a word x t is split into multiple word pieces, we take the average of their hidden representations.",
"In each episode, the model encodes the contexts in the support set S ( w ) and the query set Q ( w ) , where the encoded support examples will be taken average and treated as the sense representations ( prototypes ).",
"For word w , the prototype for sense j among the sampled J senses is computed from the support examples: c j = 1 |S j ( w ) | (cid:88) ( x,y ) S j ( w ) f ( x ) , (1) where S j ( w ) = { ( x ( i ) , y ( i ) ) : y ( i ) = j } |S j | i =1 S ( w ) .",
"We compute dot product 3 as the scoring function s ( , ) between the prototypes and the query representations to obtain the probability of predicting sense j given an example ( x (cid:48) , y (cid:48) ) : p ( y = j | x (cid:48) ) = exp( s ( c j , f ( x (cid:48) )) (cid:80) k exp( s ( c k , f ( x (cid:48) ))) .",
"The loss is computed using negative log-likelihood and is minimized through gradient descent.",
"During inference, we randomly sample min( IS , |A j ( w ) | ) examples in the training set for sense j as the support set, where IS is a hyperparameter.",
"We also experimented with a cross-attention model which learns a scoring function for every pair of instances, similar to the BERT-pair model in Gao et al. (2019); however, we didn't find it to perform better than the dual-encoder model.",
"Our non-parametric approach is inspired and closely related to Prototypical Networks (Snell et al., 2017) with several key differences.",
"First, instead of using disjoint tasks (i.e., words in our case) for training and testing, MetricWSD leverages the training data to construct the support set during inference.",
"Second, we control how to sample the support set using a tailored sampling strategy (ei-ther balanced or uniform sense distribution).",
"This encourages learning an effective metric space from frequent examples to lower-frequency ones, which is different from adapting between disjoint tasks as in the typical meta-learning setup.",
"We evaluate our approach with the WSD framework proposed by Raganato et al. (2017b).",
"We train our model on SemCor 3.0 and use SemEval-2007 (SE07) for development and the rest: Senseval-2 (SE02), Senseval-3 (SE03), SemEval-2013 (SE13), and SemEval-2015 (SE15) for testing.",
"Following standard practice, we report performance on the separate test sets, the concatenation of all test sets, and the breakdown by part-of-speech tags.",
"For all the experiments, we use the BERT-base (uncased) model as the text encoder.",
"Baselines We first compare to two simple baselines: WordNet S1 always predicts the first sense and MFS always predicts the most frequent sense in the training data.",
"We compare our approach to BERT-classifier: a linear classifier built on top of BERT (all the weights are learned together).",
"As opposed to our non-parametric approach, the BERT-classifier has to learn the output weights from scratch.",
"We compare to another supervised baseline using contextualized word representations that extends the input context text with its surrounding sentences in the SemCor dataset (Hadiwinoto et al., 2019).",
"We also compare to a non-parametric nearest neighbor baseline BERT-kNN, which obtains Dev Test Datasets Concatenation of Test Datasets Gloss?",
"sense representations by averaging BERT encoded representations from training examples of the same sense.",
"It predicts the nearest neighbor of the input among the sense representations.",
"The BERT weights are frozen which, different from our approach, does not learn the metric space.",
"Models using only supervised WSD data fall back to predicting the most frequent sense (MFS) when encountering unseen words.",
"For reference, we also list the results of recent state-of-the-art methods that incorporate gloss information including EWISE (Ku-mar et al., 2019), EWISER (Bevilacqua and Nav-igli, 2020), GlossBERT (Huang et al., 2019), and BEM (Blevins and Zettlemoyer, 2020).",
"More implementation details are given in Appendix A. Overall results Table 1 presents the overall results on the WSD datasets.",
"Comparing against systems without using gloss information, MetricWSD achieves strong performance against all baselines.",
"In particular, MetricWSD outperforms BERT-classifier by 1 .",
"4 points and BERT-kNN by 2 .",
"5 points respectively in F1 score on the test set.",
"Using gloss information boosts the performance by a large margin especially for unseen words, where systems without access to gloss can only default to the first sense.",
"We believe adding gloss has the potential to enhance the performance for our non-parametric approach and we leave it to future work.",
"Performance on infrequent words and senses The performance breakdown for words and senses of different frequency groups is given in Figure",
"2. The non-parametric methods (both MetricWSD and BERT-kNN) are better at handling infrequent words and senses.",
"In particular, our approach outperforms BERT-classifier 3 .",
"5% for the words with 10 occurrences and 6 .",
"6% for the senses with 10 occurrences.",
"It demonstrates the effectiveness of MetricWSD to handle scarce examples.",
"Ablation on sampling strategies We provide an ablation study for the sampling strategy on the development set.",
"The system using the balanced strategy ( P b ) achieves a 71 .",
"4 F1 on the development set and drops to 69 .",
"2 F1 when the uniform strategy ( P u ) is used.",
"Balancing the sampled word = nove (v) word = nove (v) word = nove (v) word = nove (v) BERT-classifier word = nove (v) word = nove (v) word = nove (v) word = nove (v) MetricWSD word = provide (v) word = provide (v) word = provide (v) word = provide (v) BERT-classifier word = provide (v) word = provide (v) word = provide (v) word = provide (v) MetricWSD Figure 3: t-SNE visualization of the learned representations f ( x ) for the examples of note (v) and provide (v) in the SemCor dataset.",
"senses achieves significantly higher performance than sampling with the uniform distribution and this observation is consistent across different hyper-parameter settings.",
"Qualitative analysis Table 2 shows the examples which are correctly predicted by our method but incorrectly predicted by BERT-classifier.",
"We see that MetricWSD is able to correctly predict the sense art%1:09:00:: (a superior skill that you can learn by study and practice and observation), which has only 6 training examples.",
"The BERT-classifier model incorrectly predicts the sense art%1:06:00:: (the products of human creativity; works of art collectively) that has many more training examples.",
"Visualization of learned representations We conduct a qualitative inspection of the learned representations for the BERT-classifier model and MetricWSD.",
"Figure 3 shows the encoded representations of all 105 examples in the SemCor dataset of the word note (with part-of-speech tag v ).",
"We see that the BERT-classifier model fails to learn distinct grouping of the senses while MetricWSD forms clear clusters.",
"Note that even for the sense (red) with only few examples, our method is able to learn representations that are meaningfully grouped.",
"Similarly, MetricWSD separates senses more clearly than BERT-classifier for the word provide (with part-of-speech tag v , especially on the rare sense (pink).",
"In this work, we introduce MetricWSD, a few-shot non-parametric approach for solving the data imbalance issue in word sense disambiguation.",
"Through learning the metric space and episodic training, the model learns to transfer knowledge from frequent words to infrequent ones.",
"MetricWSD outperforms previous methods only using the standard annotated sense supervision and shows significant improvements on low-frequency words and senses.",
"In the future, we plan to incorporate lexical information to further close the performance gap.",
"We thank the members of the Princeton NLP group and the anonymous reviewers for their valuable comments and feedback.",
"We also thank Terra Blevins at University of Washington for providing code and checkpoints for the baselines.",
"Both HC and MX are supported by a Graduate Fellowship at Princeton University.",
"We identify areas where the WSD applications and our proposed approach will impact or benefit users.",
"WSD systems are often used as an assistive submodule for other downstream tasks, rendering the risk of misuse less pronounced.",
"However, it might still exhibit risk when biased data incurs erroneous disambiguation.",
"For example, the word shoot might have a higher chance to be interpreted as a harmful action among other possible meanings when the context contains certain racial or ethnic groups that are biasedly presented in training data.",
"Our proposed method does not directly address this issue.",
"Nonetheless, we identify the opportunity for our approach to alleviate the risk by providing an easier way to inspect and remove biased prototypes instead of making prediction using learned output weights that are hard to attribute system's biased behavior.",
"We hope future work extends the approach and tackles the above problem more explicitly."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"other",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain"
] |
[
"Deep neural models have repeatedly proved excellent at memorizing surface patterns from large datasets for various ML and NLP benchmarks.",
"They struggle to achieve human-like thinking, however, because they lack the skill of iterative reasoning upon knowledge.",
"To expose this problem in a new light, we introduce a challenge on learning from small data, PuzzLing Machines , which consists of Rosetta Stone puzzles from Linguistic Olympiads for high school students.",
"These puzzles are carefully designed to contain only the minimal amount of parallel text necessary to deduce the form of unseen expressions.",
"Solving them does not require external information (e.g., knowledge bases, visual signals) or linguistic expertise, but meta-linguistic awareness and deductive skills.",
"Our challenge contains around 100 puzzles covering a wide range of linguistic phenomena from 81 languages.",
"We show that both simple statistical algorithms and state-of-the-art deep neural models perform inadequately on this challenge, as expected.",
"We hope that this benchmark, available at https://ukplab.github.io/ PuzzLing-Machines/ , inspires further efforts towards a new paradigm in NLPone that is grounded in human-like reasoning and understanding.",
"Kahneman (2011) discusses the two modes of human thinking which perfectly encapsulate the current (so called System1) and the desired state (Sys-tem1+System2) of the deep learning field.",
"System1 handles tasks that humans consider fast, intuitive and automatic, such as object detection and document classification.",
"Recent deep learning (DL) models have shown great promise at this type of tasksthanks to large training datasets.",
"Yet, it is through slow, rational and sequential mechanisms that human-like abstract reasoning happens, Chikasaw English 1. Ofi'at kowi'a lhiyohli.",
"This System2-style modeling is still in its early stages in DL, but is recognized as a much needed next step in the field (McClelland et al., 2019; Marcus, 2020; LeCun, 2020; Bengio, 2020).",
"To foster research in this promising direction, we propose a unique challenge on learning from small data: PuzzLing Machines , based on the Linguistic Olympiadsone of the 13 recognized International Science Olympiads targeted at high-school students.",
"The PuzzLing Machines challenge is based on one of the most common puzzle types in the Linguistic Olympiads: the Rosetta Stone puzzles (Bozhanov and Derzhanski, 2013), a.k.a. translation puzzles.",
"An example is given in Table 1. 1 Although these puzzles take the form of a traditional machine translation task, they are different in many ways: Rosetta Stone puzzles contain a minimal, carefully designed set of parallel expressions (words, phrases or sentences) in a for-1 Copyright University of Oregon, Department of Linguistics.",
"eign and in a familiar language (e.g., Chickasaw-English).",
"This minimal set is just enough to deduce the underlying translation model, which typically involves deriving mini-grammar rules, extracting a lexicon, and discovering morphological and phonological rules.",
"The actual task then is to translate new expressionsgenerally in both directions using the model deduced from the parallel data.",
"The assignments are carefully designed so that the expressions cannot be generated through simple analogy, but rather through the application of the discovered rules.",
"These properties distinguish the PuzzLing Machines challenge from the modern MT task, as it relies on deductive reasoning with linguistic concepts that are central to System2, rather than exploiting statistical properties from large datasets as in System1.",
"The lack of reasoning skills of statistical systems has recently gained a lot of attention.",
"Various datasets that require a wide range of background knowledge and different types of reasoning abilities have been introduced, such as ARC (Clark et al., 2018), GQA (Hudson and Manning, 2019), GLUE benchmarks (Wang et al., 2018) and SWAG (Zellers et al., 2018).",
"Our challenge distinguishes from previous benchmarks with some key properties.",
"First, most of these reasoning tasks require external scientific or visual knowledge, which makes it hard to measure the actual reasoning performance.",
"On the other hand, our challenge does not rely on any external, multimodal or expert-level information.",
"Second, and more importantly, PuzzLing challenge consists of a minimal set of examples required for solution.",
"That means, there exists no extra training data, ensuring that exploiting surface patterns would not be possible unlike in some of existing benchmarks (Gururan-gan et al., 2018).",
"In summary, this paper introduces a unique challenge, PuzzLing Machines , made up of 100 Rosetta Stone, a.k.a translation puzzles covering 81 languages from 39 different language families based on the Linguistic Olympiads.",
"The challenge requires System2 skillssequential reasoning and abstraction of linguistic concepts, discussed in detail in 2. We discuss the dataset and the linguistic phenomena in the resulting dataset supported with statistics and examples in 3. In 4, we present the results of intuitive baseline methods and strong MT baselines such as Transformers encoder-decoder (Vaswani et al., 2017) with integrated pretrained language models as applied to these puzzles.",
"We show that, unsurprisingly, the puzzles cannot be easily or robustly solved by currently existing methods.",
"We hope that this benchmark is going to evoke development of new deep MT/NLP models that operate in a human-like manner and reason upon linguistic knowledge, providing a new future research direction for NLP.",
"Meta-linguistics is defined by Chomsky (1976) as the knowledge of the characteristics and structures of language as realised on the level of phonology, morphology, syntax and semantics.",
"Any English speaker would likely have the linguistic capacity to produce the word undo when asked What is the opposite of do ?",
"Only a speaker with some level of meta-linguistic awareness, however, would further be able to reflect on the structure of the word they have produced: to identify unas a unit that serves to negate words, to spot its similarity in function to other units like disand de.",
"He/she would also be aware that unis not interchangeable with dis-and de, since it attaches to the front of verbs and adjectives but not to nouns.",
"Meta-linguistic awareness is especially useful (and often improved) in the process of learning a new language, as it allows the learner to compare and contrast the structure and characteristics of the new language to those that he/she is already familiar with.",
"It is desirable that systems for natural language processing possess meta-linguistic awareness, too, as that could hugely improve their cross-lingual generalizability, a problem that remains open after being approached from various engineering perspectives, often with little recourse to linguistics.",
"However, measuring the meta-linguistic awareness of a system is not trivial.",
"Existing probing techniques are mostly designed to measure how well neural models capture specific linguistic phenomena, e.g., whether a specific layer of an English language model can capture that undo is negative, instead of testing for meta-linguistic awareness.",
"Our challenge takes a step further and tests whether the model can apply the underlying morphological processes, e.g. of verbal negation through prefix-ing.",
"In addition, our challenge spans a wide-range of language families and covers a variety of linguistic phenomena (see 3.1), that qualifies it as a favorable testbed for measuring meta-linguistic awareness.",
"Let us demonstrate how meta-linguistic reasoning skills are used to solve the Chickasaw puzzle given in Table 1. The translation model is iteratively deduced as follows: (1) the word order in Chickasaw is Subject-Object-Verb (SOV), unlike the English SVO word order; (2) nouns take different suffixes when in a subject or object position ( at and a , respectively); (3) verbs take a suffix for 1st person singular pronomial subject or object ( li and sa , respectively).",
"Notice that, crucially, it is not possible to learn the function of the prefix sa , which corresponds to me in English, without deducing that lhiyohli corresponds to the verb chases and that third person agency in Chickasaw is not explicitly expressed.",
"As demonstrated, inferring a translation model requires iterative reasoning on the level of words, morphemes and syntactic abstractions (classes), or, to put things differently, it requires meta-linguistic awareness.",
"The puzzles from Linguistic Olympiads cover many aspects of language such as phonetics, morphology, syntax and semantics.",
"They are carefully designed by experts according to several key criteria: (1) The puzzles should be self-contained and unambiguous , meaning that no prior knowledge in the foreign language is requires, just the command of one's own native language and some level of meta-linguistic awareness and that a solution is guaranteed; (2) They should require no specialized external knowledge or formal linguistic knowledge, i.e. linguistic terms are either excluded from the instructions that accompany a puzzle or they are explicitly defined; (3) The foreign language used in a puzzle should be from a truly lesser known language family (e.g. Chickasaw, Lakhota, Khmer, Ngoni), such that there is no unfair advantage to participants whose native language is related.",
"We based our data collection efforts on a rich and publicly available database of language puzzles maintained by the organizers of NACLO.",
"2 This resource contains puzzles from IOL and a wide range of local competitions 3 .",
"We only included puzzles written in English (or translated to English) to ensure a quality transcription and to enable error 2 http://tangra.cs.yale.edu/naclobase/ 3 NACLO (North America), OzCLO (Australia), UKLO (UK), Olimp ada Brasileira (Brazil), OLE (Spain), Panini (India), Russian LO, Russian Little Bear, Swedish LO, Polish LO, Estonian LO, Slovenian LO, Bulgarian LO, Netherlands LO and more.",
"analysis.",
"Expert solutions are available for most puzzles; we excluded the rest.",
"In addition to the translation puzzle type shown in Table 1, we also collected matching' puzzles.",
"These are two-step puzzles, in which the participants first align a shuf-fled set of sentences to obtain parallel data, and then translate a set of unseen sentences.",
"We converted these puzzles to the translation puzzle format by referring to the solution files to align the training sentence pairs.",
"Appendix A.1 describes how we selected the puzzles and how we transcribed them into a machine-readable format.",
"The final dataset contains 96 unique puzzles from 81 languages that span 39 different language families from all over the world, as well as two creoles and two artificial languages (see Appendix A.6 for the full list).",
"Some of the large language families have multiple representatives, e.g. there are 13 Indo-European languages, seven Austronesian and six from the Niger-Congo family.",
"But the majority of languages are single representatives of their respective family.",
"This genealogical diversity leads to a great diversity in the linguistic phenomena attested in the data.",
"Some puzzles are designed to explore a specific aspect of the unknown language in isolation, e.g. case markers on demonstrative pronouns in Hungarian (Pudeyev, 2009).",
"In general, however, the correct solution of a puzzle involves processing on the level of syntax, morphology, phonology, and semantics all at once.",
"The foreign languages used in linguistic puzzles are purposefully chosen to demonstrate some interesting linguistic phenomena, not found in English (or in the respective source language of the puzzle) (Bozhanov and Derzhanski, 2013), resulting in a challenging, non-trivial translation process between these diverse languages and English.",
"In this section, we outline some key linguistic properties of the languages found in the dataset, but the list is by no means exhaustive.",
"Syntax: Three common configurations for the order between subject (S), verb (V) and object (O) in a sentence are exemplified in the dataset: SVO, SOV and VSO.",
"In addition to these three, our dataset covers the rather rare OSV word order: see the example in Table 5 from the Australian language Dyirbal (Semenuks, 2012).",
"Morphology: We see examples of highly analytic languages (e.g. Yoruba from West Africa) Language Source sentence Target sentence Other accepted forms 1. Chickasaw Hilha.",
"as well as highly polysythetic ones (e.g. Inuktitut from Canada).",
"Within the synthetic type, we see both agglutinative languages (e.g. Turkish) and in-flectional ones (e.g. Polish).",
"Some specific morphological properties explored in the puzzles are verbal inflection with its many categories concerning tense, aspect and mood, nominal declension and noun class systems.",
"The aforementioned Dyirbal puzzle also exemplifies an interesting classification of nouns, wherein women and dangerous animals and objects are treated as one class, men and other animals constitute another class and a third class captures all remaining nouns.",
"The choice of the articles balan and bagu in Table 5, for example, is guided by this classification.",
"Phonology: A wide range of phonological assimilation processes interplay with the morphological processes described above and obfuscate morpheme boundaries.",
"These can concern voicing, nasality and vowel quality, among other features.",
"As an example of morphological and phonological processes working together, consider the realization of pronomial possession in Australian language Wembawembda (Laughren, 2009).",
"Unlike English, which expresses this feature with pronouns his/her/its , Wembawemba expresses it with a suffix on the noun it modifies, e.g. wutyupuk (his/her/its) stomach'.",
"The form of the suffix, however, depends on the ending of the noun it attaches to and can vary greatly as shown in Table 3. Semantics: Semantics come into play when we consider the compositionality of language and fig-urative speech: the phrase falepak hawei in the train test 0.0 2.5 5.0 7.5 10.0 12.5 15.0 17.5 20.0 4 2 4 2 English Foreign train test 0 10 20 30 40 50 13 8 Figure 1: Box-plots for Left: Word# per language and split, Right: Sentence# per split.",
"Indonesian language Abui, for example, literally translates into pistol's ear, but a more fitting translation would be trigger (Pegusevs, 2017).",
"As a side note, it is important to note that while here we use extensive linguistic terminology to discuss the properties of the languages in our dataset, the high-school students who participate in Linguistic Olympiads need not and may not be familiar with any of the terminology.",
"Their good performance depends on a well-developed metalinguistic awareness, not on formal linguistic training.",
"In total, 2311 parallel instances are transcribed 1559 training and 752 test.",
"63% of the test pairs are in the English foreign direction, while the rest are in the foreign English direction.",
"Statistics concerning the number of words per sentence 4 are shown on the left of Figure 1. The majority of both training and test pairs are fairly short, but length varies considerably.",
"This is due to the fact that some puzzles in the dataset concern the 4 We naively tokenize on space.",
"translation of individual words, some take scope over noun-modifier phrases and some, over entire sentences.",
"English sentences are generally longer (median 4) than their translations (median 2).",
"This is rather intuitive considering the synthetic nature of many of the foreign languages in the dataset, wherein a single long word in the foreign language may translate into 4-5 words on the English side, as in this translation from t ckotoyatih in the Mexican language Zoque to the English only for the tooth .",
"Sentence statistics about the length of the train and test split per problem are shown on the right of Figure 1. Intuitively, train splits are bigger than test splits.",
"However, the number of training instances varies greatly between the puzzles, which is related to a number of factors such as the difficulty and type of the task, as well as the linguistic properties of the foreign language.",
"One property of the data splits in linguistic puzzles, which diverges from the standard paradigm in machine learning, is that the input test data should not be considered held out.",
"On the contrary, in some cases, vocabulary items attested in the input of foreign English test instances may be crucial to the translation of English foreign test instances, and vice versa.",
"So it is only the targets of test instances that should be truly held out.",
"This speci-ficity is not ubiquitous across the puzzles, but it should be accounted for by any approach to their solution, for example by building the system vocabulary over the union of the train and input test data.",
"We attemp to solve these puzzles with models of varying complexity, i.e. from random guessing to state-of-the-art neural machine translation systems.",
"Random Words (RW): Since the vocabularies of source and target languages are quite small, we test what random word picking can accomplish.",
"We simply tokenize the training sentence pairs and then randomly choose a word from the target language's vocabulary for each token in the source sentence.",
"5 FastAlign (FA): We use the translation alignment tool FastAlign (Dyer et al., 2013), to test 5 We don't use frequency of the words, i.e., pick words that occur more often, since they are not that meaningful due to the tininess of the data.",
"whether the puzzles can be solved by early lexical translation models (Brown et al., 1993).",
"Since FA produces alignments for each training pair, we postprocess the output to create a translation dictionary separately for each direction.",
"We then randomly choose from the translation entries for each token in source test sentence.",
"6 Phrase Based Statistical Machine Translation (PBSMT) Since Koehn and Knowles (2017) report that PBSMT models outperform vanilla NMT models in case of small parallel training data, we use PBSMT as one of the baselines.",
"For the foreign English direction, we implement two modelsone using no external mono-lingual English data and one otherwise.",
"We implement three different models based on Transformers (Vaswani et al., 2017) using the implementation of Ott et al. (2019).",
"In the first scenario, we train an off-the-shelf Transformer encoder-decoder model for each direction, referred to as Transformer .",
"Second, we use a strong pretrained English language model, RoBERTa (Liu et al., 2019), to initialize the encoder of the NMT model for English to foreign translation.",
"Finally, for foreign to English translation, we concatenate the translation features extracted from the last Transformer decoder layer, with the language modeling features extracted from RoBERTa (Liu et al., 2019), before mapping the vectors to the output vocabulary.",
"These models are denoted as Trans-former+RoBERTa .",
"We first compile a subset from the puzzles that are diverse by means of languages and contain translation questions in both directions.",
"During tuning, we use the test sentences on these puzzles to validate our models.",
"Since our foreign languages are morphologically rich, we use BPE (Sennrich et al., 2016) to segment words into subwords.",
"For the sentences in the foreign language, we learn the BPE from the training data, while for English sentences we use the already available GPT2-BPE dictionary to exploit English language prior.",
"For convenience, 6 We add all aligned target phrases of the source token to the dictionary.",
"Hence, when one target phrase is seen multiple times, it is more likely to be chosen during inference.",
"before we train the models, we lowercase the sentences, remove certain punctuations, remove pronoun tags and brackets, and augment training data with multiple reference translations.",
"PBSMT: We use Moses (Koehn et al., 2007) with default settings.",
"We employ wikitext-103 corpus to train a 5-gram English LM for the model with access to external data.",
"The other model only uses training sentences for the LM.",
"NMT: Following the suggestions for low-resource NMT systems by Sennrich and Zhang (2019), we use small and few layers and high dropout rates.",
"Similarly we use the smallest available language model (RoBERTa Base) and freeze its parameters during training to reduce the number of trainable parameters.",
"We tune the following hyper-parameters: BPE merge parameter, learning rate and number of epochs.",
"The submissions to Linguistic Olympiads are manually graded by experts.",
"For a full mark, an exact solution has to be provided, as well as a correct and detailed discussion of the underlying processes that led to this solution, e.g., concerning findings about word-order, the function of individual morphemes, etc.",
"Participants are also given partial marks in case of partial solutions or valid discussions.",
"Since we don't have access to expert evaluation, we use readily available automatic machine translation measures.",
"We also note grading of system interpretations or its solution steps as an interesting future research direction.",
"The first is the BLEU (Papineni et al., 2002) score since it is still the standard metric in MT. We use BLEU-2 to match the lower median of sentence lengths we observe across the English and the foreign data (see Fig 1).",
"BLEU matches whole words rather than word pieces, which prevents us from assigning partial credit to subword matches, which could be especially relevant for foreign target languages with rich morphology.",
"We therefore use three additional metrics that operate on the level of word pieces: CharacTER (Wang et al., 2016), ChrF (Popovic, 2016) and ChrF++ (Popovic, 2017).",
"CharacTER is a measure derived from TER (Trans-lation Edit Rate), where edit rate is calculated on character level, whereas shift rate is measured on the word level.",
"It calculates the minimum number of character edits required to adjust a hypothesis, until the reference is matched, normalized by the length of the hypothesis sentence.",
"For easier comparison, we report 1 .",
"0 characT ER scores.",
"ChrF is a simple F-measure reflecting precision and recall of the matching character n-grams.",
"ChrF++ adds word unigrams and bi-grams to the standard ChrF for a higher human correlation score.",
"We experiment with different combinations of character n-grams ( n = 3 , 5 as suggested in Popovic (2016)) and word n-grams ( n = 0 , 1 , 2 as suggested in Popovic (2017)).",
"Finally, we also measure the average exact match of the puzzles, which is calculated as 1 if the prediction and reference sentences match and 0 otherwise.",
"As it is not feasible to report and compare results on all of these metrics (nine in total), we compute the pair-wise Pearson correlation coefficient between them, and average over all pairs to arrive at the following four metrics that show the least correlation with each other: BLEU 2 , CharacTER, ChrF 3 and exact match.",
"We note, however, that of these four, exact match is really the most meaningful metric.",
"Since the sentences in the dataset are rather short and the puzzles are designed to be solvable and unambiguous, an exact match should be attainable.",
"Moreover, as the puzzles in the dataset are of varying difficulty, the average exact match score can be seen as a continuous metric.",
"We report the results for the best models in Fig. 2. The hyperparameter configuration and the development set results are given in Appendix A.4.",
"The maximum exact match score among all results is only 3 .",
"4 %; and the highest scores are consistently achieved by PBSMT models on both directions and dataset splits.",
"The overall results for foreign English are generally higher than English foreign.",
"This may be due to",
"(a) having longer sentences for English;",
"(b) the scores (except from EM) being more suitable for English (even the character-based ones) or",
"(c) the more challenging nature of translation into foreign languages, which needs another dedicated study.",
"English Foreign: Initializing the NMT encoder with RoBERTa has severely worsened the results, compared to standard Transformer model.",
"We believe the main reason is the imbalance between encoder (huge encoder) and the decoder (tiny decoder), that makes training very challenging.",
"The gap between the simplest baselines (RW, 0 10 BLEU 2 1.6 3.5 5.9 6.8 15.1 0 25 50 CTER 16.0 20.3 26.3 22.8 29.1 0 25 50 C h r F 19.9 29.9 35.0 29.1 36.2 Transformers+RoBERTa Random FastAlign Transformer PBSMT 0 25 50 E x a c t M a t c h 0.0 0.0 0.5 0.0 3.0 0 20 BLEU 2 5.5 6.6 17.3 17.9 21.1 0 25 50 CTER 7.2 13.2 26.9 26.6 33.1 0 50 C h r F 19.6 20.4 34.9 35.0 44.1 Random FastAlign Transformers+RoBERTaTransformer PBSMT 0 25 50 E x a c t M a t c h 0.4 0.4 1.4 1.3 3.4 Figure 2: Main results (best viewed with color).",
"FA) and more sophisticated models (Transform-ers, PBSMT) is also considerably small; FA even surpassing Transformers's CTER and ChrF performance.",
"For most of the foreign languages, even when two words are semantically distant, there may still be significant morpheme overlap.",
"These suggest that simple lexical alignment models (includ-ing random assignment) can achieve higher partial matching scores that hints at the unreliability of CTER and ChrF measures for the puzzles.",
"Foreign English: We observe that the gap between the simple and more sophisticated baselines are higher in this direction by means of all measures, as we would expect.",
"Using RoBERTa features in the decoder does not hurt the performance while providing a small increase in EM score compared to standard Transformers.",
"It should be noted that the decoder is still tiny and LM features are only incorporated via a separate linear layer at a very late stage, which prevents the imbalance problem we saw in English foreign.",
"We see similar results for the validation data with the exception that Transformer-based models achieve either higher or the same EM scores than PBSMT while surpassing PBSMT's BLEU-2 scores in foreign English.",
"It supports the findings of Sennrich and Zhang (2019), drawing attention to the importance of hyper-parameter tuning for low-resource NMT models.",
"We perform manual error analysis on the predictions of our top two models for the Chickasaw puzzle presented in Table 1. The predicted translations are shown in Table 4. We also provide the predictions of the simple baselines in Appendix A.5 for",
"comparison.",
"Although the PBSMT model is best on average, we find that for this particular puzzle, the Transformer model did much better.",
"PBSMT had very few hits overall: it correctly chose to include the lexical items hattak and hollo in (1), but the position and inflection of the former is incorrect.",
"In (5) and (6) there are indications of correct lexicon induction, but the overall quality of the translations is very poor both in terms of accuracy and fluency.",
"The Transformer model, on the other hand, predicts fluent translations in both directions.",
"In the direction from English to Chickasaw, we see that the model correctly acquired the relevant morphological patterns: subjects take suffix at , objects take suffix a , and, importantly, that first person agency is expressed through suffix li .",
"The translations are still not perfect, though, due to lexical confusion: the words for cat and dog have been swapped in both (1) and (2), as well as the words for love and chase in (3).",
"In the direction from Chickasaw to English, the Transformer's predictions remain fluent, but they hardly relate to the input.",
"Contrary to the overall results, for this puzzle translation to English appears to be more challenging for the model.",
"Recently, reasoning tasks and datasets that require natural language processing have been introduced, such as common-sense reasoning in the form of pronoun resolution e.g., WSC (Levesque, 2011), multiple-choice question answering e.g., SWAG (Zellers et al., 2018) and ARC (Clark et al., 2018); inference tasks in the form of binary or multi-label classification problems e.g., the GLUE benchmarks (Wang et al., 2018); and visual reasoning in the form of question answering (Zellers et al.,",
"2019) e.g., GQA (Hudson and Manning, 2019).",
"In these tasks, the required level of semantics is mostly limited to single sentences rather than a collection; almost all tasks target English; data is derived from running text and is mostly close-domain.",
"In addition, some require external knowledge bases or high-level knowledge on physical models or experiments as in ARC classified by Boratko et al. (2018), which leaves room for accumulating errors from external parts and complicates the analysis of individual parts like reasoning.",
"Another body of early work on symbolic AI provides a different set of tools to model reasoning such as rule-engines, rule-induction algorithms, logic programs and case-based reasoning models (Kolodner, 1992).",
"However, it is not trivial to represent and model our task in these frameworks, since they mostly require defining primitives, expressions, discrete features and cases.",
"Furthermore, the strength of statistical/neural models has been repeatedly shown to surpass rule-based models.",
"Our goal is to encourage researchers to incorporate reasoning into statistical models, rather than replacing them with symbolic models.",
"The field of NLP has developed deep neural models that can exploit large amounts of data to achieve high scores on downstream tasks.",
"Still, the field lacks models that can perform human-like reasoning and generalization.",
"To mitigate this gap, we draw inspiration from the Linguistic Olympiads that challenge the meta-linguistic and reasoning abilities of high-school students.",
"We create a new benchmark dataset from available Linguistic Puzzles that spans over 81 languages from 39 language families, which is released at https:// ukplab.github.io/PuzzLing-Machines/ .",
"We implement and evaluate simple baselines such as alignment, and state-of-the-art machine translation models with integrated a pretrained English language model.",
"We show that none of the models can perform well on the puzzles, suggesting that we are still far from having systems with meta-linguistic awareness and reasoning capabilities.",
"This work was supported by the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1 and grant GU 798/17-1).",
"We would like to thank Liane Vogel, Marc Simon Uecker and Siddharth Singh Parihar for their great help during the project.",
"We are grateful to Dragomir Radev for his feedback and continuous help with encoding problems encountered during puzzle transcription.",
"We thank Adam Lopez and Ilia Kuznetsov for providing feedback on early drafts of the paper.",
"We thank the area chairs and the senior area chair, whose comments helped us improve the paper."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"objective",
"abstain",
"abstain",
"method",
"other",
"method",
"result",
"other",
"other",
"other",
"other",
"other"
] |
[
"Unlike English letters, Chinese characters have rich and specific meanings.",
"Usually, the meaning of a word can be derived from its constituent characters in some way.",
"Several previous works on syntactic parsing propose to annotate shallow word-internal structures for better utilizing character-level information.",
"This work proposes to model the deep internal structures of Chinese words as dependency trees with 11 labels for distinguishing syntactic relationships.",
"First, based on newly compiled annotation guidelines, we manually annotate a word-internal structure treebank (WIST) consisting of over 30K multi-char words from Chinese Penn Treebank.",
"To guarantee quality, each word is independently annotated by two annotators and inconsistencies are handled by a third senior annotator.",
"Second, we present detailed and interesting analysis on WIST to reveal insights on Chinese word formation.",
"Third, we propose word-internal structure parsing as a new task, and conduct benchmark experiments using a competitive dependency parser.",
"Finally, we present two simple ways to encode word-internal structures, leading to promising gains on the sentence-level syntactic parsing task.",
"Unlike English, Chinese adopts a logographic writing system and contains tens of thousands of distinct characters.",
"Many characters, especially frequently used ones, have rich and specific meanings.",
"However, words, instead of characters, are often considered as the basic unit in processing Chinese texts.",
"We believe the reason may be two-fold.",
"First, usually a character may have many meanings and usages.",
"Word formation process greatly reduces such char-level ambiguity.",
"Second, by definition, Chen Gong and Saihao Huang make equal contributions to this work.",
"(c) Ours: fine-grained structure with 11 labels.",
"words are the minimal units that express a complete semantic concept or play a grammatical role independently (Xia, 2009; Yu et al., 2003).",
"1 Roles played by characters in word formation can be divided into three types.",
"(1) There is a stable and important set of single-char words , such as (you), (of), and most punctuation marks.",
"(2) A character having no specific meaning acts as a part of a single-morpheme word , such as 1 There is still a dispute on the word granularity issue (Gong et al., 2017; Lai et al., 2021). Words are defined as a character sequence that is in tight and steady combination. However, the combination intensity is usually yet vaguely qualified according to co-occurrence frequency. We believe this work may also be potentially useful to this direction. (like) and (fa) (lao) (Pharaoh, transliteration of foreign words).",
"(3) A character corresponds to a morpheme , the smallest meaningful unit in a language, and composes a polysyllabic word with other characters.",
"This work targets multi-char words, and is particularly interested in the third type which most characters belong to.",
"Intuitively, modeling how multiple characters form a word, i.e., the word-formation process, allows us to more effectively represent the meaning of a word via composing the meanings of characters.",
"This is especially helpful for handling rare words, considering that the vocabulary size of characters is much smaller than that of words.",
"In fact, many NLP researchers have tried to utilize char-level word-internal structures for better Chinese understanding.",
"Most related to ours, previous studies on syntactic parsing have proposed to annotate word-internal structures to alleviate the data sparseness problem (Zhang et al., 2014; Li et al., 2018).",
"However, their annotations mainly consider flat and shallow word-internal structure, as shown in Figure",
"1-(a) and",
"(b).",
"Meanwhile, researchers try to make use of character information to learn better word embeddings (Chen et al., 2015; Xu et al., 2016).",
"Without explicitly capturing word-internal structures, these studies have to treat a word as a bag of characters.",
"See Section 2 for more discussion.",
"This paper presents an in-depth study on char-level internal structure of Chinese words.",
"We endeavour to address three questions.",
"(1) What are the word-formation patterns for Chinese words?",
"(2) Can we train a model to predict deep word-internal structures?",
"(3) Is modeling word-internal structures beneficial for word representation learning?",
"For the first question, we propose to use labeled dependency trees to represent word-internal structures, and employ 11 labels to distinguish syntactic roles in word formation.",
"We compile annotation guidelines following the famous textbook of Zhu (1982) on Chinese syntax, and annotate a high-quality word-internal structure treebank (WIST), consisting of 30K words from Penn Chinese Treebank (CTB) (Xia, 2009).",
"We conduct detailed analysis on WIST to gain insights on Chinese word-formation patterns.",
"For the second question, we propose word-internal structure parsing as a new task, and present benchmark experimental results using a competitive open-source dependency parser.",
"For the third question, we investigate two simple ways to encode word-internal structure, i.e., LabelCharLSTM and LabelGCN, and show that using the resulting word representation leads to promising gains on the dependency parsing task.",
"We release WIST at https://github.com/ SUDA-LA/ACL2021-wist , and also provide a demo to parse the internal structure of any input word.",
"Annotating word-internal structure.",
"In the deep learning (DL) era, pretraining techniques are extremely powerful in handling large-scale unlabeled data, including Skip-Gram or CBOW models (Mikolov et al., 2013) for learning context-independent word embedding in the beginning, and the recent ELMo (Peters et al., 2018) or BERT (Devlin et al., 2019) for learning context-aware word representations.",
"Conversely, in the pre-DL era, there exist few (if any) effective methods for utilizing unlabeled data, and statistical models rely on discrete one-hot features, leading to severe data sparseness for many NLP tasks.",
"This directly motivates annotation of word-internal structure, especially for dealing with rare words.",
"Annotation of shallow internal structure of Chinese words was first mentioned in Zhao (2009), largely based on heuristic rules.",
"Li (2011); Li and Zhou (2012) found that many multi-char words could be divided into two subwords, i.e., root and affix.",
"They annotated structures of about 19K words (35% of 54,214) in CTB6.",
"Their experiments showed that subword-level syntactic parsing is superior to word-level parsing.",
"For the three words in Figure 1, their approach is only applicable to the second word, i.e., / .",
"As an extension to Li and Zhou (2012), Zhang et al. (2013, 2014) proposed char-level syntactic parsing by further dividing subwords into chars.",
"As shown in Figure",
"1-(a), for each word, they annotated a binary hierarchical tree, using constituent labels to mark which child constituent is more syntactically important, i.e., left, right, or coordinate.",
"In such way, they could convert a word-level constituent/dependency tree into a char-level one.",
"Similar to Li and Zhou (2012), Cheng et al. (2014) annotated internal structure of synthesis (multi-morpheme) words with four relations, i.e., branching, coordinate, beginning and other parts of a single-morpheme word.",
"In the DL era, three works have studied word-internal structure.",
"Similarly to our work, Li et al. (2018) employed dependency trees to encode word-Label Meaning Example Annotation root word root (come on stage) $ root (come) obj (stage) subj subject (young) (age) subj (small) obj object (rain) (drop) obj (rain) att attribute modifier (overcoat) (large) att (coat) adv adverbial modifier (different) (not) adv (same) cmp complement modifier (put down) (put) cmp (down) coo coordination (context) (above) coo (below) pobj preposition object (expire) (reach) pobj (deadline) adjct adjunct (pass by) (walk) adjct (by) frag fragment (sofa) (sand) frag (send) repet repetition (often) (often) repet (often) Table 1: The 11 labels adopted in our guidelines for distinguishing syntactic roles in word formation.",
"internal structure.",
"As shown in Figure",
"1-(b), for each multi-char word, they first annotate the part-of-speech (POS) tag of each character, and then determine an unlabeled dependency tree, and finally use a POS tag triple as arc label, corresponding to the POS tags of the modifier/head characters and the whole word.",
"However, we argue POS tag triples are only loosely related with word-formation patterns, not to mention the severe difficulty of annotating char-level POS tags in each word.",
"Recently, Lin et al. (2020) extended Zhang et al. (2014) by using an extra label for marking single-morpheme words, and annotated hierarchical internal structure of 53K words from a Chinese-English machine translation (MT) dataset.",
"Li et al. (2019a) annotated the internal structure of words with 4 dependency relations.",
"In summary, we can see that most previous studies adopted quite shallow hierarchical structure.",
"In contrast, this work presents a more in-depth investigation on internal structure of Chinese words and employs 11 labels to distinguish different syntactic roles in word formation, as shown in Figure",
"1-(c).",
"Leveraging character information for better word representation.",
"It has already become a standard way in many NLP tasks to obtain char-aware word representation by applying LSTM or CNN to the character sequence of a word, and concatenate it with word embedding as input, such as named entity recognition (Chiu and Nichols, 2016), dependency parsing (Zhang et al., 2020), and constituent parsing (Gaddy et al., 2018).",
"Another research direction is to leverage character information to obtain better word embeddings.",
"Chen et al. (2015) extended the CBOW model and proposed to jointly learn character and word embeddings.",
"Based on Chen et al. (2015), Yu et al. (2017) proposed to jointly learn embeddings of words, characters, and sub-characters.",
"2 However, both studies assume that characters contribute equally to the meaning of a word and directly average embeddings of all characters.",
"To address this, Xu et al. (2016) extended Chen et al. (2015) and proposed a cross-lingual approach to distinguish contribution of characters for a word.",
"The idea is to translate Chinese words and characters into English words, and use similarities between corresponding English word embeddings for contribution measurement.",
"Instead of treating a word as a bag of characters, we experiment with two simple ways to obtain structure-aware word representations.",
"Meanwhile, enhancing their approach with explicit word-internal structure could be also very interesting.",
"Utilizing word-internal structure.",
"Word-internal structure have been explored in various NLP tasks.",
"Several works propose to learn word-internal structure, word segmentation, POS tagging and parsing jointly (Zhang et al., 2013, 2014; Li et al., 2018), demonstrating the effectiveness of word-internal structure in helping downstream tasks.",
"Cheng et al. (2015) attempt to convert words into fine-grained subwords according to the 2 Following this direction, studies tried to explore more character information for better Chinese word representation, such as strokes (Cao et al., 2018) and ideographic shape (Sun et al., 2019).",
"internal structure of words for better dealing with unknown words during word segmentation.",
"Lin et al. (2020) propose to integrate the representation of word-internal structure into the input of neural machine translation model, leading to improved translation performance.",
"In this section, we describe in detail the annotation process of WIST.",
"As shown in Figure",
"1-(c), we adopt dependency trees for representing word-internal structure.",
"The reason is two-fold.",
"First, word-formation process correlates with syntax in different ways depending on language type (Aikhenvald, 2007).",
"Such correlation is especially close for Chinese due to its lack of morphological inflections.",
"In particular, Zhu (1982) presented thorough investigation on Chinese word formation mainly from a syntactic view.",
"Second, as a grammar formalism, dependency tree structure has been widely adopted for capturing sentence-level syntax due to its simplicity and flexibility in representing relations.",
"Meanwhile, its computational modeling is also developed quite well.",
"Annotation guidelines.",
"After several months' survey, we have compiled systematic and detailed guidelines for word-internal structure annotation.",
"Our guidelines are mainly based on the famous textbook on Chinese grammar of Zhu (1982).",
"We intensively studied all previous works on word-internal structure annotation, which are discussed in Section 2.",
"We also find that it is quite beneficial to be familiar with guidelines developed by previous annotation projects for Chinese word segmentation (Xia, 2009; Yu et al., 2003).",
"Our guidelines contain 11 relations specifically designed to capture the internal dependency syntax for Chinese words, as shown in Table 1.",
"We derive most of the dependency relations by referring to guidelines of three popular Chinese dependency treebanks, i.e., UD, Harbin Institute Technology Chinese Dependency Treebank (HIT-CDT) (Liu et al., 2006), and Chinese Open Dependency Treebank (CODT) (Li et al., 2019b).",
"We give very detailed illustrations with examples in our 30-page guidelines to ensure annotation consistency and quality.",
"Our guidelines are also gradually improved according to the feedback from the annotators.",
"with Chinese syntax, and select 6 capable annotators with a lot of data annotation experience as expert annotators to handle inconsistent submissions.",
"All the annotators (including expert annotators) were paid for their work .",
"The salary is determined by both quantity and quality.",
"Besides, we give extra bonus to the annotators with high accuracy.",
"The average salary of the annotators is 30 RMB per hour.",
"All annotators are trained for several hours to be familiar with our guidelines and the usage of annotation tool.",
"We apply strict double annotation in order to guarantee quality.",
"Each word is randomly assigned to two annotators.",
"Two identical submissions are directly used as the final answer.",
"Otherwise, a third expert annotator is asked to decide the final answer after analyzing the two inconsistent annotations.",
"Given an annotation task, all its POS tags 3 of the focused word in CTB5 are presented to the annotator, in order to explore multiple internal structures for one word.",
"In that case, the annotator can click a checkbox to inform us for further process.",
"Please note that the manually annotated POS tags in CTB5 are converted into Universal Dependencies (UD) 4 POS tags based on predefined mapping rules, since the original CTB5 POS tags are too fine-grained (33 tags) and difficult for annotators to understand.",
"The interface also presents several example sentences to improve annotation efficiency.",
"We strongly encourage annotators to look up difficult words or characters in electronic dictionaries.",
"5 3 In CTB5, a word may be annotated with different POS tags under different contexts.",
"For example, (develop-ment) is annotated as NN (noun) in the context (boost the economic development ), whereas (develop) is annotated as (VV) verb in the context (develop steadily).",
"Therefore, when annotating the word (develop/development ), we present both noun and verb to the annotators for",
"reference. 4 universaldependencies.org/u/pos/ 5",
"Eg., hanyu.baidu.com ; xh.5156edu.com/ Data selection.",
"Following previous works, we select multi-char words from CTB5 for annotation.",
"Table 2 shows word distribution regarding character numbers.",
"We can see that only 5.6% of words in the vocabulary contain one char, but they account for nearly half (48%) token occurrences in the text.",
"The percent of words with two characters is high in both vocabulary (58.3) and text (44.1).",
"We discard words containing special symbols such as English letters.",
"Finally, we have annotated 32,954 multi-char words with their internal structure, containing 83,999 dependencies (2.5 characters per word).",
"Inter-annotator consistency.",
"As discussed earlier, each word is labeled by two annotators, and inconsistent submissions are handled by a third senior annotator for obtaining a final answer.",
"The averaged inter-annotator consistency ratio is 83.0 dependency-wise, i.e., the percent of characters receiving the same head and label from two annotators, and 75.8 word-wise, i.e., the percent of words receiving the same whole trees.",
"If we do not consider labels, the unlabeled consistency ratios increase to 87.5 dependency-wise and 85.1 word-wise.",
"Although it may be a factor that most annotators are inexperienced in this new annotation task, such low consistency ratios indicate that annotating word-internal structure is quite challenging, especially when it comes to distinguishing syntactic roles.",
"Meanwhile, this also demonstrates the importance of strict double annotation, considering that nearly a quarter of words are inconsistent and require handling by senior annotators.",
"Annotation accuracy.",
"We calculate annotation accuracy by comparing all submissions (as denominator) from annotators against the final answers in WIST.",
"Please note that each word is double annotated.",
"The overall dependency-wise accuracy for all annotators is 90.9, and word-wise is 86.9.",
"If not considering labels, the overall unlabeled accuracy increases to 93.4 and 92.1, dependencyand word-wise respectively.",
"The first major row in Table 3 shows the label-wise annotation accuracy.",
"We divide characters in WIST into 11 groups according to their final-answer labels, and then calculate the percent of correct submissions for each group.",
"The highest accuracy is obtained on repet, since its pattern is quite regular.",
"Determining the root character also seems relatively easy.",
"The lowest accuracy is 62.0 on subj and 48.2 on pobj.",
"Comparing unlabeled versus labeled accuracy, the gap is quite large.",
"The extreme case is pobj.",
"Annotators usually can correctly decide the head (84.5%), but very unlikely choose its true label pobj (48.2%).",
"Similarly, accuracy drops by 24.9 for subj.",
"We give more discussions on annotation difficulties below.",
"Label distribution.",
"The third major row in Table 3 shows distribution of different labels in WIST.",
"From the percentage of root (39.2%), we can infer that one word contains 2.5 characters on average.",
"The overall percent for att is 29.1, almost half of the remaining labels, meaning that att appears once every 1.45 words.",
"This reveals that attribute modification is the most dominated pattern in word formation.",
"Coordination structure (coo) takes the second place with 10.2%.",
"The third most used pattern is fragment (frag) with 5.7%.",
"We give more discussion on frag below.",
"Besides the overall distribution, the third major row in Table 3 gives label distribution per POS tag.",
"For clarity, we give the full name of each POS tag (UD, converted from the fine-grained CTB tags) in Table 3, and it means the POS tag of the focused word.",
"If a word has multiple POS tags, then the same word-internal structure is used for each tag.",
"For example, if a word (expand) coo (ex-pand) has two tags, i.e., Noun and Verb, then the number of coo is added by one for both Noun and Verb.",
"Moreover, a label is repeatedly counted if it appears several times in the same word.",
"Due to space limitation, we only present high-frequency POS tags, with percentage shown in parenthesis.",
"Please note that we adopt a coarse-grained POS tag set for clarity.",
"We can see that nouns are mostly formed with att (33.8%) and coo (11.5%), whereas verbs are with coo/obj/adv/cmp in the descending order.",
"Proper nouns are evenly dominated by frag (29.6%) and att (28.4%).",
"It is also obvious that proper nouns tend to be longer, consisting of 2.7 characters according to its root percentage.",
"Numerals are mainly composed via att (75.7%) and consist of 5.0 character on average.",
"Multiple structures for one word?",
"Many words have multiple meanings.",
"Then the question is: how many words really have multiple internal structures?",
"As illustrated in Section 3, we show all POS tags to annotators in order to obtain all internal structures of an ambiguous word.",
"However, in annotated WIST, we find there are only 103 such words with multiple internal structures, accounting for about 0.3% of all annotated words, and 2.7% of those having multiple POS tags.",
"As a typical example, have two structures.",
"As a verb, it means subdue and has (control) cmp (tamely).",
"As a noun, it means uniform and has (regulated) att (cloth).",
"This low percentage reveals that most Chinese words actually have very steady internal structure.",
"They have multiple POS tags, mainly because they are used for different syntactic functions without morphological inflections, such as as verb (develop) or noun (development).",
"More on frag.",
"The frag label is designed to handle all words that have no internal structure due to the lack of semantic composition.",
"From Table 3, we can see that frag accounts for 5.7% of all labels.",
"In order to gain more insights, we collect all 3,528 words containing frag in WIST, and randomly sample 100 words for investigation.",
"Following the brief discussion in Section 1, we divide these words into three types, and find that 81 words are proper nouns (such as person name); 16 correspond to transliteration of foreign words; and 3 are single-morpheme words.",
"High-order structure distribution.",
"To gain more insights on complex word-formation structure, we focus on all three-char words.",
"We find that the root usually lies in the third character by 74.6%, and the percentage for the second and first characters is only 15.3 and 10.1 respectively.",
"Looking more closely, we find the following four dominated structures.",
"Difficulties in annotation.",
"Since it is difficult to capture the patterns on unlabeled-dependency inconsistencies, we focus on confusion patterns in label annotation.",
"Among all characters receiving the same head but different labels from two annotators, 20.1% correspond to { att, adv } confusion due to the ambiguity of the head character being a verb or a noun.",
"The second confusion pattern is { coo,frag } , with a proportion of 18.6, which are mainly from proper nouns.",
"According to our guidelines, if the meaning of a proper noun is compounding, annotators have to annotate its real internal structures rather than using frag.",
"It is also very difficult to distinguish obj and pobj, since the boundary between prepositions and verbs is vague in Chinese.",
"With annotated WIST, we try to address the second question: can we train a model to predict word-internal structure?",
"We adapt the Biaffine parser proposed by Dozat and Manning (2017), a widely . . . x i . . . x j . . . BiLSTM 3 MLP h MLP d Biane MLP h (cid:48) MLP d (cid:48) Bianes h i h j r hi r dj r h (cid:48) i r d (cid:48) j score( i j ) score( i l j ) Figure 2: The basic architecture of Biaffine Parser.",
"used sentence-level dependency parser, for this purpose, and present results and analysis.",
"We adopt the SuPar implementation released by Zhang et al. (2020).",
"6 As a graph-based parser, Biaffine parser casts a tree parsing task as searching for a maximum-scoring tree from a fully-connected graph, with nodes corresponding to characters in our case.",
"As shown in Figure 2, it adopts standard encoder-decoder architecture, consisting of the following components.",
"Input layer.",
"Given an input sequence, each item is represented as a dense vector x i .",
"For word-internal structure parsing, an item corresponds to a character, and we use char embedding.",
"BiLSTM encoder.",
"Then, a three-layer BiLSTM is applied to obtain context-aware representations.",
"We denote the hidden vector of the top-layer BiLSTM for the i-th position as h i .",
"Biaffine scorer.",
"Two separate MLPs are applied to each h i , resulting in two lower-dimensional vectors r hi (as head) and r di (as dependent).",
"Then the score of a dependency i j is obtained via a biaffine attention over r hi and r dj .",
"Scoring of labeled dependencies such as i l j is analogous.",
"Decoder.",
"With the scores of all dependencies, we adopt the first-order algorithm of Eisner (2000) to find the optimal unlabeled dependency tree, and then independently decide the highest-scoring label for each arc. 6 https://github.com/yzhangcs/parser Dev Test UAS LAS UAS LAS CM Random 81.18 76.15 80.63 75.58 65.13 Pretrained 82.42 77.30 81.64 76.98 67.09 +1.24 +1.15 +1.01 +1.40 +1.96 BERT 88.27 85.18 88.33 84.98 77.72 +5.85 +7.88 +6.69 +8.00 +10.63 Table 4: Results of word-internal structure parsing using different character representations.",
"Training loss.",
"During training, the parser computes two independent cross-entropy losses for each position, i.e., maximizing the probability of its correct head and the correct label between them.",
"Data.",
"We randomly split all words in WIST into three parts, 2,500/5,000 as development/test data and remaining as training data.",
"Hyperparameters.",
"We set the dimension of char embeddings to 100.",
"We obtain pre-trained character embeddings by training word2vec on Chinese Gigaword Third Edition.",
"In order to see effect of contextualized character representations, we apply BERT (Devlin et al., 2019) 7 to each word as a char sequence.",
"The output vectors of the top four layers are concatenated and reduced into a dimension of 100 via an MLP.",
"For other hyper-parameters, we keep the default configuration in SuPar.",
"Evaluation metrics.",
"We adopt the standard unlabeled and labeled attachment score (UAS/LAS), i.e., the percent of characters that receives the correct head (and label).",
"The complete match (CM) is the percent of words having correct whole trees.",
"Table 4 shows the main results under different char representations.",
"It is obvious that using randomly initialized char embeddings, the parser can only reach about 76 in LAS.",
"This shows that parsing word-internal structure is very challenging without using extra resources.",
"When we pretrain char embeddings on large-scale labeled data, the performance can be consistently improved by over 1 point in both UAS/LAS, and nearly 2 points in CM.",
"Finally, employing the contextualized character rep-7 BERT-base-Chinese https://github.com/ google-research/bert resentations dramatically improves performance further by about 6/8/10 points in UAS/LAS/CM.",
"However, even with BERT, model performance still lags behind averaged human performance (90.9 in LAS) by large margin.",
"Our experienced annotators can even reach more than 94.",
"Our experience in manual annotation points out two possible directions to enhance the model: 1) making use of sentence-level contextual information; 2) leveraging the meanings in dictionaries, usually in the form of explanation or example sentences.",
"We leave them for future exploration.",
"Analysis on label-wise accuracy.",
"The second major row in Table 3 reports accuracy regarding different labels for the model with BERT.",
"The model achieves the highest accuracy on att and root, possibly because the two labels take very large proportion in the data for sufficient model training.",
"By contrast, pobj and subj have the lowest accuracy, and are difficult for models as well as discussed in Section 3.",
"This leads to another observation that model accuracy is roughly correlated with annotation accuracy, implying the difficulties for human and model are usually consistent.",
"This section presents a preliminary study on utilizing word-internal structure, aiming to address the third question: is modeling word-internal structures beneficial for word representation learning?",
"We use sentence-level dependency parsing as the focusing task (Kubler et al., 2009), mainly considering resemblance in tree structure representation and close relatedness between the two tasks.",
"Given an input sentence w 0 w 1 ...w m , the goal of dependency parsing is to find an optimal dependency tree for the sentence.",
"Again, we adopt SuPar (Zhang et al., 2020) for implementation of Biaffine parser (Dozat and Manning, 2017) as our basic parser.",
"The basic parser applys a BiLSTM over character sequence to obtain word representation.",
"In this part, we propose two simple alternative methods to encode internal structure shown in Figure",
"1-(c).",
"Basic CharLSTM method.",
"For each word, the basic Biaffine parser uses the concatenation of word embeddings and CharLSTM outputs to represent each word in the input layer: x i = emb ( w i ) CharLSTM ( w i ) CharLSTM ( w i ) BiLSTM ( ..., z k , ... ) z k = emb ( c i,k ) (2) where c i,k is the k-th character of w i .",
"The final word representation from CharLSTM ( w i ) is obtained by concatenating two last-timestamp hidden output vectors of a one-layer BiLSTM.",
"LabelCharLSTM Method.",
"Considering that the word is usually very short and a bare label itself provides rich syntax information, we propose a straightforward extension to CharLSTM, named as LabelCharLSTM, via minor modification.",
"LabelGCN method.",
"Previous work show that GCN is very effective in encoding syntactic trees (Marcheggiani and Titov, 2017; Zhang et al., 2018).",
"We follow the implementation of Zhang et al. (2018) and use a two-layer GCN as a more sophisticated way.",
"In order to utilize labels, we extend vanilla GCN to have the same input with LabelCharLSTM, i.e., z k .",
"We obtain the final word representation by performing average pooling over the output vectors of the top-layer GCN.",
"Settings.",
"Following Chen and Manning (2014), we conduct experiments on CTB5 with the same data split (16,091/803/1,910 sentences) and constituent-to-dependency conversion.",
"Both char/label embeddings are randomly initialized and have the same dimension of 50.",
"For the parsers using gold-standard POS tags, we randomly initialized the POS tagging embeddings and set the dimension to 50.",
"For other hyperparameters, we adopt the default configuration of SuPar, including the pre-trained word embeddings.",
"For multi-char words without annotated internal structure, we use the automatic outputs from the trained parser with BERT in Section 5, so that every word corresponds to a single structure.",
"We use word-wise UAS/LAS/CM for evaluation, and punctuation is excluded in all metrics.",
"Main results.",
"Table 5 shows the parsing performance.",
"We can see that both LabelCharLSTM and LabelGCN substantially outperform the basic UAS LAS CM Basic CharLSTM 88.31 85.96 32.04 LabelCharLSTM 88.78 86.51 33.19 LabelGCN 89.02 86.76 32.93 w/o label 88.66 86.28 32.20 Table 5: Parsing performance on CTB5-test.",
"CharLSTM method.",
"LabelGCN achieves the best performance on UAS and LAS, with a gain of 0.71 and 0.80 respectively.",
"The fourth row reports performance of LabelGCN without using label embedding, leading to consistent accuracy drop, demonstrating the usefulness of rich labels, which is a key contribution of this work, despite the extra annotation effort.",
"Analysis on rare words.",
"To gain more insights on how word-internal structure helps word representation learning, we divide the words in CTB5-test into several groups according to their frequency in CTB5-train, and report fine-grained accuracy in Table 6.",
"We can see that the overall performance gain is mostly contributed by improvement over rare words with low frequency or totally unknown.",
"This verifies that word-internal structures can help the model to better represent rare words.",
"Results with gold-standard POS tags.",
"As suggested by a reviewer, we train our parser with gold-standard POS tags by concatenating the original input (i.e., x i in Equation 2) with gold-standard POS tag embeddings, in order to compare with previous works.",
"Table 7 shows the results.",
"Compared with the Basic CharLSTM results in Table 5, using gold-standard POS tags as extra features for the Basic CharLSTM leads to substantial improvements by 2.80 and 3.95 in UAS and LAS respectively, and outperforms the previous works as presented in Table 7, showing that the basic CharLSTM can be served as a strong baseline model.",
"Compared with the Basic CharLSTM, utilizing word-internal structure with LabelCharLSTM or LabelGCN achieves consistently better performance by 0.24 and 0.25 respectively in LAS in UAS LAS Ma and Hovy (2017) 89.05 87.74 Dozat and Manning (2017) 89.30 88.23 Ma et al. (2018) 90.59 89.29 Basic CharLSTM 91.11 89.91 LabelCharLSTM 91.31 90.15 LabelGCN 91.31 90.16 Table 7: Parsing performance with gold-standard POS tags on CTB5-test.",
"the scenario of using gold-standard POS tags.",
"Besides the strong baseline, another reason that the improvement brings by the internal-word structure is slight when using gold-standard POS tags is that a part of linguistic information in the POS tags and the word-internal structures may be overlapping.",
"This paper presents a thorough study on internal structures of Chinese words.",
"First, we annotate a high-quality word-internal structure treebank covering over 30K words in CTB5, named as WIST.",
"Second, we perform analysis on WIST from different perspectives and draw many interesting find-ings on Chinese word-formation patterns.",
"Third, we propose word-internal structure as a new task, and present benchmark results using a popular dependency parser.",
"Finally, we conduct preliminary experiments with two simple methods, i.e., LabelCharLSTM and LabelGCN, to encode word-internal structure as extra word representation, and find promising performance gains on the sentence-level dependency parsing task.",
"Analysis shows that the rich dependency labels adopted in WIST play a key role, and word-internal structure is most beneficial for rare word representation.",
"The authors would like to thank the anonymous reviewers for the helpful comments.",
"We are very grateful to Guodong Zhou for the inspiring discussions and suggestions on Chinese word-internal structures.",
"We thank Kaihua Lu for building the annotation system, and Mingyue Zhou, Haoping Yang, and Yahui Liu for their help in compiling annotation guidelines, and all the annotators for their hard work in data annotation.",
"This work is supported by the National Key Research and Development Program of China under Grant No. 2017YFB1002104."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"result",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"method",
"method",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"method",
"method",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"objective",
"result",
"objective",
"result",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"In this work, we present a detailed analysis of how accent information is reflected in the internal representation of speech in an end-to-end automatic speech recognition (ASR) system.",
"We use a state-of-the-art end-to-end ASR system, comprising convolutional and recurrent layers, that is trained on a large amount of US-accented English speech and evaluate the model on speech samples from seven different English accents.",
"We examine the effects of accent on the internal representation using three main probing techniques:",
"a) Gradient-based explanation methods,",
"b) Information-theoretic measures, and",
"c) Outputs of accent and phone classifiers.",
"We find different accents exhibiting similar trends irrespective of the probing technique used.",
"We also find that most accent information is encoded within the first recurrent layer, which is suggestive of how one could adapt such an end-to-end model to learn representations that are invariant to accents.",
"Traditional automatic speech recognition (ASR) systems, consisting of independently-trained acoustic, pronunciation and language models, are increasingly being replaced by end-to-end ASR systems (Chiu et al., 2018; Hori et al., 2017).",
"An end-to-end ASR system refers to a single model that subsumes all the traditional ASR components and directly translates a speech utterance into a sequence of graphemes.",
"Such models benefit from jointly training acoustic and language models and eliminating the need for a pronunciation dictionary.",
"While end-to-end ASR models have clear merits and are elegant in their formulation, they tend to be opaque in their predictions and difficult to interpret.",
"In order to understand better what is encoded in the layers of an end-to-end ASR system, prior work has explored the use of phone probes (classifiers) to analyze the phonetic content of representations at each layer (Belinkov and Glass, 2017; Belinkov et al., 2019).",
"This analysis was restricted to a single accent of English.",
"In this paper, we work with multiple accents of English and propose a number of different tools (other than phone probes) to investigate how accent information is encoded and propagated within an end-to-end ASR system.",
"Why accented speech?",
"We have witnessed impressive strides in ASR performance in the last few years.",
"However, recognizing heavily accented speech still remains a challenge for state-of-the-art ASR systems.",
"An end-to-end ASR model trained on a standard speech accent significantly underper-forms when confronted with a new speech accent.",
"To shed more light on why this happens, a systematic investigation of how such models behave when evaluated on accented speech might be useful.",
"The insights from such an investigation might also come in handy when trying to adapt end-to-end neural architectures to be more accent-agnostic.",
"1. How do the gradients of an end-to-end ASR model behave when subject to varying accents?",
"2. How do we directly measure the amount of accent information encoded within hidden representations of an end-to-end model?",
"3. How do accents impact phone accuracy across different layers in an end-to-end model?",
"While the analyses of black-box models in computer vision and natural language processing have received a considerable amount of attention, prior work on the analysis of end-to-end ASR models are notably few in number.",
"With presenting various analysis techniques that are applicable to speech, 3740 aa a e a h a o a w a y b c h d d h e h e r e y f g hh i h i y j h k l m n n g oo v o w o y p r s s h s il t t h uh u w v w y z z h Phones 0.00 0.02 0.04 0.06 0.08 F r e q u e n c y",
"(b) Phonetic duration histogram Figure 1: Phonetic coverage and duration histograms for the US accent.",
"we hope this work can serve as a starting point for further studies and spur more analysis-driven investigations into end-to-end ASR models.",
"The code used in our work is publicly available.",
"1 2 Experimental Setup In this section, we first introduce the dataset of accented speech samples used in our experiments, along with details of the phone-level alignments that were necessary for our subsequent analyses.",
"We also provide a detailed description of the specific end-to-end ASR model that we use in this work, along with important implementation details.",
"We extracted accented speech samples from the Mozilla Common Voice speech corpus (Mozilla).",
"The Voxforge corpus (Voxforge.org) was another potential source for accented speech samples.",
"However, we preferred the Mozilla corpus as the dataset is relatively cleaner, has larger diversity in speech across accents and more importantly contains the same content rendered in different speech accents (which we exploited in our experimental analy-sis).",
"We considered accented speech samples from seven different English accents: African, Australian, Canadian, England, Indian, Scotland and US.",
"These were chosen to span the gamut of accents in terms of how they differ from the primary accent that was used to train the ASR system (US).",
"US and Canadian serve as native accents; African, Australian and England accents are sufficiently different from the native accents while Indian and Scotland accents vary substantially.",
"We created a dataset of utterances in each accent using the following heuristic.",
"First, we chose sentences that appeared in speech samples corresponding to five or more accents (including US).",
"For African and Scotland accents that contained 1 https://github.com/archiki/ ASR-Accent-Analysis/ very few speech samples overall, we chose transcripts that had an utterance with the same text spoken by a US-accented speaker.",
"This finally led to 3500 samples being chosen for each accent containing text that appeared in at least two accents, at most six accents and 3 .",
"24 different accents on average.",
"We chose the utterances to largely overlap in text so that differences in ASR performance could be mostly attributed to acoustic differences and not language model-related differences.",
"Alignments: For our empirical investigation, we require phone alignments for all the accented speech samples.",
"We used an existing Kaldi-based forced aligner, gentle 2 , to align the speech samples.",
"The aligner uses the CMU dictionary and accommodates multiple pronunciations for a word which is important for accented speech.",
"Although the aligner was trained on US-accented speech, we found the alignments assigned to various accented speech samples to be fairly robust as determined by a manual check of the alignments for a random set of Indian-accented utterances.",
"The aligner failed to produce outputs on samples of poor quality; these samples were omitted from our analysis.",
"Figure",
"1(a) shows the coverage across phones for the US-accented speech samples and Figure",
"1(b) shows the total duration of phones for US-accented speech samples.",
"Phone coverage and phone duration distributions for all the other accents are almost identical in shape to the US accent.",
"Aggregate plots visualizing these distributions across the remaining accents are shown in Appendix A. 2.2 End-to-end ASR: Deep Speech 2 We chose DeepSpeech2 (Amodei et al., 2016) as our end-to-end ASR model.",
"This is a widely-used architecture that directly maps speech features to graphemes and is trained using the Connectionist Temporal Classification (CTC) loss (Graves et al., 2 Available at https://github.com/ lowerquality/gentle 3741 2006).",
"The input to the model is a sequence of frequency magnitude spectrograms (henceforth referred to as SPEC), obtained using a 20ms Hamming window and a stride of 10ms.",
"With a sampling rate of 16kHz, we end up with 161-dimensional input features.",
"The first two layers are 2D-convolutions with 32 kernels at each layer with sizes 41 11 and 21 11 , respectively.",
"Both convolutional layers have a stride of 2 in the frequency domain while the first layer and second layer have a stride of 2 and 1, respectively, in the time domain.",
"This setting results in 1312 features per time frame after the second convolutional layer which we will henceforth refer to as CONV.",
"The convolutional layers are followed by 5 bidirectional LSTMs (Hochreiter and Schmidhuber, 1997), each with a hidden state size of 1024 dimensions.",
"These layers are henceforth referred to as RNN 0 , RNN 1 , RNN 2 , RNN 3 and RNN 4 .",
"The implementation of this model is adapted from Naren (2016).",
"This model is trained on 960 hours of US-accented speech obtained from the Librispeech corpus (Panayotov et al., 2015).",
"All subsequent experiments use this pretrained model, which we will refer to as DS2.",
"Table 1 shows the performance of DS2 when evaluated on speech samples from different accents.",
"Both word error rates (WER) and character error rates (CER) on the test sets are reported for each accent.",
"As expected, US and Canadian-accented samples perform best.",
"3 DS2 has the most trouble recognizing Indian-accented samples, incurring a high WER of 49 .",
"1% , followed by Scotland-accented samples with a WER of 36 .",
"7% .",
"The next three sections are grouped based on the probing techniques we adopt to examine the effect of accents on the internal representations learned by the model: Gradient-based analysis of the model (3).",
"3 US-accented samples are drawn from various parts of the US and are more diverse in accent, compared to the Canadianaccented samples.",
"We suspect this could be why US underper-forms compared to Canada.",
"Gradient-based techniques have been widely adopted as an explainability tool in both computer vision and NLP applications.",
"In this section, we adapt some of these techniques to be used with speech and derive insights based on how accents modify gradient behavior.",
"A simple gradient-based explanation method considers the gradient of the output f j from a neural network (where j denotes a target class) with respect to an input x i (where i refers to the i th input time-step used to index the input sequence x ):",
"Here, grad( j, i, x ) serves as an approximate measure of how much x i contributes to f j (Simonyan et al., 2014).",
"For speech as input, x i would be an acoustic feature vector (e.g. spectral features).",
"Thus, grad( j, i, x ) would be a vector of element-wise gradients with respect to x i .",
"For each x i , we use the L2 norm to reduce the gradient vectors to scalars: a i,j = (cid:107) grad( j, i, x ) (cid:107) 2 .",
"We refer to a i,j as an attribution .",
"We note here that instead of using the L2 norm, one could use the dot product of the gradient grad( j, i, x ) and the input x i as an alternate gradient-based method (Denil et al., 2014).",
"For our task, this attribution method seemed less suitable (compared to computing the L2 norm) as dot products would have the undesirable effect of being sensitive to prosodic variations in speech and speech sounds like fricatives or stop onsets which have sparse spectral distributions.",
"(We refer interested readers to Appendix C for visualizations using the dot product-based attribution method.)",
"We compute character-level attribution from the DS2 system using the following two-step approach.",
"First, we consider the output character with the highest softmax probability at each output time-step. Next, we consider only non-blank characters produced as output and sum the gradients over all contiguous repetitions of a character (that would be reduced to a single character by the CTC algorithm) 4 . Word-level attribution can be similarly computed by summing the character-level attributions corresponding to each character that makes up the word.",
"Figure 2 illustrates how attribution changes for a specific word, FIRE\", across different accents. We consider speech samples from all seven accents corresponding to the same underlying reference text, The burning fire had been extinguished\". Each subplot also shows the phonetic alignment of the text on its x-axis. We observe that the attributions for FIRE\" are fairly well-aligned with the underlying speech in the US and Canadian samples; the attributions appear to deviate more in their alignments",
"4 The CTC algorithm produces output probabilities for observing a blank, signifying no label.",
"Excluding the blank symbol from our analysis helped with reducing gradient computation time.",
"We also confirmed that including the blank symbol did not change the results from our analysis.",
"To quantify the differences in alignment across accents suggested by the visualization in Figure 2, we measure the alignment accuracy using the earth mover's distance (EMD).",
"For each accent, we compute the EMD between two distributions, one derived from the attributions and the other from the reference phonetic alignment.",
"The EMD between two distributions p and q over the set of frames (or rather, frame sequence numbers) T is defined as EMD( p, q ) = inf Z (cid:88) i,j T | i j | Z ( i, j ) where the infimum is over all transportation func-tions Z : T T R + such that (cid:80) j TZ ( i, j ) = p ( i ) (for all i ) and (cid:80) i TZ ( i, j ) = q ( j ) (for all j ).",
"Given a correctly predicted word, we define the distribution p as the uniform distribution over the frames that are aligned with the word, and q as the distribution obtained by normalizing the word-level attribution of the word in the utterance.",
"For each accent, we sample a set of words that were correctly predicted (equally many for all accents) and compute the average of the EMD between the dis-3743 Accent EMD C 0 C 1 C 2 Overall US 43.54 42.42 39.55 42.6 Canada 42.17 39.68 40.47 40.94 Indian 53.07 47.47 49.63 50.34 African 46.63 42.61 41.05 44.3 England 47.0 41.52 43.44 44.3 Scotland 45.34 41.38 41.65 43.26 Australian 46.91 44.24 47.45 45.87 Table 2: EMD trends quantifying the difference in attributions across accents.",
"This average serves as an alignment accuracy measure for the accent.",
"For the EMD analysis, we restrict ourselves to a set of 380 sentences that have corresponding speech utterances in all accents.",
"This way, the content is mostly identical across all accents.",
"Table 2 shows the averaged EMD values for each accent computed across all correctly predicted words.",
"Larger EMD values signify poorer alignments.",
"The overall values clearly show that the alignments from US and Canadian-accented samples are most accurate and the alignments from the Indian-accented samples are most inaccurate.",
"We also cluster the words based on the number of phones in each word, with C 0 , C 1 and C 2 referring to words with {1-2}, 3 and {4-5} phones, respectively.",
"As expected, words in C 0 , being smallest in size, deviate most from the reference distribution and incur larger EMD values (compared to C 1 and C 2 ).",
"The overall trend across accents remains the same for each cluster.",
"Another gradient-based analysis we carried out is to check if accents affected how, at various levels, the representation at each frame is influenced by the signal at the corresponding input frame.",
"One can expect that, in layers higher up, the representation at each frame mixes information from more and more input frames.",
"However, it is reasonable to expect that most of the contribution to the representation should still come from the frames in a window corresponding to the same phone .",
"(We examine the contribution of neighboring phones in Appendix B)",
"As detailed below, we devise quantities that measure the extent of information mixing and apply them to our systems.",
"Not surprisingly, as shown below, we do observe that mixing increases as one CONV RNN_0 RNN_1 RNN_2 RNN_3 RNN_4 Layers 0.0 0.2 0.4 0.6 0.8 P hone F o c u s TIMITcanadausenglandaustraliascotlandafricanindian Figure 3: Comparison of phone focus across layers for various accents.",
"climbs through the layers.",
"But somewhat surprisingly, we find that there is little variation of these trends across accents.",
"This suggests that information mixing is largely dictated by the network itself, rather than by the details of the data.",
"The quantities we use to measure information mixing are inspired by Brunner et al. (2020).",
"We define the contribution of the i th input frame x i to the final output of the network f via the representation e lj in a given layer l corresponding to frame j as: g li,j = (cid:13)(cid:13)(cid:13)(cid:13) d (cid:88) k =1 (cid:18) f e lj ( k ) (cid:19)(cid:18) e lj ( k ) x i (cid:19)(cid:13)(cid:13)(cid:13)(cid:13) 2 (1) where e lj is a d -dimensional vector ( e lj ( k ) refers to the k th dimension of e lj ), and f consists of the non-blank characters in the maximum probability output (after the softmax layer).",
"We use a normalized version of g li,j to compare the contribution to e lj from different x i : g li,j = g li,j (cid:80) Tn =1 g ln,j For this analysis, we used a subset of 250 utterances for each accent that have almost the same underlying content.",
"5 A measure of focus of an input phone at level l how much the frames at level l corresponding to that phone draw their contributions from the corresponding frames in the input is obtained by summing up g li,j over i, j corresponding to the phone.",
"Figure 3 shows this quantity, averaged over all phones in all the utterances for each accent.",
"We observe that the focus decreases as we move from CONV to RNN 4 , with the largest drop appearing between CONV and RNN 0 .",
"This is intuitive as we expect some of the focus to shift from individual 5 This smaller sample was chosen for faster gradient computations and gave layer-wise phone accuracies similar to what we obtained for the complete test set of 1000 utterances.",
"A plot showing these consistent trends is included in Appendix D 3744 CONV RNN_0 RNN_1 RNN_2 RNN_3 RNN_4 Layers 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 B i na r y F o c u s M ea s u r e TIMIT canada us england australia scotland african indian Figure 4: Variation in binary focus measure, averaged over all the phones, across layers for various accents.",
"phones to their surrounding context, as we move to a recurrent layer (from the CONV layer).",
"This trend persists in moving from RNN 0 to RNN 4 with the focus on individual phones steadily dropping.",
"We also see a consistent albeit marginal trend across accents with US/Canadian-accented samples showing the lowest focus.",
"For each input phone, one can also define a binary measure of focus at level l , which checks that the focus of the frames at that level has not shifted to an input phone other than the one whose frames it corresponds to.",
"That is, this binary focus measure is 1 if the focus of the phone at a level as defined above is larger than the contribution from the input frames of every other phone.",
"Figure 4 shows how this measure, averaged across all phones for each accent, varies across layers.",
"Again, we see that focus is highest in the first CONV layer, dropping to 70% at RNN 1 and 45% at RNN 3 .",
"Further, again, we observe very similar trends across all accents.",
"From both the above analyses of focus, we observe that there is a pronounced drop in focus through the layers, but this trend is largely indepen-dent of the accent.",
"We also plot variations for the well-known TIMIT database (Garofolo, 1993) in both Figures 3 and 4 to confirm that the same trend persists.",
"For TIMIT, we used the samples from the specified test set along with the phonetic alignments that come with the dataset.",
"We conclude that information mixing, and in particular, the measures of focus we used, are more a feature of the network than the data.",
"In the previous section, we used gradient-based methods to examine how much an input frame (or a set of frames corresponding to a phone or a word) contributes to the output and how these measures change with varying accents.",
"Without computing gradients, one could also directly measure how SPEC CONV RNN_0 RNN_1 RNN_2 RNN_3 RNN_4 Layers 0.05 0.10 0.15 0.20 0.25 0.30 0.35 MI 1000 Clusters 5000 Clusters 10000 Clusters Figure 5: Mutual Information between hidden representations and accents across layers.",
"much information about accents is encoded within the representations at each layer.",
"Towards this, motivated by Voita et al. (2019), we compute the mutual information (MI) between random variables e l x and , where e l x refers to a representation at layer l corresponding to input x and [0 , 6] is a discrete random variable signifying accents.",
"We define a probability distribution for e l x by discretizing the space of embeddings via k -means clustering (Saj-jadi et al., 2018).",
"We use mini-batched k -means to cluster all the representations corresponding to files in the test sets mentioned in Table 1 across accents and use the cluster labels thereafter to compute MI.",
"Figure 5 shows how MI varies across different layers for three different values of k .",
"Increasing k would naturally result in larger MI values.",
"(The maximum possible value of MI for this task would be log 2 (7)",
".) We observe a dip in MI going from spectral features SPEC to CONV, which is natural considering that unprocessed acoustic features would contain most information about the underlying accent.",
"Interestingly, we observe a rise in MI going from CONV to RNN 0 signifying that the first layer of RNN-based representations carries the most information about accent (not considering the acoustic features).",
"All subsequent RNN layers yield lower MI values.",
"Apart from the MI between representations and accents that capture how much accent information is encoded within the hidden representations, we also compute MI between representations and a discrete random variable signifying phones.",
"The MI computation is analogous to what we did for accents.",
"We will now have a separate MI plot across layers corresponding to each accent.",
"Figure 6 shows the MI values across layers for each accent when k = 500 and k = 2000 .",
"We see an overall trend of increasing MI from initial to later layers.",
"Interestingly, the MI values across ac-3745 SPEC CONV RNN_0 RNN_1 RNN_2 RNN_3 RNN_4 Layers 1.0 1.5 2.0 2.5 3.0 MI us canada indian scotland england australia african",
"cents at RNN 4 exhibit a familiar ordering where US/Canadian accents receive the highest MI value while Indian and Scotland's accents receive the lowest MI value.",
"We also attempt to visualize the learned phone representations by projecting down to 2D.",
"For a specific phone, we use the precomputed alignments to compute averaged layer-wise representations across the frames within each phone alignment.",
"Figure 7 shows t-SNE based (Maaten and Hinton, 2008) 2D visualizations of representations for the 10 most frequent phones in our data, { ah', ih', iy', dh', d', l', n', r', s', t' }.",
"Each subplot corresponds to a layer in the network.",
"The plots for phones from the US-accented samples appear to have slightly more well-formed clusters, compared to the Indian-accented samples.",
"These kinds of visualizations of representations are, however, limiting and thus motivates the need for analysis like the MI computation presented earlier.",
"We train an accent classifier for each layer that takes the corresponding representations from the layer as its input.",
"We implemented a classifier with two convolutional layers of kernel size, stride and padding set to (31,21), (3,2), (15,10) and (11,5), (2,1) and (5,2), respectively.",
"We used batch",
"nor-(a) US accent",
"malization (Ioffe and Szegedy, 2015) followed by ReLU activations for each unit.",
"The network also contained two max-pooling layers of size (5,3) and (3,2), respectively, and a final linear layer with hidden dimensionality of 500 (with a dropout rate of 0.4).",
"Table 1 lists the number of utterances we used for each accent for training and evaluation.",
"The accent classifiers were trained for 25 epochs using Adam optimizer (Kingma and Ba, 2015) and a learning rate of 0.001.",
"Figure 8 shows the accent accuracies obtained by the accent classifier specific to each layer (along with error bars computed over five different runs).",
"RNN 0 is most accurate with an accuracy of about 33% and RNN 4 is least accurate.",
"It is interesting that RNN 0 representations are most discriminative across accents; this is also consistent with what we observe in the MI plots in Figure 5.",
"Akin to accent classifiers, we build a phone classifier for each layer whose input representations are labeled using phone alignments.",
"We train a simple multi-layer perceptron for each DS2 layer (500-dimensional, dropout rate of 0.4) for 10 epochs SPEC CONV RNN_0 RNN_1 RNN_2 RNN_3 RNN_4 Layers 10 15 20 25 30 35 A cc u r a cy % Figure 8: Accuracy (%) of accent probes trained on hidden representations at different layers.",
"using the Adam optimizer.",
"We train both frame-level classifiers, as well as phone-level classifiers that use averaged representations for each phone as input.",
"The accuracies of both types of phone classifiers are shown in Figure 9.",
"As expected, the phone accuracies improve going from SPEC to RNN 4 and the accuracies of US/Canadian samples are much higher than that of Indian samples.",
"Classifiers using the averaged representations consistently perform much better than their frame-level counterparts.",
"We note that Belinkov and Glass (2017) report a dip in phone accuracies for the last RNN layers, which we do not observe in our experiments.",
"To resolve this inconsistency, we ran phone classifiers on TIMIT (which was used in Belinkov and Glass (2017)) using representations from our DS2 model and the dip in RNN 4 accuracies surfaced (as shown in Figure 9).",
"This points to differences between the TIMIT and Mozilla Common Voice datasets.",
"(An additional experiment examining how phone classifiers behave on different datasets is detailed in Appendix D.) 6 Discussion This is the first detailed investigation of how accent information is reflected in the internal representations of an end-to-end ASR system.",
"In devising analysis techniques for ASR, while we do follow the broad approaches in the literature, the details are often different.",
"Most notably, the use of EMD for attribution analysis is novel, and could be of interest to others working with speech and other temporal data.",
"Similarly, the phone focus measures in the information mixing analysis are new.",
"We also highlight that this is the first instance of analysis of ASR consisting of multiple analysis techniques.",
"On the one hand, this has uncovered robust trends that manifest in more than one analysis.",
"On the other hand, it also shows how some trends are influenced more by the neural-network architecture more than the data.",
"This provides a platform for future work in speech neural-network analysis, across architectures, data-sets and tasks.",
"In our results, we encountered some unexpected details.",
"For instance, while the RNN 0 layer is seen to reduce the phone focus the most, uniformly across all accents (as shown in Figure 3, it is also seen to segregate accent information the most, recovering accent information lost in the convolution layer (as shown in Figure 5).",
"We also see this trend surfacing in Figure 8 where the accent classifier gives the highest accuracy for RNN 0 and the accuracies quickly taper off for subsequent layers.",
"This suggests that the first RNN layer is most discriminative of accents.",
"Models that use an adversarial objective to force the representations to be accent invariant (e.g., (Sun et al., 2018)) might benefit from defining the adversarial loss as a function of the representations in the first RNN layer.",
"Huang et al. (2001) show that accents are the primary source of speaker variability.",
"This poses a real-world challenge to ASR models which are primarily trained on native accented datasets.",
"The effect of accents is not limited to the English language, but also abundant in other languages such as Mandarin, Spanish, etc.",
"An interesting line of work exploits the ability to identify accents in order to improve performance.",
"Zheng et al. (2005) combine accent detection, accent discriminative acoustic features, acoustic model adaptation using MAP/MLLR and model selection to achieve improvements over accented Mandarin speech.Vergyri et al. (2010) investigate the effect of multiple accents on the performance of an English broadcast news recognition system using a multiple accented English dataset.",
"They report improvements by including data from all accents for an accent-independent acoustic model training.",
"Sun et al. (2018) propose the use of domain adversarial training (DAT) with a Time Delay Neu-3747 ral Network (TDNN)-based acoustic model.",
"They use native speech as the source domain and accented speech as the target domain, with the goal of generating accent-invariant features which can be used for recognition.",
"Jain et al. (2018) also use an accent classifier in conjunction with a multi-accent TDNN based acoustic model in a multitask learning (MTL) framework.",
"Further, Viglino et al. (2019) extended the MTL framework to use an end-to-end model based on the DS2 architecture and added a secondary accent classifier that uses representations from intermediate recurrent layers as input.",
"Chen et al. (2020) propose an alternate approach using generative adversarial networks (GANs) to disentangle accent-specific and accent-invariant components from the acoustic features.",
"Nagamine et al. (2015, 2016) were the first to examine representations of a DNN-based acoustic model trained to predict phones.",
"They computed selectivity metrics for each phoneme and found better selectivity and more significance in deeper layers.",
"This analysis was, however, restricted to the acoustic model.",
"Belinkov and Glass (2017) were the first to analyze a Deep Speech 2 model by training phone classifiers that used representations at each layer as its input.",
"These ideas were further extended in Belinkov et al. (2019) with classifiers used to predict phonemes, graphemes and articulatory features such as place and manner of articulation.",
"Belinkov and Glass (2019) present a comparison of different analysis methods that have been used in prior work for speech and language.",
"The methods include recording activations of pretrained networks on linguistically annotated datasets, using probing classifiers, analyzing attention weights and ABX discrimination tasks (Schatz et al., 2013).",
"Other related work includes the analysis of an audio-visual model for recognition in Alishahi et al. (2017), where the authors analyzed the activations of hidden layers for phonological information and observed a hierarchical clustering of the activations.",
"Elloumi et al. (2018) use auxiliary classifiers to predict the underlying style of speech as being spontaneous or non-spontaneous and as having a native or non-native accent; their main task was to predict the performance of an ASR system on unseen broadcast programs.",
"Analogous to saliency maps used to analyze images, Li et al. (2020) propose reconstructing speech from the hidden representations at each layer using highway networks.",
"Apart from ASR, analysis techniques have also been used with speaker embeddings for the task of speaker recognition (Wang et al., 2017).",
"The predominant tool of choice for analyzing ASR models in prior work has been classifiers that are trained to predict various phonological attributes using quantities extracted from the model as its input.",
"We propose a number of alternatives other than just the use of classifiers to probe for information within an end-to-end ASR model.",
"We hope this spurs more analysis-driven investigations into end-to-end ASR models.",
"This work presents a thorough analysis of how accent information manifests within an end-to-end ASR system.",
"The insights we gleaned from this investigation provide hints on how we could potentially adapt such end-to-end ASR models, using auxiliary losses, to be robust to variations across accents.",
"We will investigate this direction in future work.",
"The authors thank the anonymous reviewers for their constructive feedback and comments.",
"The second author gratefully acknowledges support from a Google Faculty Research Award and IBM Research, India (specifically the IBM AI Horizon Networks-IIT Bombay initiative)."
] | [
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"method",
"objective",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"other",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"method",
"method",
"abstain",
"other",
"other"
] |
[
"Keyphrases, that concisely summarize the high-level topics discussed in a document, can be categorized into present keyphrase which explicitly appears in the source text, and absent keyphrase which does not match any contiguous subsequence but is highly semantically related to the source.",
"Most existing keyphrase generation approaches synchronously generate present and absent keyphrases without explicitly distinguishing these two categories.",
"In this paper, a S electG uideG enerate (SGG) approach is proposed to deal with present and absent keyphrase generation separately with different mechanisms.",
"Specifically, SGG is a hierarchical neural network which consists of a pointing-based selector at low layer concentrated on present keyphrase generation, a selection-guided generator at high layer dedicated to absent keyphrase generation, and a guider in the middle to transfer information from selector to generator.",
"Experimental results on four keyphrase generation benchmarks demonstrate the effectiveness of our model, which significantly outperforms the strong baselines for both present and absent keyphrases generation.",
"Furthermore, we extend SGG to a title generation task which indicates its extensibility in natural language generation tasks.",
"Automatic keyphrase prediction recommends a set of representative phrases that are related to the main topics discussed in a document (Liu et al., 2009).",
"Since keyphrases can provide a high-level topic description of a document, they are beneficial for a wide range of natural language processing (NLP) tasks, such as information extraction (Wan and Xiao, 2008), text summarization (Wang and Cardie, 2013) and question generation (Subrama-nian et al., 2018).",
"Existing methods for keyphrase prediction can be categorized into extraction and generation approaches.",
"Specifically, keyphrase extraction methods identify important consecutive words from a given document as keyphrases, which means that the extracted keyphrases (denoted as present keyphrases ) must exactly come from the given document.",
"However, some keyphrases (denoted as absent keyphrases ) of a given document do not match any contiguous subsequence but are highly semantically related to the source text.",
"The extraction methods fail to predict these absent keyphrases.",
"Therefore, generation methods have been proposed to produce a keyphrase verbatim from a predefined vocabulary, no matter whether the generated keyphrase appears in the source text.",
"Compared with conventional extraction methods, generation methods have the ability of generating absent keyphrases as well as present keyphrases.",
"CopyRNN (Meng et al., 2017) is the first to employ the sequence-to-sequence (Seq2Seq) framework (Sutskever et al., 2014) with the copying mechanism (Gu et al., 2016) to generate keyphrases for the given documents.",
"Following the CopyRNN, several Seq2Seq-based keyphrase generation approaches have been proposed to improve the generation performance (Chen et al., 2018; Ye and Wang, 2018; Chen et al., 2019; Zhao and Zhang, 2019; Wang et al., 2019; Yuan et al., 2020).",
"All these existing methods generate present and absent keyphrases synchronously without ex-Training(%) Test(%) Inspec Krapivin NUS SemEval 49.79 13.12 11.74 11.30 11.25 Table 1: Proportions of absent keyphrases in training set and predictions of CopyRNN on four commonly used datasets, where top-10 predictions are considered.",
"plicitly distinguishing these two different categories of keyphrases, which leads to two problems: (1) They complicate the identification of present keyphrases.",
"Specifically, they search for words over the entire predefined vocabulary containing a vast amount of words ( e.g. , 50,000 words) to generate a present keyphrase verbatim, which is overparameterized since a present keyphrase can be simply selected from a continuous subsequence of the source text containing limited words ( e.g. , less than 400 words).",
"(2) They weaken the generation of absent keyphrases.",
"Existing models for absent keyphrase generation are usually trained on datasets mixed with a large proportion of present keyphrases.",
"Table 1 shows that nearly half of the training data are present keyphrases, which leads to the extremely low proportions of absent keyphrases generated by such a model, i.e. , CopyRNN.",
"The above observation demonstrates that these methods are biased towards replicating words from source text for present keyphrase generation, which will inevitably affect the performance on generating absent keyphrases.",
"To address the aforementioned problems, we propose a S electG uideG enerate (SGG) approach, which deals with present and absent keyphrase generation separately with different stages based on different mechanisms.",
"Figure 1 illustrates an example of keyphrase prediction by SGG.",
"The motivation behind is to solve keyphrase generation problem from selecting to generating, and use the selected results to guide the generation.",
"Specifi-cally, our SGG is implemented with a hierarchical neural network which performs Seq2Seq learning by applying a multi-task learning strategy.",
"This network consists of a selector at low layer, a generator at high layer, and a guider at middle layer for information transfer.",
"The selector generates present keyphrases through a pointing mechanism (Vinyals et al., 2015), which adopts attention distributions to select a sequence of words from the source text as output.",
"The generator further generates the absent keyphrases through a pointing-generating (PG) mechanism (See et al., 2017).",
"Since present keyphrases have already been generated by the selector, they should not be generated again by the generator.",
"Therefore, a guider is designed to memorize the generated present keyphrases from the selector, and then fed into the attention module of the generator to constrain it to focus on generating absent keyphrases.",
"We summarize our main contributions as follows: We propose a SGG approach which models present and absent keyphrase generation separately in different stages, i.e. , select, guide, and generate, without sacrificing the end-to-end training through back-propagation.",
"Extensive experiments are conducted to verify the effectiveness of our model, which not only improves present keyphrase generation but also dramatically boosts the performance of absent keyphrase generation.",
"Furthermore, we adopt SGG to a title generation task, and the experiment results indicate the extensibility and effectiveness of our SGG approach on generation tasks.",
"As mentioned in Section 1, the extraction and generation methods are two different research directions in the field of keyphrase prediction.",
"The existing extraction methods can be broadly classified into supervised and unsupervised approaches.",
"The supervised approaches treat keyphrase extraction as a binary classification task, which train the models with the features of labeled keyphrases to determine whether a candidate phrase is a keyphrase (Witten et al., 1999; Medelyan et al., 2009; Gollapalli et al., 2017).",
"In contrast, the unsupervised approaches treat keyphrase extraction as a ranking task, scoring each candidate using some different ranking metrics, such as clustering (Liu et al., 2009), or graph-based ranking (Mihalcea and Tarau, 2004; Wang et al., 2014; Gollapalli and Caragea, 2014; Zhang et al., 2017).",
"This work is mainly related to keyphrase generation approaches which have demonstrated good performance on keyphrase prediction task.",
"Following CopyRNN (Meng et al., 2017), several extensions have been proposed to boost the generation capability.",
"In CopyRNN, model training heavily relies on large amount of labeled data, which is often unavailable especially for the new domains.",
"To address this problem, Ye and Wang (2018) proposed a semi-supervised keyphrase generation model that utilizes both abundant unlabeled data and limited labeled data.",
"CopyRNN uses the concatenation of article title and abstract as input, ignoring the leading role of the title.",
"To address this deficiency, Chen et al. (2019) proposed a title-guided Seq2Seq network to sufficiently utilize the already summarized information in title.",
"In addition, some research attempts to introduce external knowledge into keyphrase generation, such as syntactic constraints (Zhao and Zhang, 2019) and latent topics (Wang et al., 2019).",
"These approaches do not consider the one-to-many relationship between the input text and target keyphrases, and thus fail to model the correlation among the multiple target keyphrases.",
"To overcome this drawback, Chen et al. (2018) incorporated the review mechanism into keyphrase generation and proposed a model CorrRNN with correlation constraints.",
"Similarly, SGG separately models one-to-many relationship between the input text and present keyphrases and absent keyphrases.",
"To avoid generating duplicate keyphrases, Chen et al. (2020) proposed an exclusive hierarchical decoding framework that includes a hierarchical decoding process and either a soft or a hard exclusion mechanism.",
"For the same purpose, our method deploys a guider to avoid the generator generating duplicate present keyphrases.",
"Last but most important, all these methods do not consider the difference between present and absent keyphrases.",
"We are the first to discriminately treat present and absent keyphrases in keyphrase generation task.",
"Given a dataset including K data samples, where the j -th data item (cid:104) x ( j ) , y ( j,p ) , y ( j,a ) (cid:105) consists of a source text x ( j ) , a set of present keyphrases y ( j,p ) and a set of absent keyphrases y ( j,a ) .",
"Different from CopyRNN (Meng et al., 2017) splitting each data item into multiple training examples, each of which contains only one keyphrase as target, we regard each data item as one training example by concatenating its present keyphrases as one target and absent keyphrases as another one.",
"Specifically, assume that the j -th data item consists of m present keyphrases { y ( j,p ) 1 , ..., y ( j,p ) m } and n absent keyphrases { y ( j,a ) 1 , ..., y ( j,a ) n } , the target present keyphrases y ( j,p ) and target absent keyphrases y ( j,a ) are represented as: y ( j,p ) = y ( j,p ) 1 || y ( j,p ) 2 || ... || y ( j,p ) m y ( j,a ) = y ( j,a ) 1 || y ( j,a ) 2 || ... || y ( j,a ) n where || is a special splitter to separate the keyphrases.",
"We then get the source text x ( j ) , the present keyphrases y ( j,p ) and the absent keyphrases y ( j,a ) all as word sequences.",
"Under this setting, our model is capable of generating multiple keyphrases in one sequence as well as capturing the mutual relations between these keyphrases.",
"A keyphrase generation model is to learn the mapping from the source text x ( j ) to the target keyphrases ( y ( j,p ) , y ( j,a ) ) .",
"For simplicity, ( x, y p , y a ) is used to denote each item in the rest of this paper, where x denotes a source text sequence, y p denotes its present keyphrase sequence and y a denotes its absent keyphrase sequence.",
"The architecture of our proposed S electG uide-G enerate (SGG) approach is illustrated in Figure 2.",
"Our model is the extension of Seq2Seq framework which consists of a text encoder , a selector , a guider , and a generator .",
"The text encoder converts the source text x into a set of hidden representation vectors { h i } Li =1 with a bidirectional Long Short-term Memory Network (bi-LSTM) (Hochreiter and Schmidhuber, 1997), where L is the length of source text sequence.",
"The selector is a uni-directional LSTM, which predicts the present keyphrase sequence y p based on the attention distribution over source words.",
"After selecting present keyphrases, a guider is produced by a guider to memorize the prediction information of the selector, and then fed to the attention module of a generator to adjust the information it pays attention to.",
"The selection-guided generator is also implemented as a uni-directional LSTM, which produces the absent keyphrase sequence y a based on two distributions over predefined-vocabulary and source words, respectively.",
"At the same time, a soft switch gate p gen is employed as a trade-off between the above two distributions.",
"The goal of a text encoder is to provide a series of dense representations { h i } Li =1 of the source text.",
"In our model, the text encoder is implemented as a bi-LSTM (Hochreiter and Schmidhuber, 1997) which reads an input sequence x = { x i } Li =1 from Guider Attention Step 1 Step t Step M Bi-LSTM LSTM , S o u r c e w o r d s Step 1 Step t Step N Attention LSTM S o u r c e w o r d s r V o c a bu l a r y Softmax Generator , Concate Guide , , , Sigmoid Selector Encoder Figure 2: The architecture of the proposed SGG which is implemented with a hierarchical neural network.",
"two directions and outputs a sequence of forward hidden states { h i } Li =1 and backward hidden states { h i } Li =1 by iterating the following equations: h i = LSTM ( x i , h i 1 ) (1) h i = LSTM ( x i , h i +1 ) (2) The final hidden representation h i of the i -th source word is the concatenation of forward and backward hidden states, i.e. , h i = [ h i ; h i ] .",
"A selector is designed to generate present keyphrase sequences through the pointer mechanism (Vinyals et al., 2015), which adopts the attention distribution as a pointer to select words from the source text as output.",
"Specifically, given source text sequence x and previously generated words { y p 1 , ..., y pt 1 } , the probability distribution of predicting next word y pt in present keyphrases is: P ( y pt | y p<t , x ) = p,t = softmax ( u p,t ) (3) u p,ti = V Tp tanh ( W p [ s pt ; h i ] + b p ) (4) where p,t is the attention (Bahdanau et al., 2015) distribution at decoding time step t , i (1 , ..., L ) , and V p , W p and b p are trainable parameters of the model.",
"u p,t can be viewed as the degree of matching between input at position i and output at position t .",
"A guider is designed to fully utilize the attention information of the selector to guide the generator on absent keyphrase generation.",
"The idea behind is to utilize a guider r to softly indicate which words in source text have been generated by the selector.",
"This is important for helping the generator to focus on generating the absent keyphrases.",
"Specifically, r is constructed through the accumulation of the attention distributions over all decoding time steps of the selector, computed as: r = M (cid:88) t =1 p,t (6) where M is the length of present keyphrase sequence.",
"r is an unnormalized distribution over the source words.",
"As the attention distribution of selector is equal to the probability distribution over the source words, r represents the possibility that these words have been generated by the selector.",
"The calculation of guider is inspired by the coverage vector (Tu et al., 2016) that is sequentially updated during the decoding process.",
"In contrast to this, the guider here is a static vector which is capable of memorizing a global information.",
"A generator aims to predict an absent keyphrase sequence based on the guidance of the selection information from the guider.",
"Unlike present keyphrases, most words in absent keyphrases do not appear in source text.",
"Therefore, the generator generates absent keyphrases by picking up words from both a predefined large scale vocabulary and the source text (See et al., 2017; Gu et al., 2016).",
"The probability distribution of predicting next word y at in absent keyphrases is defined as: P ( y at | y a<t , x ) = p gen P vocab ( y at ) + (1 p gen ) (cid:88) i : y at = x i a,ti (7) where P vocab is the probability distribution over the predefined vocabulary, which is zero if y at is an out-of-vocabulary (OOV) word.",
"Similarly, if y at does not appear in the source text, then (cid:80) i : y at = x i a,ti is zero.",
"P vocab is computed as: P vocab ( y at ) = softmax ( W [ s at ; c at ] + b ) (8) where W and b are learnable parameters, s at is the hidden state of generator, and c at is the context vector for generating absent keyphrase sequence, computed by the following equations: c at = L (cid:88) i =1 a,ti h i (9) a,t = softmax ( u a,t ) (10) u a,ti = V Ta tanh ( W a [ s at ; h i ; r ] + b a ) (11) where V a , W a and b a are learnable parameters.",
"r is a vector produced by the guider.",
"The generation probability p gen at time step t is computed as: p gen = ( W gen [ c at ; s at ; emb ( y at 1 )]+ b gen ) (12) where W gen and b gen are learnable parameters, ( ) represents a sigmoid function and emb ( y at 1 ) is the embedding of y at 1 .",
"In addition, p gen in formula (7) is used as a soft switch to choose either generating words over vocabulary or copying words from source text based on distribution a,t .",
"Given the set of data pairs { x ( j ) , y ( j,p ) , y ( j,a ) } Kj =1 the loss function of the keyphrase generation consists of two parts of cross entropy losses:",
"L p ( ) = K (cid:88) j =1 M (cid:88) i =1 log ( P ( y ( j,p ) i | x ( j ) ; )) (13) L a ( ) = K (cid:88) j =1 N (cid:88) i =1 log ( P ( y ( j,a ) i | x ( j ) ; )) (14)",
"where L p and L a are the losses of generating present and absent keyphrases, respectively.",
"N is the word sequence length of absent keyphrases, and are the parameters in our model.",
"The training objective is to jointly minimize the two losses: L = L p + L a .",
"We use the dataset collected by Meng et al. (2017) from various online digital libraries, which contains approximately 570K samples, each of which",
"contains a title and an abstract of a scientific publication as source text, and author-assigned keywords as target keyphrases.",
"We randomly select the example which contains at least one present keyphrase to construct the training set.",
"Then, a validation set containing 500 samples will be selected from the remaining examples.",
"In order to evaluate our proposed model comprehensively, we test models on four widely used public datasets from the scientific domain, namely Inspec (Hulth and Megyesi, 2006), Krapivin (Krapivin et al., 2009), SemEval-2010 (Kim et al., 2010) and NUS (Nguyen and Kan, 2007), the statistic information of which are summarized in Table 2.",
"For present keyphrase prediction, we compare our model with both extraction and generation approaches.",
"Extraction approaches include two unsupervised extraction methods: TF-IDF, TextRank (Mihalcea and Tarau, 2004) and one classic supervised extraction method KEA (Witten et al., 1999).",
"For the generation baselines, some models, such as CopyRNN, split each data item into multiple training examples, each of which only contains one keyphrase, while the other models concatenate all keyphrases as target.",
"To simplicity, the pattern of training model only with one keyphrase is denoted as one-to-one and with the concatenation of all keyphrases as one-to-many .",
"The generation baselines are the following state-of-the-art encoder-decoder models: CopyRNN(one-to-one) (Meng et al., 2017) represents a RNN-based encoder-decoder model incorporating the copying mechanism.",
"CorrRNN(one-to-many) (Chen et al., 2018) is an extension of CopyRNN incorporating the coverage mechanism (Tu et al., 2016).",
"CatSeq(one-to-many) (Yuan et al., 2020) has the same model structure as CopyRNN.",
"The difference is CatSeq is trained by one-to-many.",
"The baseline CopyTrans has not been reported in existing papers and thus is retrained.",
"The implementation of Transformer is base on open source tool OpenNMT 1 .",
"For our experiments of absent keyphrase generation, only generation methods are chosen as baselines.",
"The copying mechanism used in all reimplemented generation models is based on the version (See et al., 2017), which is slightly different from the implementations by version (Meng et al., 2017; Gu et al., 2016).",
"SGG indicates the full version of our proposed model, which contains a selector, a guider, and a generator.",
"Note that SGG is also trained under one-to-many pattern.",
"evalua-1 https://github.com/OpenNMT/OpenNMT-py",
"tion metrics for the present and absent keyphrases respectively.",
"The choice of larger N ( i.e. , 50 v.s. 5 and",
"10) for absent keyphrase is due to the fact that absent keyphrases are more difficult to be generated than present keyphrases.",
"For present keyphrase evaluation, exact match is used for determining whether the predictions are correct.",
"For absent keyphrase evaluation, Porter Stemmer is used to stem all the words in order to remove words' suffix before comparisons.",
"We set maximal length of source sequence as 400, 25 for target sequence of selector and generator, and 50 for the decoders of all generation baselines.",
"We choose the top 50,000 frequently-occurred words as our vocabulary.",
"The dimension of the word embedding is 128.",
"The dimension of hidden state in encoder, selector and generator is 512.",
"The word embedding is randomly initialized and learned during training.",
"We initialize the parameters of models with uniform distribution in [-0.2,0.2].",
"The model is optimized using Ada-grad (Duchi et al., 2011) with learning rate = 0.15, initial accumulator = 0.1 and maximal gradient normalization = 2.",
"In the inference process, we use beam search to generate diverse keyphrases and the beam size is 200 same as baselines.",
"All the models are trained on a single Tesla P40.",
"In this section, we present the results of present and absent keyphrase generation separately.",
"The results of predicting present keyphrases are shown in Table 3, in which the F1 at top-5 and top-10 predictions are given.",
"We first compare our proposed Method Inspec Krapivin NUS SemEval CopyRNN 13.12 11.74 11.30 11.25 SGG 79.16 79.28 76.02 79.20 Table 5: Proportion of absent keyphrases in the predictions of CopyRNN and generator.",
"model with the conventional keyphrase extraction methods.",
"The results show that our model performs better than extraction methods with a large margin, demonstrating the potential of the Seq2Seq-based generation models in automatic keyphrase extraction task.",
"We then compare our model with the generation baselines, and the results indicate that our model still outperforms these baselines significantly.",
"The better performance of SGG illustrates the pointing based selector is sufficient and more effective to generate present keyphrase.",
"We further analyze the experimental results of absent keyphrase generation.",
"Table 4 presents the recall results of the generation baselines and our model on four datasets.",
"It can be observed that our model significantly improves the performance of absent keyphrase generation, compared to the generation baselines.",
"This is because SGG is equipped with a generator that is not biased to generate present keyphrases and the designed guider in SGG further guides the generator to focus on generating absent keyphrases.",
"Table 5 shows the proportion of absent keyphrases generated by SGG.",
"The comparison of Table 1 and 5 demonstrates that our model have the ability to generate large portions of absent keyphrases rather than tending to generate present keyphrases.",
"In addition, an interesting phenomenon can be found from the results of CopyRNN and CatSeq that one-to-one pattern generally performs better than one-to-many if under the same model structure in absent keyphrase generation.",
"To explore this phenomenon, we use the same code, same training set to retrain CopyRNN under one-to-one and one-to-many patterns, and the test results show that one-to-one could boost the performance in absent keyphrase generation.",
"However, SGG cannot be trained under one-to-one pattern as the core of guider in SGG is to memory all present keyphrases.",
"Even so, SGG still has better performance than CopyRNN.",
"The results of SGG achieve 1.6% average gain than CopyRNN and 31.8% average gain than the best-performing results of one-to-many baselines over four test sets.",
"In this section, we explore the extensibility of SGG in other natural language generation (NLG) tasks, i.e. , title generation.",
"We adopt the same dataset described in Section 4.1 for title generation, which contains abstracts, present keyphrases, absent keyphrases, and titles.",
"Specifically, a title generation model takes an abstract as input and generates a title as output.",
"To train SGG model for title generation, present keyphrases appearing in the titles are used as labels to train the selectors 2 , and the titles are used to train the generators.",
"The idea behind is to utilize the present keyphrase generation as an auxiliary task to help the main title generation task.",
"In order to evaluate SGG on title generation, we choose models CopyTrans and pointer-generator (PG-Net) (See et al., 2017) as baselines.",
"We use ROUGE-1 (unigram), ROUGE-2 (bi-gram), ROUGE-L (LCS) and human evaluation as evaluation metrics.",
"For human evaluation, we randomly selects 100 abstracts for each test set, then distribute them to four people on average.",
"The evaluation standard is the fluency of generated title and whether it correctly provides the core topics of an abstract.",
"2 The present keyphrase information used for training SGG is not used during inference.",
"Datasets without given present keyphrases should consider to conduct labeling.",
"model SGG achieves better performance than the strong baselines on all datasets, proving that SGG could be directly applied to title generation task and still keep highly effective.",
"In this section, we further study the effectiveness of our proposed guider module.",
"Table 7 displays the results of SG (only a s elector, a g enerator, no guider) and its comparison with SGG on the two largest test sets Inspec and Krapivin, which illustrates that the guider has a remarkable effect on absent keyphrase and title generation tasks.",
"In more detail, we analyze that the function of guiders on these two tasks is different, which depends on the correlation between the targets of selector and generator.",
"For example, in the task of keyphrase generation, the words predicted from selector should not be repeatedly generated by generator because the present keyphrases and absent keyphrases in a given text usually do not have overlapping words.",
"However, in the task of title generation, the selected words by selector should be paid more attention on by generator since they are usually part of the target titles.",
"To verify the above analysis, we visualize two examples of the attention scores in generators for the two tasks in Figure 4.",
"For keyphrase generation, SG repeatedly generates implicit surfaces that has already been generated by its selector.",
"In contrast, SGG successfully avoids this situation and it correctly generates the absent keyphrase particle constraint.",
"For title generation, the guider helps SGG to assign higher attention scores to the words in seat reservation that has been generated by selector.",
"Figure 3 gives the proportion of test examples that the predictions of generator overlap with the predictions of selector.",
"We observe that SG is more likely to generate the words that have been generated by selector than SGG in keyphrase generation.",
"In contrast, the results on title generation indicate that SGG is more likely to generate previously selected words than SG for this task.",
"Through the analysis above, we conjecture that the guider is able to correctly guide the behaviour of generator in different tasks, i.e. , learn to encourage or discourage generating previously selected words.",
"In this paper, a Select-Guide-Generate (SGG) approach is proposed and implemented with a hierarchical neural model for keyphrase generation, which separately deals with the generation of",
"present and absent keyphrases.",
"Comprehensive empirical studies demonstrate the effectiveness of SGG.",
"Furthermore, a title generation task indicates the extensibility of SGG in other generation tasks.",
"Acknowledgments",
"Mikalai Krapivin, Aliaksandr Autaeu, and Maurizio Marchese.",
"2009.",
"Large dataset for keyphrases extraction.",
"Technical report, University of Trento.",
"Zhiyuan Liu, Peng Li, Yabin Zheng, and Maosong Sun.",
"2009.",
"Clustering to find exemplar terms for keyphrase extraction.",
"In Proceedings of EMNLP .",
"Olena Medelyan, Eibe Frank, and Ian H Witten.",
"2009.",
"Human-competitive tagging using automatic keyphrase extraction.",
"In Proceedings of EMNLP .",
"Rui Meng, Sanqiang Zhao, Shuguang Han, Daqing He, Peter Brusilovsky, and Yu Chi.",
"2017.",
"Deep keyphrase generation.",
"In Proceedings of ACL .",
"Rada Mihalcea and Paul Tarau.",
"2004.",
"Textrank: Bringing order into text.",
"In Proceedings of EMNLP .",
"Thuy Dung Nguyen and Min-Yen Kan. 2007.",
"Keyphrase extraction in scientific publications.",
"In Proceedings of International Conference on Asian Digital Libraries .",
"Abigail See, Peter J. Liu, and Christopher D. Manning.",
"2017.",
"Get to the point: Summarization with pointer-generator networks.",
"In Proceedings of ACL .",
"Sandeep Subramanian, Tong Wang, Xingdi Yuan, Saizheng Zhang, Adam Trischler, and Yoshua Ben-gio.",
"2018.",
"Neural models for key phrase extraction and question generation.",
"In Proceedings of ACL .",
"Ilya Sutskever, Oriol Vinyals, and Quoc V. Le.",
"2014.",
"Sequence to sequence learning with neural networks.",
"In Proceedings of NIPS .",
"Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li.",
"2016.",
"Modeling coverage for neural machine translation.",
"In Proceedings of ACL .",
"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, ukasz Kaiser, and Illia Polosukhin.",
"2017.",
"Attention is all you need.",
"In Proceedings of NIPS .",
"Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly.",
"2015.",
"Pointer networks.",
"In Proceedings of NIPS .",
"Xiaojun Wan and Jianguo Xiao.",
"2008.",
"Single document keyphrase extraction using neighborhood knowledge.",
"In Proceedings of AAAI .",
"Fang Wang, Zhongyuan Wang, Senzhang Wang, and Zhoujun Li.",
"2014.",
"Exploiting description knowledge for keyphrase extraction.",
"In Proceedings of PRICAI .",
"Lu Wang and Claire Cardie.",
"2013.",
"Domain-independent abstract generation for focused meeting summarization.",
"In Proceedings of ACL .",
"Yue Wang, Jing Li, Hou Pong Chan, Irwin King, Michael R. Lyu, and Shuming Shi.",
"2019.",
"Topic-aware neural keyphrase generation for social media language.",
"In Proceedings of ACL .",
"This work is supported by the National Key Research and Development Program of China under Grant No. 2018YFB2100802."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"result",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"objective",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Variational autoencoders (VAEs) with an autoregressive decoder have been applied for many natural language processing (NLP) tasks.",
"The VAE objective consists of two terms, ( i ) reconstruction and ( ii ) KL regularization, balanced by a weighting hyper-parameter \u0000 .",
"One notorious training difficulty is that the KL term tends to vanish.",
"In this paper we study scheduling schemes for \u0000 , and show that KL vanishing is caused by the lack of good latent codes in training the decoder at the beginning of optimization.",
"To remedy this, we propose a cyclical annealing schedule, which repeats the process of increasing \u0000 multiple times.",
"This new procedure allows the progressive learning of more meaningful latent codes, by leveraging the informative representations of previous cycles as warm re-starts.",
"The effectiveness of cyclical annealing is validated on a broad range of NLP tasks, including language modeling, dialog response generation and unsupervised language pre-training.",
"Variational autoencoders (VAEs) (Kingma and Welling, 2013; Rezende et al., 2014) have been applied in many NLP tasks, including language modeling (Bowman et al., 2015; Miao et al., 2016), dialog response generation (Zhao et al., 2017; Wen et al., 2017), semi-supervised text classification (Xu et al., 2017), controllable text generation (Hu et al., 2017), and text compression (Miao and Blunsom, 2016).",
"A prominent component of a VAE is the distribution-based latent representation for text sequence observations.",
"This flexible representation allows the VAE to explicitly model holistic properties of sentences, such as style, topic, and high-level linguistic and semantic features.",
"Samples from the prior latent distribution can produce Corresponding author Equal Contribution diverse and well-formed sentences through simple deterministic decoding (Bowman et al., 2015).",
"Due to the sequential nature of text, an autoregressive decoder is typically employed in the VAE.",
"This is often implemented with a recurrent neural network (RNN); the long short-term mem-ory (LSTM) (Hochreiter and Schmidhuber, 1997) RNN is used widely.",
"This introduces one notorious issue when a VAE is trained using traditional methods: the decoder ignores the latent variable, yielding what is termed the KL vanishing problem.",
"Several attempts have been made to ameliorate this issue (Yang et al., 2017; Dieng et al., 2018; Zhao et al., 2017; Kim et al., 2018).",
"Among them, perhaps the simplest solution is monotonic KL annealing, where the weight of the KL penalty term is scheduled to gradually increase during training (Bowman et al., 2015).",
"While these techniques can effectively alleviate the KL-vanishing issue, a proper unified theoretical interpretation is still lacking, even for the simple annealing scheme.",
"In this paper, we analyze the variable dependency in a VAE, and point out that the autoregressive decoder has two paths (formally defined in Section 3.1) that work together to generate text sequences.",
"One path is conditioned on the latent codes, and the other path is conditioned on previously generated words.",
"KL vanishing happens because ( i ) the first path can easily get blocked, due to the lack of good latent codes at the beginning of decoder training; ( ii ) the easiest solution that an expressive decoder can learn is to ignore the latent code, and relies on the other path only for decoding.",
"To remedy this issue, a promising approach is to remove the blockage in the first path, and feed meaningful latent codes in training the decoder, so that the decoder can easily adopt them to generate controllable observations (Bowman et al., 2015).",
"This paper makes the following contributions: ( i ) We provide a novel explanation for the KL-vanishing issue, and develop an understanding of the strengths and weaknesses of existing scheduling methods ( e.g. , constant or monotonic annealing schedules).",
"( ii )",
"Based on our explanation, we propose a cyclical annealing schedule.",
"It repeats the annealing process multiple times, and can be considered as an inexpensive approach to leveraging good latent codes learned in the previous cycle, as a warm restart, to train the decoder in the next cycle.",
"( iii )",
"We demonstrate that the proposed cyclical annealing schedule for VAE training improves performance on a large range of tasks (with negligible extra computational cost), including text modeling, dialog response generation, and unsupervised language pre-training.",
"To generate a text sequence of length T , x = [ x 1 , , x T ] , neural language models (Mikolov et al., 2010) generate every token x t conditioned on the previously generated tokens:",
"<t",
"The VAE model for text consists of two parts, generation and inference (Kingma and Welling, 2013; Rezende et al., 2014; Bowman et al., 2015).",
"The generative model ( decoder ) draws a continuous latent vector z from prior p ( z ) , and generates the text sequence x from a conditional distribution p ( x | z ) ; p ( z ) is typically assumed a multivariate Gaussian, and represents the neural network parameters.",
"The following auto-regressive decoding process is usually used: p ( x | z ) = TY t =1 p ( x t | x <t , z ) .",
"Parameters are typically learned by maximizing the marginal log likelihood log p ( x ) = log R p ( z ) p ( x | z ) d z .",
"However, this marginal term is intractable to compute for many decoder choices.",
"Thus, variational inference is considered, and the true posterior p ( z | x ) / p ( x | z ) p ( z ) is approximated via the variational distribution q \u0000 ( z | x ) is (often known as the inference model or encoder ), implemented via a \u0000 -parameterized neural network.",
"It yields the evidence lower bound (ELBO) as an objective: log p ( x ) \u0000 LELBO = (2) E q \u0000 ( z | x ) log p ( x | z ) \u0000 KL ( q \u0000 ( z | x ) || p ( z )) Typically, q \u0000 ( z | x ) is modeled as a Gaussian distribution, and the re-parametrization trick is used for efficient learning (Kingma and Welling, 2013).",
"There is an alternative interpretation of the ELBO: the VAE objective can be viewed as a regularized version of the autoencoder (AE) (Goodfel-low et al., 2016).",
"It is thus natural to extend the negative of LELBO in (2) by introducing a hyper-parameter \u0000 to control the strength of regularization: L \u0000 = LE + \u0000 LR , with (3) LE = \u0000 E q \u0000 ( z | x ) log p ( x | z ) (4) LR = KL ( q \u0000 ( z | x ) || p ( z )) (5) where LE is the reconstruction error (or negative log-likelihood (NLL)), and LR is a KL regularizer.",
"The cost function L \u0000 provides a unified perspective for understanding various autoencoder variants and training methods.",
"When \u0000 = 1 , we recover the VAE in (2).",
"When \u0000 = 0 , and q \u0000 ( z | x ) is a delta distribution, we recover the AE.",
"In other words, the AE does not regularize the variational distribution toward a prior distribution, and there is only a point-estimate to represent the text sequence's latent feature.",
"In practice, it has been found that learning with an AE is prone to overfitting (Bowman et al., 2015), or generating plain dialog responses (Zhao et al., 2017).",
"Hence, it is desirable to retain meaningful posteriors in real applications.",
"Two different schedules for \u0000 have been commonly used for a text VAE.",
"Constant Schedule The standard approach is to keep \u0000 = 1 fixed during the entire training procedure, as it corresponds to optimizing the true VAE objective.",
"Unfortunately, instability on text analysis has been witnessed, in that the KL term LR becomes vanishingly small during training (Bow-man et al., 2015).",
"This issue causes two undesirable outcomes: ( i ) an encoder that produces posteriors almost identical to the Gaussian prior, for all observations (rather than a more interesting pos-terior); and ( ii ) a decoder that completely ignores the latent variable z , and a learned model that reduces to a simpler language model.",
"This is known as the KL vanishing issue in text VAEs.",
"(b) VAE with an auto-regressive decoder Figure 1: Illustration of learning parameters { \u0000 , } in the two different paradigms.",
"Monotonic Annealing Schedule.",
"A simple remedy has been proposed in (Bowman et al., 2015) to alleviate KL collapse.",
"It sets \u0000 = 0 at the beginning of training, and gradually increases \u0000 until \u0000 = 1 is reached.",
"In this setting, we do not optimize the proper lower bound in (2) during the early stages of training, but nonetheless improvements on the value of that bound are observed at convergence in previous work (Bowman et al., 2015; Zhao et al., 2017).",
"The monotonic annealing schedule has become the de facto standard in training text VAEs, and has been widely adopted in many NLP tasks.",
"Though simple and often effective, this heuristic still lacks a proper justification.",
"Further, how to best schedule \u0000 is largely unexplored.",
"In the traditional VAE (Kingma and Welling, 2013), z generates x directly, and the reconstruction depends only on one path of { \u0000 , } passing through z , as shown in Figure",
"1(a).",
"Hence, z can largely determine the reconstructed x .",
"In contrast, when an auto-regressive decoder is used in a text VAE (Bowman et al., 2015), there are two paths from x to its reconstruction, as shown in Figure",
"1(b).",
"Path A is the same as that in the standard VAE, where z is the global representation that controls the generation of x ; Path B leaks the partial ground-truth information of x at every time step of the sequential decoding.",
"It generates x t conditioned on x <t .",
"Therefore, Path B can potentially bypass Path A to generate x , leading to KL vanishing.",
"From this perspective, we hypothesize that the model-collapse problem is related to the low quality of z at the beginning phase of decoder training.",
"A lower quality z introduces more difficulties in reconstructing x via Path A. As a result, the model is forced to learn an easier solution to decoding: generating x via Path B only.",
"We argue that this phenomenon can be easily observed due to the powerful representation capability of the auto-regressive decoder.",
"It has been shown empirically that auto-regressive decoders are able to capture highly-complex distributions, such as natural language sentences (Mikolov et al., 2010).",
"This means that Path B alone has enough capacity to model x , even though the decoder takes { x <t , z } as input to produce x t .",
"Zhang et al. (2017a) has shown that flexible deep neural networks can easily fit randomly labeled training data, and here the decoder can learn to rely solely on x <t for generation, when z is of low quality.",
"Constant Schedule The two loss terms in (2) are weighted equally in the constant schedule.",
"At the early stage of optimization, { \u0000 , } are randomly initialized and the latent codes z are of low quality.",
"The KL term LR pushes q \u0000 ( z | x ) close to an uninformative prior p ( z ) : the posterior becomes more like an isotropic Gaussian noise, and less representative of their corresponding observations.",
"In other words, LR blocks Path A, and thus z remains uninformative during the entire training process: it starts with random initialization and then is regularized towards a random noise.",
"Although the reconstruction term LE can be satisfied via two paths, since z is noisy, the decoder learns to discard Path A ( i.e., ignores z ), and chooses Path B to generate the sentence word-by-word.",
"Monotonic Annealing Schedule The monotonic schedule sets \u0000 close to 0 in the early stage of training, which effectively removes the blockage LR on Path A, and the model reduces to a denois-ing autoencoder 1 .",
"LE becomes the only objective, 1 The Gaussian sampling remains for q \u0000 ( z | x ) which can be reached by both paths.",
"Though randomly initialized, z is learned to capture useful information for reconstruction of x during training.",
"At the time when the full VAE objective is considered ( \u0000 = 1 ), z learned earlier can be viewed as the VAE initialization; such latent variables are much more informative than random, and thus are ready for the decoder to use.",
"To mitigate the KL-vanishing issue, it is key to have meaningful latent codes z at the beginning of training the decoder, so that z can be utilized.",
"The monotonic schedule under-weights the prior regularization, and the learned q \u0000 ( z | x ) tends to collapse into a point estimate ( i.e., the VAE reduces to an AE).",
"This underestimate can result in sub-optimal decoder learning.",
"A natural question concerns how one can get a better distribution estimate for z as initialization, while retaining low computational cost.",
"Our proposal is to use z q \u0000 ( z | x ) , which has been trained under the full VAE objective, as initialization.",
"To learn to progressively improve latent representation z , we propose a cyclic annealing schedule.",
"We start with \u0000 = 0 , increase \u0000 at a fast pace, and then stay at \u0000 = 1 for subsequent learning iterations.",
"This encourages the model to converge towards the VAE objective, and infers its first raw full latent distribution.",
"Unfortunately, Path A is blocked at \u0000 = 1 .",
"The optimization is then continued at \u0000 = 0 again, which perturbs the VAE objective, dislodges it from the convergence, and reopens Path A. Importantly, the decoder is now trained with the latent code from a full distribution z q \u0000 ( z | x ) , and both paths are considered.",
"We repeat this process several times to achieve better convergences.",
"Formally, \u0000 has the form: \u0000 t = f ( ) , R 1 , > R with (6) = mod( t \u0000 1 , d T/M e ) T/M , (7) where t is the iteration number, T is the total number of training iterations, f is a monotonically increasing function, and we introduce two new hyper-parameters associated with the cyclical annealing schedule: M : number of cycles (default M = 4 ); 0 0.5 1 \u0000 Monotonic 0 5K 10K 15K 20K 25K 30K 35K 40K Iteration 0 0.5 1 \u0000 Cyclical Figure 2: Comparison between",
"In other words, we split the training process into M cycles, each starting with \u0000 = 0 and ending with \u0000 = 1 .",
"We provide an example of a cyclical schedule in Figure",
"2(b), compared with the monotonic schedule in Figure",
"2(a).",
"Within one cycle, there are two consecutive stages (divided by R ): Annealing .",
"\u0000 is annealed from 0 to 1 in the first R d T/M e training steps over the course of a cycle.",
"For example, the steps [1 , 5 K ] in the Figure",
"2(b).",
"\u0000 = f (0) = 0 forces the model to learn representative z to reconstruct x .",
"As depicted in Figure",
"1(b), there is no interruption from the prior on Path A, z is forced to learn the global representation of x .",
"By gradually increasing \u0000 towards f ( R ) = 1 , q ( z | x ) is regularized to transit from a point estimate to a distribution estimate, spreading out to match the prior.",
"Fixing .",
"As our ultimate goal is to learn a VAE model, we fix \u0000 = 1 for the rest of training steps within one cycle, e.g., the steps [5 K, 10 K ] in Figure",
"2(b).",
"This drives the model to optimize the full VAE objective until convergence.",
"As illustrated in Figure 2, the monotonic schedule increasingly anneals \u0000 from 0 to 1 once, and fixes \u0000 = 1 during the rest of training.",
"The cyclical schedules alternatively repeats the annealing and fixing stages multiple times.",
"A Practical Recipe The existing schedules can be viewed as special cases of the proposed cyclical schedule.",
"The cyclical schedule reduces to the constant schedule when R = 0 , and it reduces to an monotonic schedule when M = 1 and R is relatively small 2 .",
"In theory, any monotonically increasing function f can be adopted for the cyclical schedule, as long as f (0) = 0 and f ( R ) = 1 .",
"In practice, we suggest to build the cyclical schedule upon the success of monotonic schedules: we adopt the same f , and modify it by setting M and R (as default).",
"Three widely used increasing functions for f are linear (Fraccaro et al., 2016; Goyal et al., 2017), Sigmoid (Bowman et al., 2015) and Consine (Lai et al., 2018).",
"We present the comparative results using the linear function f ( ) = /R in Figure 2, and show the complete comparison for other functions in Figure 7 of the Supplementary Material (SM).",
"This section derives a bound for the training objective to rigorously study the impact of \u0000 ; the proof details are included in SM.",
"For notational convenience, we identify each data sample with a unique integer index n q ( n ) , drawn from a uniform random variable on { 1 , 2 , , N } .",
"Further we define q ( z | n ) = q \u0000 ( z | x n ) and q ( z , n ) = q ( z | n ) q ( n ) = q ( z | n ) 1 N .",
"Following (Makhzani et al., 2016), we refer to q ( z ) = P Nn =1 q ( z | n ) q ( n ) as the aggregated posterior.",
"This marginal distribution captures the aggregated z over the entire dataset.",
"The KL term in (5) can be decomposed into two refined terms (Chen et al., 2018; Hoffman and Johnson, 2016): FR = E q ( n ) [ KL ( q ( z | n ) || p ( z ))] = I q ( z , n ) | {z } F 1 : Mutual Info.",
"+ KL ( q ( z ) || p ( z )) | {z } F 2 : Marginal KL (8) where F 1 is the mutual information (MI) measured by q .",
"Higher MI can lead to a higher correlation between the latent variable and data variable, and encourages a reduction in the degree of KL vanishing.",
"The marginal KL is represented by F 2 , and it measures the fitness of the aggregated posterior to the prior distribution.",
"The reconstruction term in (5) provides a lower bound for MI measured by q , based on Corollary 3 in (Li et al., 2017): FE = E q ( n ) , z q ( z | n ) (log p ( n | z ))] + H q ( n ) I q ( z , n ) (9) where H ( n ) is a constant.",
"2 In practice, the monotonic schedule usually anneals in a very fast pace, thus R is small compared with the entire training procedure.",
"Analysis of \u0000 When scheduled with \u0000 , the training objective over the dataset can be written as: F = \u0000 FE + \u0000 FR (10) \u0000 ( \u0000 \u0000 1) I q ( z , n ) + \u0000 KL ( q ( z ) || p ( z )) (11) To reduce KL vanishing, we desire an increase in the MI term I ( z , n ) , which appears in both FE and FR , modulated by \u0000 .",
"It shows that reducing KL vanishing is inversely proportional with \u0000 .",
"When \u0000 = 0 , the model fully focuses on maximizing the MI.",
"As \u0000 increases, the model gradually transits towards fitting the aggregated latent codes to the given prior.",
"When \u0000 = 1 , the implementation of MI becomes implicit in KL ( q ( z ) || p ( z )) .",
"It is determined by the amortized inference regularization (implied by the encoder's expressivity) (Shu et al., 2018), which further affects the performance of the generative density estimator.",
"We compare different schedule methods by visualizing the learning processes on an illustrative problem.",
"Consider a dataset consisting of 10 sequences, each of which is a 10-dimensional one-hot vector with the value 1 appearing in different positions.",
"A 2-dimensional latent space is used for the convenience of visualization.",
"Both the encoder and decoder are implemented using a 2-layer LSTM with 64 hidden units each.",
"We use T = 40 K total iterations, and the scheduling schemes in Figure 2. The learning curves for the ELBO, reconstruction error, and KL term are shown in Figure 3. The three schedules share very similar values.",
"However, the cyclical schedule provides substantially lower reconstruction error and higher KL divergence.",
"Interestingly, the cyclical schedule improves the performance progressively: it becomes better than the previous cycle, and there are clear periodic patterns across different cycles.",
"This suggests that the cyclical schedule allows the model to use the previously learned results as a warm-restart to achieve further improvement.",
"We visualize the resulting division of the latent space for different training steps in Figure 4, where each color corresponds to z q ( z | n ) , for n = 1 , , 10 .",
"We observe that the constant schedule produces heavily mixed latent codes z for different sequences throughout the entire training process.",
"The monotonic schedule starts with a mixed z , but soon divides the space into a mixture",
"of 10 cluttered Gaussians in the annealing process (the division remains cluttered in the rest of train-ing).",
"The cyclical schedule behaves similarly to the monotonic schedule in the first 10K steps (the first cycle).",
"But, starting from the 2nd cycle, much more divided clusters are shown when learning on top of the 1st cycle results.",
"However, \u0000 < 1 leads to some holes between different clusters, making q ( z ) violate the constraint of p ( z ) .",
"This is alleviated at the end of the 2nd cycle, as the model is trained with \u0000 = 1 .",
"As the process repeats, we see clearer patterns in the 4th cycle than the 2nd cycle for both \u0000 < 0 and \u0000 = 1 .",
"It shows that more structured information is captured in z using the cyclical schedule, which is beneficial in downstream applications as shown in the experiments.",
"Solutions to KL vanishing Several techniques have been proposed to mitigate the KL vanishing issue.",
"The proposed method is most closely related to the monotonic KL annealing technique in (Bowman et al., 2015).",
"In addition to introducing a specific algorithm, we have comprehensively studied the impact of \u0000 and its scheduling schemes.",
"Our explanations can be used to interpret other techniques, which can be broadly categorized into two classes.",
"The first category attempts to weaken Path B, and force the decoder to use Path A. Word drop decoding (Bowman et al., 2015) sets a certain percentage of the target words to zero.",
"It has shown that it may degrade the performance when the drop rate is too high.",
"The dilated CNN was considered in (Yang et al., 2017) as a new type of decoder to replace the LSTM.",
"By changing the decoder's dilation architecture, one can control Path B: the effective context from x <t .",
"The second category of techniques improves the dependency in Path A, so that the decoder uses latent codes more easily.",
"Skip connections were developed in (Dieng et al., 2018) to shorten the paths from z to x in the decoder.",
"Zhao et al. (2017) introduced an auxiliary loss that requires the decoder to predict the bag-of-words in the dialog response (Zhao et al., 2017).",
"The decoder is thus forced to capture global information about the target response.",
"Zhao et al. (2019) enhanced Path A via mutual information.",
"Concurrent with our work, He et al. (2019) proposed to update encoder multiple times to achieve better latent code before updating decoder.",
"Semi-amortized training (Kim et al., 2018) was proposed to perform stochastic variational inference (SVI) (Hoffman et al., 2013) on top of the amortized inference in VAE.",
"It shares a similar motivation with the proposed approach, in that better latent codes can reduce KL vanishing.",
"However, the computational cost to run SVI is high, while our monotonic schedule does not require any additional compute overhead.",
"The KL scheduling methods are complementary to these techniques.",
"As shown in experiments, the proposed cyclical schedule can further improve them.",
"\u0000 -VAE The VAE has been extended to \u0000 regularized versions in a growing body of work (Higgins et al., 2017; Alemi et al., 2018).",
"Perhaps the seminal work is \u0000 -VAE (Higgins et al., 2017), which was extended in (Kim and Mnih, 2018; Chen et al., 2018) to consider \u0000 on the refined terms in the KL decomposition.",
"Their primary goal is to learn disentangled latent representations to explain the data, by setting \u0000 > 1 .",
"From an information-theoretic point of view, (Alemi et al., 2018) suggests a simple method to set \u0000 < 1 to ensure that latent-variable models with powerful stochastic decoders do not ignore their latent code.",
"However, \u0000 6 = 1 results in an improper statistical model.",
"Further, \u0000 is static in their work; we consider dynamically scheduled \u0000 and find it more effective.",
"Cyclical schedules Warm-restart techniques are common in optimization to deal with multimodal functions.",
"The cyclical schedule has been used to train deep neural networks (Smith, 2017), warm restart stochastic gradient descent (Loshchilov and Hutter, 2017), improve convergence rates (Smith and Topin, 2017), obtain model ensembles (Huang et al., 2017) and explore multimodal distributions in MCMC sampling (Zhang et al., 2019).",
"All these works applied cyclical schedules to the learning rate.",
"In contrast, this paper represents the first to consider the cyclical schedule for \u0000 in VAE.",
"Though the techniques seem simple and similar, our motivation is different: we use the cyclical schedule to re-open Path A in Figure",
"1(b) and provide the opportunity to train the decoder with high-quality z .",
"The source code to reproduce the experimental results will be made publicly available on GitHub 3 .",
"For a fair comparison, we follow the practical recipe described in Section 3.2, where the monotonic schedule is treated as a special case of cycli-3 https://github.com/haofuml/cyclical_ annealing Schedule Rec KL ELBO PPL VAE M 101.73 0.907 -102.63 108.09 C 100.51 1.955 -102.46 107.25 SA-VAE M 100.75 1.796 -102.54 107.64 M 101.83 1.053 -102.89 109.33 C 100.50 2.261 -102.76 108.71 \u0000 m =0 .",
"cal schedule (while keeping all other settings the same).",
"The default hyper-parameters of the cyclical schedule are used in all cases unless stated otherwise.",
"We study the impact of hyper-parameters in the SM, and show that larger M can provide higher performance for various R .",
"We show the major results in this section, and put more details in the SM.",
"The monotonic and cyclical schedules are denoted as M and C , respectively.",
"We first consider language modeling on the Penn Tree Bank (PTB) dataset (Marcus et al., 1993).",
"Language modeling with VAEs has been a challenging problem, and few approaches have been shown to produce rich generative models that do not collapse to standard language models.",
"Ideally a deep generative model trained with variational inference would pursue higher ELBO, making use of the latent space ( i.e., maintain a nonzero KL term) while accurately modeling the underlying distribution ( i.e., lower reconstruction errors).",
"We implemented different schedules based on the code 4 published by Kim et al. (2018).",
"The latent variable is 32-dimensional, and 40 epochs are used.",
"We compare the proposed cyclical annealing schedule with the monotonic schedule baseline that, following (Bowman et al., 2015), anneals linearly from 0 to 1.0 over 10 epochs.",
"We also compare with semi-amortized (SA) training (Kim et al., 2018), which is considered as the state-of-the-art technique in preventing KL vanishing.",
"We set SVI steps to 10.",
"Results are shown in Table 1. The perplexity is reported in column PPL.",
"The cyclical schedule outperforms the monotonic schedule for both standard VAE and SA-VAE training.",
"SA-VAE training 4 https://github.com/harvardnlp/sa-vae",
"can effectively reduce KL vanishing, it takes 472s per epoch.",
"However, this is significantly more expensive than the standard VAE training which takes 30s per epoch.",
"The proposed cyclical schedule adds almost zero cost.",
"We show the learning curves for VAE and SA-VAE in Figure 5. Interestingly, the cyclical schedule exhibits periodical learning behaviours.",
"The performance of the cyclical schedule gets better progressively, after each cycle.",
"While ELBO and PPL ar similar, the cyclical schedule improves the reconstruction ability and KL values for both VAE and SA-VAE.",
"We observe clear over-fitting issues for the SA-VAE with the monotonic schedule, while this issue is less severe for SA-VAE with the cyclical schedule.",
"Finally, we further investigate whether our improvements are from simply having a lower \u0000 , rather than from the cyclical schedule re-opening Path A for better learning.",
"To test this, we use a monotonic schedule with maximum \u0000 = 0 .",
"5 .",
"We observe that the reconstruction and KL terms perform better individually, but the ELBO is substantially worse than \u0000 = 1 , because \u0000 = 0 .",
"5 yields an improper model.",
"Even so, the cyclical schedule improves its performance.",
"We use a cyclical schedule to improve the latent codes in (Zhao et al., 2017), which are key to diverse dialog-response generation.",
"Follow-Model CVAE CVAE+BoW Schedule M C M C Rec-P # 36.16 29.77 18.44 16.74 KL Loss \" 0.265 4.104 14.06 15.55 B4 prec 0.185 0.234 0.211 0.219 B4 recall 0.122 0.220 0.210 0.219 A-bow prec 0.957 0.961 0.958 0.961 A-bow recall 0.911 0.941 0.938 0.940 E-bow prec 0.867 0.833 0.830 0.828 E-bow recall 0.784 0.808 0.808 0.805 Table 3: Comparison on dialog response generation.",
"ing (Zhao et al., 2017), Switchboard (SW) Corpus (Godfrey and Holliman, 1997) is used, which has 2400 two-sided telephone conversations.",
"Two latent variable models are considered.",
"The first one is the Conditional VAE (CVAE), which has been shown better than the encoder-decoder neural dialog (Serban et al., 2016).",
"The second is to augment VAE with a bag-of-word (BoW) loss to tackle the KL vanishing problem, as proposed in (Zhao et al., 2017).",
"Table 2 shows the sample outputs generated from the two schedules using CVAE.",
"Caller Alice begins with an open-ended statement on choosing a college, and the model learns to generate responses from Caller Bob .",
"The cyclical schedule generated highly diverse answers that cover multiple plausible dialog acts.",
"On the contrary, the responses from the monotonic schedule are limited to repeat plain responses, i.e., i'm not sure .",
"Quantitative results are shown in Table 3, using the evaluation metrics from (Zhao et al., 2017).",
"( i )",
"Smoothed Sentence-level BLEU (Chen and Cherry, 2014): BLEU is a popular metric that measures the geometric mean of modified n-gram precision with a length penalty.",
"We use BLEU-1 to 4 as our lexical similarity metric and normalize the score to 0 to 1 scale.",
"( ii )",
"Cosine Distance of Bag-of-word Embedding (Liu et al., 2016): a simple method to obtain sentence embeddings is to take the average or extreme of all the word embeddings in the sentences.",
"We used Glove embedding and denote the average method as A \u0000 bow and extreme method as E \u0000 bow .",
"The score is normalized to [0 , 1] .",
"Higher values indicate more plausible responses.",
"The BoW indeed reduces the KL vanishing issue, as indicated by the increased KL and decreased reconstruction perplexity.",
"When applying the proposed cyclical schedule to CVAE, we also see a reduced KL vanishing issue.",
"Interestingly, it also yields the highest BLEU scores.",
"This suggests that the cyclical schedule can generate dialog responses of higher fidelity with lower cost, as the auxiliary BoW loss is not necessary.",
"Further, BoW can be improved when integrated with the cyclical schedule, as shown in the last column of Table 3. 6.3 Unsupervised Language Pre-training We consider the Yelp dataset, as pre-processed in (Shen et al., 2017) for unsupervised language pre-training.",
"Text features are extracted as the latent codes z of VAE models, pre-trained with monotonic and cyclical schedules.",
"The AE is used as the baseline.",
"A good VAE can learn to cluster data into meaningful groups (Kingma and Welling, 2013), indicating that well-structured z are highly informative features, which usually leads to higher classification performance.",
"To clearly compare the quality of z , we build a simple one-layer classifier on z , and fine-tune the model on different proportions of labelled data (Zhang et al., 2017b).",
"The results are shown in Figure 6.",
"The cyclical schedule consistently yields the highest accuracy relative to other methods.",
"We visualize the tSNE embeddings (Maaten and Hinton, 2008) of z in Figure 9 of the SM, and observe that the cyclical schedule exhibits clearer clustered patterns.",
"To enhance the performance, we propose to apply the cyclical schedule to the learning rate on real tasks.",
"It ensures that the optimizer has the same length of optimization trajectory for each \u0000 cycle (so that each cycle can fully converge).",
"To investigate the impact of cyclical on , we perform two more ablation experiments: ( i ) We make only \u0000 cyclical, keep constant.",
"( ii )",
"We make only cyclical, keep \u0000 monotonic.",
"The last epoch num-bers are shown in Table 4, and the learning curves on shown in Figure 10 in SM.",
"Compared with the baseline, we see that it is the cyclical \u0000 rather than cyclical that contributes to the improved performance.",
"We provide a novel two-path interpretation to explain the KL vanishing issue, and identify its source as a lack of good latent codes at the beginning of decoder training.",
"This provides an understanding of various \u0000 scheduling schemes, and motivates the proposed cyclical schedule.",
"By reopening the path at \u0000 = 0 , the cyclical schedule can progressively improve the performance, by leveraging good latent codes learned in the previous cycles as warm re-starts.",
"We demonstrate the effectiveness of the proposed approach on three NLP tasks, and show that it is superior to or complementary to other techniques."
] | [
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"objective",
"objective",
"other",
"abstain",
"other",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective"
] |
[
"The lack of publicly available evaluation data for low-resource languages limits progress in Spoken Language Understanding (SLU).",
"As key tasks like intent classification and slot filling require abundant training data, it is desirable to reuse existing data in high-resource languages to develop models for low-resource scenarios.",
"We introduce XSID, a new benchmark for cross-lingual ( X ) Slot and Intent Detection in 13 languages from 6 language families, including a very low-resource dialect.",
"To tackle the challenge, we propose a joint learning approach, with English SLU training data and non-English auxiliary tasks from raw text, syntax and translation for transfer.",
"We study two setups which differ by type and language coverage of the pre-trained embeddings.",
"Our results show that jointly learning the main tasks with masked language modeling is effective for slots, while machine translation transfer works best for intent classification.",
"1 1 Introduction Digital conversational assistants have become an integral part of everyday life and they are available, e.g., as standalone smart home devices or in smartphones.",
"Key steps in such task-oriented conversational systems are recognizing the intent of a user's utterance, and detecting the main arguments, also called slots .",
"For example, for an utterance like Add reminder to swim at 11am tomorrow, these key Natural Language Understanding (NLU), or Spoken Language Understanding (SLU) tasks are illustrated in Figure 1. As slots depend on the intent type, leading models typically adopt joint solutions (Chen et al., 2019; Qin et al., 2020).",
"Despite advances in neural modeling for slot and intent detection ( 6), datasets for SLU remain limited, hampering progress toward providing SLU for many language varieties.",
"Most avail-1 The source code, dataset and predictions are available at: https://bitbucket.org/robvanderg/xsid Add reminder to swim at 11am tomorrow intent: add reminder Figure 1: English example from XSID annotated with intents ( add reminder ) and slots ( todo , datetime ).",
"We release XSID, a new benchmark intended for SLU evaluation in low-resource scenarios.",
"XSID contains evaluation data for 13 languages from six language families, including a very low-resource dialect.",
"It homogenizes annotation styles of two recent datasets (Schuster et al., 2019; Coucke et al., 2018) and provides the broadest public multilingual evaluation data for modern digital assistants.",
"Most previous efforts to multilingual SLU typically focus on translation or multilingual embeddings transfer.",
"In this work, we propose an orthogonal approach, and study non-English auxiliary tasks for transfer.",
"We hypothesize that jointly training on target language auxiliary tasks helps to learn properties of the target language while learning a related task simultaneously.",
"We expect that this helps to refine the multilingual representations for better SLU transfer to a new language.",
"We evaluate a broad range of auxiliary tasks not studied before in such combination, exploiting raw data, syntax in Universal Dependencies (UD) and parallel data.",
"Our contributions",
"i) We provide XSID, a new cross-lingual SLU evaluation dataset covering Arabic (ar), Chinese (zh), Danish (da), Dutch (nl), English (en), German (de), Indonesian (id), Italian (it), Japanese (ja), Kazakh (kk), Serbian (sr), Turkish (tr) and an Austro-Bavarian German dialect, South Tyrolean (de-st).",
"ii) We experiment with new Dataset Source Langs.",
"non-English auxiliary tasks for joint cross-lingual transfer on slots and intents: UD parsing, machine translation (MT), and masked language modeling.",
"iii) We compare our proposed models to strong baselines, based on multilingual pre-trained language models mBERT (Devlin et al., 2019) and xlm-mlm-tlm-xnli15-1024 (Conneau et al., 2020) (henceforth XLM15), where the former was pre-trained on 12 of our 13 languages, and XLM15 on 5 of our 13 languages, thereby simulating a low-resource scenario.",
"We also compare to a strong machine translation model (Qin et al., 2020).",
"The remainder of this paper is structured as follows: we start by giving an overview of existing datasets and introduce XSID ( 2), then we discuss our baselines and proposed extensions ( 3).",
"After this, we discuss the performance of these models ( 4), and provide an analysis ( 5) before we end with the related work on cross-lingual SLU ( 6) and the conclusion ( 7).",
"An overview of existing datasets is shown in Table 1. It should be noted that we started the creation of XSID at the end of 2019, when less variety was available.",
"We choose to use the Snips (Coucke et al., 2018) and Facebook (Schuster et al., 2019) data as a starting point.",
"Most existing datasets are English only (all datasets in Table 1 include English), and they differ in the domains they cover.",
"For example, Atis (Hemphill et al., 1990) is focused on airline-related queries, CSTOP (Einolghozati et al., 2021) 2 The notion of domain is ill-defined within the scope of this task.",
"We report the numbers from the paper, and, for Snips, we have identified the following: alarm, reminder, weather, restaurant, creative works.",
"other datasets cover multiple domains.",
"Extensions of Atis to new languages are a main direction.",
"These include translations to Chinese (He et al., 2013), Italian (Bellomaria et al., 2019), Hindi and Turkish (Upadhyay et al., 2018) and very recently, the MultiAtis++ corpus (Xu et al., 2020) with 9 languages in 4 language families.",
"To the best of our knowledge, this is the broadest publicly available SLU corpus to date in terms of the number of languages, yet the data itself is less varied.",
"Almost simultaneously, Schuster et al. (2019) provide a dataset for three new top-ics (alarm, reminder, weather) in three languages (English, Spanish and Thai).",
"English utterances for a given intent were first solicited from the crowd, translated into two languages (Spanish and Thai), and manually annotated for slots.",
"We follow these approaches, but depart from the Snips (Coucke et al., 2018) and Facebook (Schuster et al., 2019) datasets to create a more varied resource covering 13 languages, while homogenizing the annotations.",
"XSID is a cross-lingual SLU evaluation dataset covering 13 languages from six language families with English training data.",
"In what follows, we provide details on the creation of XSID ( 2.2), including homogenization of annotation guidelines and English source training data ( 2.3).",
"For data statement and guidelines, we refer the reader to Section E, F and G in the Appendix.",
"As a starting point, we extract 400 random English utterances from the Snips data (Coucke et al., 2018) as well as 400 from the Facebook data (Schuster et al., 2019), which for both consist of 250 utterances from the test-split and 150 from the dev-split.",
"We maintain the splits from the original data in Lang.",
"XSID (i.e. sentences in XSID test are from Snips test or Facebook test).",
"We then translate this sample into all of our target languages.",
"It should be noted that some duplicates occur in the random sample of the Facebook data.",
"Since these instances naturally occur more often, we decided to retain them to give a higher weight to common queries in the final evaluation.",
"3 XSID includes Arabic (ar), Chinese (zh), Danish (da), Dutch (nl), English (en), German (de), Indonesian (id), Italian (it), Japanese (ja), Kazakh (kk), Serbian (sr), Turkish (tr) and an Austro-Bavarian German dialect, South Tyrolean (de-st).",
"4 We have 13 evaluation languages with 800 sentences per language 5 resulting in a final dataset of 10,000 sentences.",
"The language selection is based on availability of translators/annotators (most of them are co-authors of this paper, i.e. highly-educated with a background in NLP).",
"We favor this setup over crowd-sourcing, i.e. quality and breadth in annotation and languages, and because for some languages crowd-sourcing is not an option.",
"6 For more information on the data and annotators we refer to the dataset statement in Appendix E. 3 This decision has been made after discussion with a real-world digital assistant team.",
"4 The dialect is spoken by roughly 450,000 speakers in an Alpine province in Northern Italy.",
"It has no official ISO language code nor a normed writing form.",
"The first step of the dataset creation was the translation.",
"For this, the goal was to provide a fluent translation which was as close as possible to the original meaning.",
"Because the data consists of simple, short utterances, we consider our annotator pool to be adequate for this task (even though they are not professional translators).",
"The intents could easily be transferred from the English data, but the slots needed to be re-annotated, which was done by the same annotators.",
"Unfortunately, we were unable to retrieve annotation guidelines from the earlier efforts.",
"Hence, as a first step of and as part of training, we derived annotation guidelines by jointly re-annotating dev and test portions of the English parts of the two data sources.",
"These guidelines were revised multiple times in the process to derive the final guidelines for the whole dataset.",
"Ultimately, the data collection process proceeded in two steps: translation of the data from English, and slot annotation in the target language.",
"The aim of the guidelines was to generalize labels to make them more broadly applicable to other intent subtypes, and remove within-corpus annotation variation (see Appendix G for details).",
"We calculated inter-annotator agreement for the guidelines; three annotators native in Dutch annotated 100 samples, and reached a Fleiss Kappa (Fleiss, 1971) score of 0.924, which is very high agreement.",
"Common mistakes included annotation of question words, inclusion of locations in reminders, and the inclusion of function words in the spans.",
"We updated the guidelines after the agreement study.",
"After these target phase annotation rounds, we fi-nalized the guidelines, which are provided in the Appendix G and form the basis for the provided data.",
"Table 2 provides an example annotation for all 13 languages for the example sentence I'd like to see the showtimes for Silly Movie 2.0 at the movie house.",
"These example translations illustrate not only the differences in scripts, but also differences in word order and length of spans, confirming the distances between the languages.",
"Because of our revised guidelines for the Facebook data and mismatches in granularity of labels between the Snips and Facebook data, we homogenize the original training data for both sources and include it in our release.",
"For the Facebook data, this includes rule-based fixing of spans and recognition of the REFERENCE and RECURRING TIME labels.",
"7 For the Snips data, we convert a variety of labels that describe a location to the LOCATION label which is used in the Facebook data, and labels describing a point or range in time to DATETIME .",
"After this process, we simply concatenate both resulting datasets, and shuffle them before training.",
"The resulting training data has 43,605 sentences.",
"Our main hypothesis is that we can improve zero-shot transfer with target-language auxiliary tasks.",
"We hypothesize that this will help the multilingual pre-trained base model to learn peculiarities about the target language, while it is learning the target task as well.",
"To this end, we use three (sets of) tasks with a varying degree of complexity and availability: 1) Masked Language Modeling (MLM): which is in spirit similar to pre-training on another domain (Gururangan et al., 2020), however, we learn this jointly with the target task to avoid catastrophic forgetting (McCloskey and Cohen, 1989); 2) Neural Machine Translation (NMT): where we learn English SLU as well as translation from English to the target language; and 3) Universal Dependency (UD) parsing: to insert linguistic knowledge into the shared parameter space to learn from syntax as auxiliary task besides learning the SLU task.",
"In the following subsections, we first describe the implementation of our baseline model, and the machine translation-based model, and then de-7 For more details on this procedure, we refer to scripts/0.fixOrigAnnotation.py in the repo.",
"scribe the implementation of all auxiliary tasks (and the data used to train them).",
"Auxiliary tasks are sorted by dataset availability (MLM (cid:31) NMT (cid:31) UD), where the first type can be used with any raw text, the second one needs parallel data which is readily available for many languages as a byproduct of multilingual data sources and the last one requires explicit human annotation.",
"For South Tyrolean, a German dialect, no labeled target data of any sort is available; we use the German task data instead.",
"We provide more details of data sources and sizes in Appendix B. 3.1 Baseline All our models are implemented in MaChAmp v0.2 (van der Goot et al., 2021), an AllenNLP-based (Gardner et al., 2018) multi-task learning toolkit.",
"It uses contextual embeddings, and fine-tunes them during training.",
"In the multi-task setup, the encoding is shared, and each task has its own decoder.",
"For slot prediction, a greedy decoding with a softmax layer is used, for intents it uses a linear classification layer over the [CLS] token (see Figure 2).",
"8 The data for each task is split in batches, and the batches are then shuffled.",
"We use the default hyperparameters of MaChAmp for all experiments which were optimized on a wide variety of tasks (van der Goot et al., 2021).",
"9 The following models are extensions of this baseline.",
"In the NMT-transfer model ( 3.2), the training data is translated before passing it into the model.",
"For the auxiliary models ( 3.3, 3.4 and 3.5), we simply add another decoder next to the intent and slot decoders.",
"The losses are summed, and typically weighted (multiplied) by a factor which is given in 8 We also tried to use a CRF layer for slots which consistently led to lower performance.",
"corresponding subsections.",
"We enable the proportional sampling option of MaChAmp (multinomial sampling = 0 . 5 ) in all multi-task experiments, to avoid overfitting to the auxiliary task.",
"For comparison, we trained a NMT model to translate the NLU training data into the target language, and map the annotations using attention.",
"As opposed to most previous work using this method (Xu et al., 2020; He et al., 2013; Schuster et al., 2019), we opt for an open-source implementation and provide the scripts to rerun the experiments.",
"More specifically, we use the Fairseq toolkit (Ott et al., 2019) implementation of the Transformer-based model (Vaswani et al., 2017) with default hyperparameters.",
"Sentences were encoded using byte-pair encoding (BPE) (Sennrich et al., 2016), with a shared vocabulary of 32,000 tokens.",
"At inference time, we set the beam size to 4, and extracted alignment scores to target tokens calculated from the attention weights matrix.",
"These scores are used to align annotation labels to target language outputs; we map the label of each token to the highest scoring alignment target token.",
"We convert the output to valid BIO tags: we use the label of the B for the whole span, and an I following an O is converted to a B. Data To ensure that our machine translation data is suitable for the target domain, we choose to use a combination of transcribed spoken parallel data.",
"For languages included in the IWSLT 2016 Ted talks dataset (Cettolo et al., 2016), we use the train and development data included, and enlarge the training data with the training split from Opensubtitles 10 2018 (Lison and Tiedemann, 2016), and Tatoeba (Tiedemann, 2012).",
"For languages absent in IWSLT2016, we used the Opensubtitles data for training and Tatoeba as development set.",
"For Kazakh, the Opensubtitles data only contains 2,000 sentences, so we concatenated out-of-domain data from the WMT2019 data (Barrault et al., 2019), consisting of English-Kazakh crawled corpora.",
"We adapt the BertBasic tokenizer (which splits punctuation, it does not perform subword tokenization) to match the Facebook and Snips dataset tokenization and use this to pre-tokenize the data.",
"Previous work has shown that continuing to train a language model with an MLM objective on raw data close to the target domain leads to performance improvements (Gururangan et al., 2020).",
"However, in our setup, task-specific training data and target data are from different languages.",
"Therefore, in order to learn to combine the language and the task in a cross-lingual way, we train the model jointly with MLM and task-specific classification objective on target and training languages respectively.",
"We apply the original BERT masking strategy and we do not include next sentence prediction following Liu et al. (2019a).",
"For computational efficiency, we limit the number of input sentences to 100,000 and use a loss weight of 0.01 for MLM training.",
"Data For our masked language modeling objective, we use the target language machine translation data described above.",
"To jointly learn to transfer linguistic knowledge from English to the target language together with the target task, we implement a NMT decoder based on the shared encoder.",
"We use a sequence-to-sequence model (Sutskever et al., 2014) with a recurrent neural network decoder, which suits the auto-regressive nature of the machine translation tasks (Cho et al., 2014), and an attention mechanism to avoid compressing the whole source sentence into a fixed-length vector (Bahdanau et al., 2015).",
"We found that fine-tuning the shared encoder achieves good performance on our machine translation datasets (Conneau and Lample, 2019; Clinchant et al., 2019), alleviating the need for freezing its parameters during training in order to avoid catastrophic forgetting (Imamura and Sumita, 2019; Goodfellow et al., 2014).",
"Similar to MLM, we use 100,000 sentences, and a weight of 0.01.",
"Using syntax in hierarchical multi-task learning has previously shown to be beneficial (Hashimoto et al., 2017; Godwin et al., 2016).",
"We here use full Universal Dependency (UD) parsing, i.e., part-of-speech (POS) tagging, lemmatization, morphological tagging and dependency parsing as joint mBERT en de-st de da nl it sr id ar zh kk tr ja Avg.",
"auxiliary tasks, as opposed to previous hierarchical MTL work.",
"For all tasks we use the default settings of MaChAmp and set the loss weight of each UD subtask to 0.25.",
"Data For each language, we manually picked a matching UD treebank from version 2.6 (Nivre et al., 2020) (details in the Appendix).",
"Whenever available, we picked an in-language treebank, otherwise we choose a related language.",
"We used size, annotation quality, and domain as criteria.",
"We target a low-resource setup, and hence all our experiments assume no target-language training nor development data for the target task.",
"For all our experiments we use the English training from the Facebook and Snips data, and their English development sets (all converted to match our guidelines, see 2).",
"We use strict-span F1 score for slots (where both span and label must match exactly) and accuracy for intents as main evaluation metric as is standard for these tasks.",
"11 All reported results (including analysis and test data) are the average over 5 runs with different random seeds.",
"To choose the final model, we use the scores on the English development data.",
"We are aware that this was recently shown to be sub-optimal in some settings (Keung et al., 2020), however there is no clear solution on how to circumvent this in a pure zero-shot cross-lingual setup (i.e. without assuming any target language target task annotation data).",
"We use multilingual BERT (mBERT) as contextual encoder for our experiments.",
"We are also interested in low-resource setups.",
"As all of our languages are included in pre-training of mBERT (ex-cept the de-st dialect), we also study XLM15 ( XLMMLM-TLM-XNLI 15-1024), which in pre-training covers only 5 of the 13 XSID languages, to simulate further a real low-resource setup.",
"Table 3 reports the scores on 13 XSID languages, for 2 tasks (slot and intent prediction) and 2 pre-11 Ill-formed spans are automatically converted to match the BIO-scheme (first word with I is converted to B, and B-I spans with different labels are converted to all match the first label).",
"trained language models.",
"Languages are ordered by language distance, whenever available.",
"Below we discuss the main findings per task.",
"Slots For slot filling, auxiliary tasks are beneficial for the majority of the languages, and the best performing multi-task model (aux-mlm) achieves +1.3 for mBERT and +7.7 for XLM15 average improvements over the baseline.",
"By comparing mBERT and XLM15, there are significant performance drops for languages not seen during XLM15 pre-training, e.g., Danish (da) and Indonesian (id).",
"This confirms that having a language in pre-training has a large impact on cross-lingual transfer for this task.",
"For other languages involved in pre-training, both aux-mlm and aux-ud beat the baseline model.",
"This supports our hypothesis that, after multilingual pre-training, auxiliary tasks (with token-level prediction both self-supervised and supervised) help the model learn the target language and a better latent alignment for cross-lingual slot filling.",
"Intents For intent classification the nmt-transfer model is very strong as it uses explicit translations, especially for languages not seen during pretraining.",
"Using nmt as an auxiliary task does not come close, however, it should be noted that this only uses a fraction of the data and computational costs (see 5.4).",
"One main limitation of the nmt-transfer model is that it is dependant on a high-quality translation model, which in turn requires a large quantity of in-domain parallel data.",
"Results on Kazakh (kk) confirm this, where the translation model is trained on out-of-domain data, because in-domain data was not available ( 3.2).",
"Our main findings are confirmed on the test data (Table 4), where we also evaluate on MultiAtis++.",
"The nmt-transfer model perform superior on intents, whereas its performance on slots is worse.",
"The best auxiliary setups are aux-mlm followed by aux-ud.",
"Most significant gains with auxiliary tasks are obtained for languages not included in pre-training (XLM15).",
"We believe there is a bug for aux-nmt with XLM15 (see also results in Appendix C), which we unfortunately could not resolve before submission time.",
"Furthermore, we 0.2 0.4 0.6 0.8 mBERT base nmt-transfer aux-mlm aux-nmt aux-ud 0.2 0.4 0.6 0.8 XLM15 strict F1 unlabeled F1 loose F1 Figure 4: F1 scores variants for each model, averaged over 12 languages (English is not included).",
"believe more tuning of machine translation can increase its viability as auxiliary task.",
"In general our results on MultiAtis++ are lower compared to Xu et al. (2020), which is probably because they used a black-box translation model.",
"In Figure 3a we plot the performance increase over baseline for each auxiliary task with respect to the language distance when using mBERT.",
"The results confirm that aux-mlm is the most promising auxiliary model, and clearly show that it is most beneficial for languages with a large distance to English.",
"Figure 3b shows the same plot for the XLM15 models, and here the trends are quite different.",
"First, we see that also for close languages, aux-ud as well as aux-mlm are beneficial.",
"Second, the aux-ud model also performs better for the more distant languages.",
"To evaluate whether the detection of the slots or the classification of the label is the bottleneck, we experiment with two varieties of the F1 score.",
"For the first variant, we ignore the label and consider only whether the span is correct.",
"We refer to this as unlabeled F1.",
"For span detection, we allow for partial matches (but with the same label) which count towards true positives for precision and recall.",
"We refer to this metric as loose F1.",
"Average scores with all three F1 scores for both pre-trained embeddings are plotted in Figure 4. One of the main findings is that nmt-transfer does very well on the loose F1 metric, which means that b a s e n m t t r a n s f e r a u x m l m a u x n m t a u x u d b a s e n m t t r a n s f e r a u x m l m a u x n m t a u x u d 0.0 0.2 0.4 0.6 0.8 1.0 mBERT XLM15 Lang2vec Auxiliary Figure 5: Pearson correlations between target tasks performance (average of slots/intents) and 1) language distance as estimated by lang2vec, and 2) the auxiliary task.",
"it is poor at finding spans, instead of labeling them.",
"For the other models the difference between strict and unlabeled F1 is smaller, and both can gain approximately 5-10% absolute score for both types of errors.",
"The only other large difference is for aux-nmt with XLM15, which makes more errors in the labeling (unlabeled F1 is higher).",
"An analysis of the per-language results show that this is mainly due to errors made in the Kazakh dataset.",
"In Figure 5 we plot the absolute Pearson correlations between the auxiliary task (auxiliary task performance can be found in Appendix C) and the target tasks performance as well as between the target tasks and the language distance (from lang2vec, see Table 3).",
"Here we use the average of slots/intents as score for the target task.",
"The results show that when using only datasets from languages included in the pre-trained language model (i.e., mBERT), both language distance and auxiliary task performance are competitive predictors, whereas if also new languages are considered (XLM15) auxiliary task performance is clearly a stronger predictor.",
"All experiments are executed on a single v100 Nvidia GPU.",
"To compare computational costs, Table 5 reports the average training time over all languages for each of the models.",
"The training time for nmt-transfer is the highest, followed by aux-nmt, then come the leaner auxiliary tasks.",
"The inference Model Time (minutes) base 3 nmt-transfer 5,145 aux-mlm 220 aux-nmt 464 aux-ud 57 Table 5: Average minutes to train a model, averaged over all languages and both embeddings.",
"time of all the models for the SLU tasks is highly similar due to the similar architecture (except for nmt-transfer requiring fairSeq a-priori).",
"Our lowest-resource language variety de-st is not included in either embeddings, and the performance on it is generally low.",
"To mitigate this, we investigate whether a small amount of raw data could improve the aux-mlm model.",
"We scraped 23,572 tweets and 6,583 comments from ask.fm manually identified by a native speaker, and used these as auxiliary data in the aux-mlm model.",
"Although this data is difficult to obtain and contains a mix including standard German and others, it resulted in an increase from 49.9 to 56.2 in slot F1 scores and from 68.0 to 68.7 for intents, compared to using the German data in aux-mlm, thereby largely outperforming the baseline.",
"This shows that even small amounts of data are highly beneficial in aux training, confirming results of Muller et al. (2021).",
"For related datasets, we refer to 2.1; in this section we will discuss different approaches on how to tackle cross-lingual SLU.",
"Work on cross-lingual SLU can broadly be divided into two approaches, whether it is based mainly on parallel data or multilingual representations.",
"The first stream of research focuses on generating training data in the target language with machine translation and mapping the slot labels through attention or an external word aligner.",
"The translation-based approach can be further improved by filtering the resulting training data (Gaspers et al., 2018; Do and Gaspers, 2019), post-fixing the annotation by humans (Castellucci et al., 2019), or by using a soft-alignment based on attention, which alleviates error propagation and outperforms annotation projection using external word aligners (Xu et al., 2020).",
"The second stream of research uses multilingual representations.",
"Upadhyay et al. (2018) use bilingual word embeddings based on Smith et al. (2017) in a bidirectional Long Short-Term Memory model for zero-shot SLU.",
"Recent work focuses on finding better multilingual representations.",
"Schuster et al. (2019) use a multilingual machine translation encoder as word representations.",
"Liu et al. (2019b) propose refining the alignment of bilingual word representations.",
"The best performing variants use contextualized BERT variants (Chen et al., 2019; Xu et al., 2020), which we depart from.",
"We propose a third, orthogonal line of research: joint target-language auxiliary task learning.",
"We hypothesize that jointly training on target language auxiliary tasks helps to learn properties of the target language while learning a related task simultaneously.",
"We frame masked language modeling, Universal Dependency parsing and machine translation as new auxiliary tasks for SLU.",
"Some work on SLU showed that syntax in graph convolution networks is beneficial for slots (Qin et al., 2020).",
"Contemporary work shows that high-resource English data helps target language modeling in sequential transfer setups (Phang et al., 2020).",
"We focus on non-English target data for joint SLU in a single cross-lingual multi-task model instead.",
"We introduced XSID, a multilingual dataset for spoken language understanding with 13 languages from 6 language families, including an unstudied German dialect.",
"XSID includes a wide variety of intent types and homogenized annotations.",
"We propose non-English multi-task setups for zero-shot transfer to learn the target language: masked language modeling, neural machine translation and UD parsing.",
"We compared the effect of these auxiliary tasks in two settings.",
"Our results showed that masked language modeling led to the most stable performance improvements; however, when a language is not seen during pre-training, UD parsing led to an even larger performance increase.",
"On the intents, generating target language training data using machine translation was outperforming all our proposed models, at a much higher computational cost however.",
"Our analysis further shows that nmt-transfer struggles with span detection.",
"Given training time and availability trade-off, MLM multitasking is a viable approach for SLU.",
"We would like to thank Yiping Duan, Kristian Nrgaard Jensen, Illona Flecchi, Mike Zhang, and Caroline van der Goot for their annotation efforts.",
"We thank Dennis Ulmer for his help with significance testing.",
"Furthermore, we thank Fabian Triefenbach, Judith Gaspers and the anonymous reviewers for the feedback.",
"We also thank NVIDIA, Google cloud computing and the ITU High-performance Computing cluster for computing resources.",
"This research is supported in part by the Independent Research Fund Denmark (DFF) grant 9131-00019B and 9063-00077B and an Amazon Faculty Research (ARA) Award."
] | [
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"objective",
"other",
"objective",
"abstain",
"other",
"method",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"objective",
"objective",
"other",
"other",
"method",
"abstain",
"abstain",
"objective",
"method",
"result",
"objective",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other"
] |
[
"Existing works on information extraction (IE) have mainly solved the four main tasks separately (entity mention recognition, relation extraction, event trigger detection, and argument extraction), thus failing to benefit from inter-dependencies between tasks.",
"This paper presents a novel deep learning model to simultaneously solve the four tasks of IE in a single model (called FourIE).",
"Compared to few prior work on jointly performing four IE tasks, FourIE features two novel contributions to capture inter-dependencies between tasks.",
"First, at the representation level, we introduce an interaction graph between instances of the four tasks that is used to enrich the prediction representation for one instance with those from related instances of other tasks.",
"Second, at the label level, we propose a dependency graph for the information types in the four IE tasks that captures the connections between the types expressed in an input sentence.",
"A new regularization mechanism is introduced to enforce the consistency between the golden and predicted type dependency graphs to improve representation learning.",
"We show that the proposed model achieves the state-of-the-art performance for joint IE on both monolingual and multilingual learning settings with three different languages.",
"Information Extraction (IE) is an important and challenging task in Natural Language Processing (NLP) that aims to extract structured information from unstructured texts.",
"Following the terminology for IE in the popular ACE 2005 program (Walker et al., 2006), we focus on four major IE tasks in this work: entity mention extraction (EME), relation extraction (RE), event trigger detection (ETD), and event argument extraction (EAE).",
"Given an input sentence, a vast majority of prior work has solved the four tasks in IE independently at both instance and task levels (called independent Person Vehicle Transport Facility A man driving what appeared to be a taxicab came to the checkpoint , Person waved soldiers over , appeared to be having mechanical problems of some kind . PHYS ART 00 Artifact Destination PHYS Figure 1: A sentence example with the annotations for the four IE tasks. Blue words corresponds to entity mentions while red words are event triggers. Also, or-ange edges represent relations while green edges indicate argument roles. prediction models).",
"First, at the instance level, each IE task often requires predictions/classifications for multiple instances in a single input sentence.",
"For instance, in RE, one often needs to predict relations for every pair of entity mentions (called relation instances) in the sentence while multiple word spans in the sentence can be viewed as multiple instances where event type predictions have to be made in ETD (trigger instances).",
"As such, most prior work on IE has performed predictions for instances in a sentence separately by treating each instance as one example in the dataset (Zhou et al., 2005; Nguyen and Grishman, 2015a; Santos and Guimaraes, 2015; Chen et al., 2015; Nguyen and Grishman, 2015b; Lai et al., 2020).",
"Second, at the task level, prior work on IE tends to perform the four tasks in a pipelined approach where outputs from one task are used as inputs for other tasks (e.g., EAE is followed by EME and ETD) (Li et al., 2013; Chen et al., 2015; Veyseh et al., 2020c).",
"Despite its popularity, the main issue of the independent prediction models is that they suffer from the error propagation between tasks and the failure to exploit the cross-task and cross-instance interdependencies within an input sentence to improve the performance for IE tasks.",
"For instance, such systems are unable to benefit from the dependency that the Victim of a Die event has a high chance to Span Detection Mention Trigger A man driving what appeared to be a taxicab came to the checkpoint , waved soldiers over , BERT Encoder + Two Conditional Random Fields for event trigger and entity mention sequence labeling Instance Interaction A man driving what appeared to be a taxicab came to the checkpoint , waved soldiers over , came man taxicab checkpoint soldiers Event trigger Entity mention Event argument Relation (Candidates) Type Prediction & Regularization Instance representations: Soft predicted labels: Gold labels: Gumbel-Softmax One-hot samples: Figure 2: Overall architecture of our proposed model.",
"also be the Victim of an Attack event in the same sentence (i.e., type or label dependencies).",
"To address these issues, some prior work has explored joint inference models where multiple tasks of IE are performed simultaneously for all task instances in a sentence, using both feature-based models (Roth and Yih, 2004; Li et al., 2013; Miwa and Sasaki, 2014; Yang and Mitchell, 2016) and recent deep learning models (Miwa and Bansal, 2016; Zhang et al., 2019).",
"However, such prior work has mostly considered joint models for a subset of the four IE tasks (e.g., EME+RE or ETD+EAE), thus still suffering from the error propagation issue (with the missing tasks) and failing to fully exploit potential inter-dependencies between the four tasks.",
"To this end, this work aims to design a single model to simultaneously solve the four IE tasks for each input sentence (joint four-task IE) to address the aforementioned issues of prior joint IE work.",
"Few recent work has considered joint four-task IE, using deep learning to produce state-of-the-art (SOTA) performance for the tasks (Wadden et al., 2019; Lin et al., 2020).",
"However, there are still two problems that hinder further improvement of such models.",
"First, at the instance level, an important component of deep learning models for joint IE involves the representation vectors of the instances that are used to perform the corresponding prediction tasks for IE in an input sentence (called predictive instance representations).",
"For joint four-task IE, we argue that there are inter-dependencies between predictive representation vectors of related instances for the four tasks that should be modeled to improve the performance for IE.",
"For instance, the entity type information encoded in the predictive representation vector for an entity mention can constrain the argument role that the representation vector for a related EAE instance (e.g., involving the same entity mention and some event trigger in the same sentence) should capture and vice versa.",
"As such, prior work for joint four-task IE has only computed predictive representation vectors for instances of the tasks independently using shared hidden vectors from some deep learning layer (Wad-den et al., 2019; Lin et al., 2020).",
"Although this shared mechanism helps capture the interaction of predictive representation vectors to some extent, it fails to explicitly present the connections between related instances of different tasks and encode them into the representation learning process.",
"Consequently, to overcome this issue, we propose a novel deep learning model for joint four-task IE (called FourIE ) that creates a graph structure to explicitly capture the interactions between related instances of the four IE tasks in a sentence.",
"This graph will then be consumed by a graph convolutional network (GCN) (Kipf and Welling, 2017; Nguyen and Grishman, 2018) to enrich the representation vector for an instance with those from the related (neigh-boring) instances for IE.",
"Second, at the task level, existing joint four-task models for IE have only exploited the cross-task type dependencies in the decoding step to constrain predictions for the input sentence (by manually converting the type dependency graphs of the input sentence into global feature vectors for scoring the predictions in the beam search-based decoding) (Lin et al., 2020).",
"The knowledge from cross-task type dependencies thus cannot contribute to the training process of the IE models.",
"This is unfortunate as we expect that deeper integration of this knowledge into the training process could provide useful information to enhance representation learning for IE tasks.",
"To this end, we propose to use the knowledge from cross-task type dependencies to obtain an additional training signal for each sentence to directly supervise our joint four-task IE model.",
"In particular, our motivation is that the types expressed in a sentence for the four IE tasks can be organized into a dependency graph between the types (global type dependencies for the sentence).",
"As such, in order for a joint model to perform well, the type dependency graph generated by its predictions for a sentence should be similar to the dependency graph obtained from the golden types (i.e., a global type constraint on the predictions in the training step).",
"A novel regularization term is thus introduced into the training loss of our joint model to encode this constraint, employing another GCN to learn representation vectors for the predicted and golden dependency graphs to facilitate the graph similarity promotion.",
"To our knowledge, this is the first work that employs global type dependencies to regularize joint models for IE.",
"Finally, our extensive experiments demonstrate the effectiveness of the proposed model on benchmark datasets in three different languages (e.g., English, Chinese, and Spanish), leading to state-of-the-art performance on different settings.",
"Problem Statement : The joint four-task IE problem in this work takes a sentence as the input and aims to jointly solve four tasks EAE, ETD, RE, and EAE using an unified model.",
"As such, the goal of EME is to detect and classify entity mentions (names, nominals, pronouns) according to a set of predefined (semantic) entity types (e.g., Person ).",
"Similarly, ETD seeks to identify and classify event triggers (verbs or normalization) that clearly evoke an event in some predefined set of event types (e.g., Attack ).",
"Note that event triggers can involve multiple words.",
"For RE, its concern is to predict the semantic relationship between two entity mentions in the sentence.",
"Here, the set of relations of interest is also predefined and includes a special type of None to indicate no-relation .",
"Finally, in EAE, given an event trigger, the systems need to predict the roles (also in a predefined set with a special type None ) that each entity mention plays in the corresponding event.",
"Entity mentions are thus also called event argument candidates in this work.",
"Figure 1 presents a sentence example where the expected outputs for each IE task are illustrated.",
"Graph Convolutional Networks (GCN) : As GCNs are used extensively in our model, we present their computation process in this section to facilitate the discussion.",
"Given a graph G = ( V , E ) where V = { v 1 , . . . , v u } is the node set (with u nodes) and E is the edge set.",
"In GCN, the edges in G are often captured via the adjacency matrix A R u u .",
"Also, each node v i V is associated with an initial hidden vector v 0 i .",
"As such, a GCN model involves multiple layers of abstraction in which the hidden vector v li for the node v i V at the l -th layer is computed by ( l 1 ): v li = ReLU ( (cid:80) uj =1 A ij W l v l 1 j + b l (cid:80) uj =1 A ij ) where W l and b l are trainable weight and bias at the l -th layer.",
"Assuming NGCN layers, the hidden vectors for the nodes in V at the last layer v N 1 , . . . , v Nu would capture richer and more abstract information for the nodes, serving as the outputs of the GCN model.",
"This process is denoted by: v N 1 , . . . , v Nu = GCN ( A ; v 01 , . . . , v 0 u ; N ) .",
"Given an input sentence w = [ w 1 , w 2 , . . . , w n ] (with n words), our model for joint four-task IE on w involves three major components:",
"(i) Span Detection,",
"(ii) Instance Interaction, and",
"(iii) Type Dependency-based Regularization.",
"This component aims to identify spans of entity mentions and event triggers in w that would be used to form the nodes in the interaction graph between different instances of our four IE tasks for w .",
"As such, we formulate the span detection problems as sequence labeling tasks where each word w i in w is associated with two BIO tags to capture the span information for entity mentions and event triggers in w .",
"Note that we do not predict entity types and event types at this step, leading to only three possible values (i.e., B, I, and O) for the tags of the words.",
"In particular, following (Lin et al., 2020), we first feed w into the pre-trained BERT encoder (Devlin et al., 2019) to obtain a sequence of vectors X = [ x 1 , x 2 , . . . , x n ] to represent w .",
"Here, each vector x i serves as the representation vector for the word w i w that is obtained by averaging the hidden vectors of the word-pieces of w i returned by BERT.",
"Afterward, X is fed into two conditional random field (CRF) layers to determine the best BIO tag sequences for event mentions and event triggers for w , following (Chiu and Nichols, 2016).",
"As such, the Viterbi algorithm is used to decode the input sentence while the negative log-likelihood losses are employed as the training objectives for the span detection component of the model.",
"For convenience, let L entspan and L trgspan be the negative log-likelihoods of the gold tag sequences for entity mentions and event triggers (respectively) for w .",
"These terms will be included in the overall loss function of the model later.",
"Based on the tag sequences for w from the previous component, we can obtain two separate span sets for the entity mentions and event triggers in w (the golden spans are used in the training phase to avoid noise).",
"For the next computation, we first compute a representation vector for each span ( i, j ) ( 1 i j n ) in these two sets by averaging the BERT-based representation vectors for the words in this span (i.e., x i , . . . , x j ).",
"For convenience, let R ent = { e 1 , e 2 , . . . , e n ent } ( n ent = | R ent | ) and R trg = { t 1 , t 2 , . . . , t n trg } ( n trg = | R trg | ) be the sets of span representation vectors for the entity mentions and event triggers in w 1 .",
"The goal of this component is to leverage such span representation vectors to form instance representations and enrich them with instance interactions to perform necessary predictions in IE.",
"Instance Representation .",
"Prediction instances in our model amount to the specific objects that we need to predict a type for one of the four IE 1 We will also refer to entity mentions and event triggers interchangeably with their span representations e i and t i in this work.",
"tasks.",
"As such, the prediction instances for EME and ETD, called entity and trigger instances, correspond directly to the entity mentions and event triggers in R ent and R trg respectively (as we need to predict the entity types for e i R ent and the event types for t i R trg in this step).",
"Thus, we also use R ent and R trg as the sets of initial representation vectors for the entity/event instances for EME and ETD in the following.",
"Next, for RE, the prediction instances (called relation instances) involve pairs of entity mentions in R ent .",
"To obtain the initial representation vector for a relation instance, we concatenate the representation vectors of the two corresponding entity mentions, leading to the set of representation vectors rel ij for relation instances: R rel = { rel ij = [ e i , e j ] | e i , e j R ent , i < j } ( | R rel | = n ent ( n ent 1) / 2 ).",
"Finally, for EAE, we form the prediction instances (called argument instances) by pairing each event trigger in R trg with each entity mention in R ent (for the argument role predictions of the entity mentions with respect to the event triggers/mentions).",
"By concatenating the representation vectors of the paired entity mentions and event triggers, we generate the initial representation vectors arg ij for the corresponding argument instances: R arg = { arg ij = [ t i , e j ] | t i R trg , e j R ent } ( | R arg | = n trg n ent ) 2 .",
"We also use the prediction instances and their representation vectors interchangeably in this work.",
"Instance Interaction .",
"The initial representation vectors for the instances so far do not explicitly consider beneficial interactions between related instances.",
"To address this issue, we explicitly create an interaction graph between the prediction instances for the four IE tasks to connect related instances to each other.",
"This graph will be consumed by a GCN model to enrich instance representations with interaction information afterward.",
"In particular, the node set N inst in our instance interaction graph G inst = { N inst , E inst } involves all prediction instances for the four IE tasks, i.e., N inst = R ent R trg R rel R arg .",
"The edge set E inst then captures instance interactions by connecting the instance nodes in N inst that involve the same entity mentions or event triggers (i.e., two instances are related if they concern the same entity mention or event trigger).",
"As such, the edges in E inst are created as follows: 2 In our implementation, R rel and R arg are transformed into vectors of the same size with those in R ent and R trg (us-ing one-layer feed forward networks) for future computation.",
"(i) An entity instance node e i is connected to all relation instance nodes of the forms rel ij = [ e i , e j ] and rel ki = [ e k , e i ] (sharing entity mention e i ).",
"(ii) An entity instance node e j is connected to all argument instance nodes of the form arg ij = [ t i , e j ] (sharing entity mention e j ).",
"(iii) A trigger node t i is connected to all argument instance nodes of the form arg ij = [ t i , e j ] (i.e., sharing event trigger t i ).",
"GCN .",
"To enrich the representation vector for an instance in N inst with the information from the related (neighboring) nodes, we feed G inst into a GCN model (called GCN inst ).",
"For convenience, we rename the initial representation vectors of all the instance nodes in N inst by: N inst = { r 1 , . . . , r n i } ( n i = | N inst | ).",
"Also, let A inst { 0 , 1 } n i n i be the adjacency matrix of the interaction graph G inst where A instij = 1 if the instance nodes r i and r j are connected in G inst or i = j (for self-connections).",
"The interaction-enriched representation vectors for the instances in N inst are then computed by the GCN inst model: r inst 1 , . . . , r instn i = GCN inst ( A inst ; r 1 , . . . , r n i ; N i ) where N i is the number of layers for the GCN inst model.",
"Type Embedding and Prediction .",
"Finally, the enriched instance representation vectors r inst 1 , . . . , r instn i will be used to perform the predictions for the four IE tasks.",
"In particular, let t k { ent, trg, rel, arg } be the corresponding task index and y k be the ground-truth type (of the task t k ) for the prediction instance r k in N inst .",
"Also, let T = T ent T trg T rel T arg be the union of the possible entity types (in T ent for EME), event types (in T trg for ETD), relations (in T rel for RE), and argument roles (in T arg for EAE) in our problem ( y k T t k ).",
"Note that T rel and T arg contain the special types None .",
"To prepare for the type predictions and the type dependency modeling in the next steps, we associate each type in T with an embedding vector (of the same size as e i and t i ) that is initialized randomly and updated during our training process.",
"For convenience, let T = [ t 1 , . . . , t n t ] where t i is used interchangeably for both a type and its embedding vector in T ( n t is the total number of types).",
"As such, to perform the prediction for an instance r k in N inst , we compute the dot products between r instk and each type embedding vectors in T t k T to estimate the possibilities that r k has a type in T t k .",
"Afterward, these scores are normalized by the softmax function to obtain the probability distribution y k over the possible types in T t k for r k : y k = softmax ( r instk t T | t T t k T ) .",
"In the decoding phase, the predicted type y k for r k is obtained via the argmax function (greedy decoding): y k = argmax y k .",
"The negative log-likelihood over all the prediction instances is used to train the model: L type = (cid:80) n i k =1 log y k [ y k ] .",
"In this section, we aim to obtain the type dependencies across tasks and use them to supervise the model in the training process (to improve the representation vectors for IE).",
"As presented in the introduction, our motivation is to generate global dependency graphs between types of different IE tasks for each input sentence whose representations are leveraged to regularize the model during training.",
"In particular, starting with the golden types y = y 1 , y 2 , . . . , y n i and the predicted types y = y 1 , y 2 , . . . , y n i for the instance nodes in N inst , we build two dependency graphs G gold and G pred to capture the global type dependencies for the tasks (called the golden and predicted dependency graphs respectively).",
"Afterward, to supervise the training process, we seek to constrain the model so the predicted dependency graph G pred is similar to the golden graph G gold (i.e., using the dependency graphs as the bridges to inject the global type dependency knowledge in G gold into the model).",
"Dependency Graph Construction .",
"Both G gold and G pred involve the types of all the four IE tasks in T as the nodes.",
"To encode the type dependencies, the connections/edges in G gold are computed based on the golden types y = y 1 , y 2 , . . . , y n i for the instance nodes in N inst as follows:",
"(i) For each relation instance node r k = [ e i , e j ] N inst that has the golden type y k (cid:54) = None , the relation type node y k is connected to the nodes of the golden entity types for the corresponding entity mentions e i and e j (called en-tity_relation type edges ).",
"(ii) For each argument instance node r k = [ t i , e j ] that has the role type y k (cid:54) = None , the role type node y k is connected to both the node for the golden event type of t i (called event_argument type edges ) and the node for the golden entity type of e j (called entity_argument type edges ).",
"The same procedure can be applied to build the predicted dependency graph G pred based on the predicted types y = y 1 , y 2 , . . . , y n i .",
"Also, for convenience, let A gold and A pred (of size n t n t ) be the binary adjacency matrices of G gold and G pred (including the self-loops) respectively.",
"Regularization .",
"In the next step, we obtain the representation vectors for the dependency graphs G gold and G pred by feeding them into a GCN model (called GCN type ).",
"This GCN model has N t layers and uses the initial type embeddings T = [ t 1 , . . . , t n t ] as the inputs.",
"In particular, the outputs of GCN type for the two graphs involve t gold 1 , . . . , t goldn t = GCN type ( A gold ; t 1 , . . . , t n t ; N t ) and t pred 1 , . . . , t predn t = GCN type ( A pred ; t 1 , . . . , t n t ; N t ) that encode the underlying information for the type dependencies presented in G gold and G pred .",
"Finally, to promote the similarity of the type dependencies in G gold and G pred , we introduce the mean square difference between their GCN type -induced representation vectors into the overall loss function for minimization: L dep = (cid:80) n t i =1 || t goldi t predi || 22 .",
"Our final training loss is thus: L = L entspan + L trgspan + L type + L dep ( is a trade-off parameter).",
"Approximating A pred .",
"We distinguish two types of parameters in our model so far, i.e., the parameters used to compute instance representations, e.g., those in BERT and G inst (called inst ), and the parameters for type dependency regularization, i.e., those for the type embeddings t 1 , . . . , t n t and G type (called dep ).",
"As such, the current implementation only enables the training signal from L dep to back-propagate to the parameters dep and disallows L dep to influence the instance representation-related parameters inst .",
"To enrich the instance representation vectors with type dependency information, we expect L dep to be deeper integrated into the model by also contributing to inst .",
"To achieve this goal, we note that the block of back-propagation between L dep and inst is due to their only connection in the model via the adjacency matrix A pred , whose values are either one or zero.",
"As such, the values in A pred are not directly dependent on any parameter in inst , making it impossible for the back-propagation to flow.",
"To this end, we propose to approximate A pred with a new matrix A pred that directly involves inst in its values.",
"In particular, let I inst be the index set of the non-zero cells in A pred : I inst = { ( i, j ) | A predij = 1 } .",
"As the elements in I inst are determined by the indexes i 1 , . . . , i n i in T of the predicted types y 1 , y 2 , . . . , y n i (respectively), we also seek to compute the values for the approximated matrix A pred based on such indexes.",
"Accordingly, we first define the matrix B = { b ij } i,j =1",
"..n t where the element b ij at the i -th row and j -th column is set to b ij = i n t + j .",
"Here, > 0 is a large constant.",
"For each element ( i, j ) I inst , all the elements in the matrix ( B in t j ) 2 are strictly positive, except for the element at ( i, j ) , which is zero.",
"Thus, with a large value for , the matrix exp( ( B in t j ) 2 ) has the value of one at cell ( i, j ) and nearly zero at other cells.",
"Consequently, the values of A pred at the positions in I inst are close to one while those at other positions are close to zero, thus approximating our expected matrix A pred and still directly depending on the indexes i 1 , . . . , i n t .",
"Addressing the Discreteness of Indexes .",
"Even with the approximation A pred , the back-propagation still cannot flow from L dep to inst due to the block of the discrete and non-differentiable index variables i 1 , . . . , i n t .",
"To address this issue, we propose to apply the Gumbel-Softmax distribution (Jang et al., 2017) that enables the optimization of models with discrete random variables, by providing a method to approximate one-hot vectors sampled from a categorical distribution with continuous ones.",
"In particular, we first rewrite each index i k by: i k = h k c Tk , where c k is a vector whose each dimension contains the index of a type in T t k in the joint type set T , and h k is the binary one-hot vector whose dimensions correspond to the types in T t k .",
"h k is only turned on at the position corresponding to the predicted type y k T t k (indexed at i k in T ).",
"In our current implementation, y k (thus the index i k and the one-hot vector h k ) is obtained via the argmax function: y k = argmax y k , which causes the discreteness.",
"As such, the Gumbel-Softmax distribution method helps to relax argmax by approximating h k with a sample h k = h k, 1 , . . . , h k, |T tk | from the Gumbel-Softmax distribution: h k,j = exp (( log ( k,j ) + g j ) / ) (cid:80) |T tk | j (cid:48) =1 exp (( log ( k,j (cid:48) ) + g j (cid:48) ) / ) (2) where k,j = y k,j = softmax j ( r instk t T | t T t k T ) , g 1 , . . . , g |T tk | are the i.i.d samples drawn from Gumbel(0,1) distribution (Gumbel, 1948): g j = log ( log ( u j )) ( u j Uniform (0 , 1) ), and is the temperature parameter.",
"As 0 , the sample h k would become close to our expected one-hot vector h k .",
"Finally, we replace h k with the approximation h k in the computation for i k : i k = h k c Tk that directly depends on r instk and is applied in A pred .",
"This allows the gradients to flow from L dep to the parameters inst and completes the description of our model.",
"Datasets .",
"Following the prior work on joint four-task IE (Wadden et al., 2019; Lin et al., 2020), we evaluate our joint IE model (FourIE) on the ACE 2005 (Walker et al., 2006) and ERE datasets that provide annotation for entity mentions, event triggers, relations, and argument roles.",
"In particular, we use three different versions of the ACE 2005 dataset that focus on three major joint inference settings for IE:",
"(i) ACE05-R for joint inference of EME and RE,",
"(ii) ACE05-E for joint inference of EME, ETD and EAE, and",
"(iii) ACE05-E+ for joint inference of the four tasks EME, ETD, RE, and EAE.",
"ACE05-E+ is our main evaluation setting as it fits to our model design with the four IE tasks of interest.",
"For ERE, following (Lin et al., 2020), we combine the data from three datasets for English (i.e., LDC2015E29, LDC2015E68, and LDC2015E78) that are created under the Deep Exploration and Filtering of Test (DEFT) program (called ERE-EN ).",
"Similar to ACE05-E+, ERE-EN is also used to evaluate the joint models on four IE tasks.",
"four-IE datasets on Chinese and Spanish.",
"Following (Lin et al., 2020), we use the ACE 2005 dataset for the evaluation on Chinese (called ACE05-CN ) and the ERE dataset (LDC2015E107) for Spanish (called ERE-ES ).",
"To ensure a fair comparison, we adopt the same data pre-processing and splits (train/dev/test) in prior work (Lin et al., 2020) for all the datasets.",
"As such, ACE05-R, ACE05-E, ACE05-E+, and AC05-CN involve 7 entity types, 6 relation types, 33 event types, and 22 argument roles while ERE-ES and ERE-EN include 7 entity types, 5 relation types, 38 event types, and 20 argument roles.",
"The statistics for the datasets are shown in Table 1.",
"Hyper-parameters and Evaluation Criteria .",
"We fine-tune the hyper-parameters for our model using the development data.",
"The suggested values are shown in the appendix.",
"To achieve a fair comparison with (Lin et al., 2020), we employ the bert-large-cased model for the English datasets and bert-multilingual-cased model for the Chinese and Spanish datasets.",
"Finally, we follow the same evaluation script and correctness criteria for entity mentions, event triggers, relations, and argument as in prior work (Lin et al., 2020).",
"The reported results are the average performance of 5 model runs using different random seeds.",
"Performance Comparison .",
"We compare the proposed model FourIE with two prior models for joint four-task IE:",
"(i) DyGIE++ (Wadden et al., 2019): a BERT-based model with span graph propagation, and",
"(ii) OneIE (Lin et al., 2020): the current state-of-the-art (SOTA) model for joint four-task IE based on BERT and type dependency constraint at the decoding step.",
"Table 2 presents the performance (F1 scores) of the models on the test data of the English datasets.",
"Note that in the tables, the prefixes Ent, Trg, Rel, and Arg represent the extraction tasks for entity mentions, event triggers, relations, and arguments respectively while the suffixes -I and -C correspond to the identification performance (only concerning the offset correctness) and identification+classification performance (evaluating both offsets and types).",
"As can be seen from the table, FourIE is consistently better than the two baseline models (Dy-GIE++ and OneIE) across different datasets and tasks.",
"The performance improvement is significant for almost all the cases and clearly demonstrates the effectiveness of the proposed model.",
"FourIE and OneIE on the Chinese and Spanish datasets (i.e., ACE05-CN and ERE-ES).",
"In addition to the monolingual setting (i.e., trained and evaluated on the same languages), following (Lin et al., 2020), we also evaluate the models on the multilingual training settings where ACE05-CN and ERE-ES are combined with their corresponding English datasets ACE05-E+ and EAE-EN (respectively) to train the models (for the four IE tasks), and the performance is then evaluated on the test sets of the corresponding languages (i.e., ACE05-CN and ERE-ES).",
"It is clear from the table that FourIE also significantly outperforms OneIE across nearly all the different setting combinations for languages, datasets and tasks.",
"This further illustrates the portability of FourIE to different languages.",
"Effects of GCN inst and GCN type .",
"This section evaluates the contributions of the two important components in our proposed model FourIE, i.e., the instance interaction graph with GCN inst and the type dependency graph with GCN type .",
"In particular, we examine the following ablated/varied models for FourIE:",
"(i) FourIE -GCN inst : this model excludes the instance interaction graph and the GCN model GCN inst from FourIE so the initial instance representations r k are directly used to predict the types for the instances (replacing the enriched vectors r instk ),",
"(ii) FourIE -GCN type : this model eliminates the type dependency graph and the GCN model GCN type (thus the loss term L dep as well) from FourIE,",
"(iii) FourIE -GCN inst GCN type : this model removes both the instance interaction and type dependency graphs from FourIE,",
"(iv) FourIE -GCN type +TDDecode: this model also excludes GCN type ; however, it additionally applies the global type dependencies features to score the joint predictions for the beam search in the decoding step (the implementation for this beam search is inherited from (Lin et al., 2020) for a fair comparison), and",
"(v) FourIE A pred : instead of employing the approximation matrix A pred in FourIE, this model directly uses the adjacency matrix A pred in the L dep regularizer ( L dep thus does not influence the instance representation-related parameters inst ).",
"Table 4 shows the performance of the models on the development dataset of ACE05-E+ for four IE tasks.",
"The most important observation from the table is that both GCN inst and GCN type are necessary for FourIE to achieve the highest performance for the four IE tasks.",
"Importantly, replacing GCN type in FourIE with the global type dependency features for decoding (i.e., FourIE GCN type +TDDecode) as in (Lin et al., 2020) or eliminating the approximation A pred for L dep produces inferior performance, especially for relation and argument extraction.",
"This clearly demonstrates the benefits for deeply integrating knowledge from type dependencies to influence representation learning parameters with L dep for joint four-task IE.",
"Contributions of Type Dependency Edges .",
"Our type dependency graphs G gold and G pred involves three categories of edges, i.e., entity_relation, en-tity_argument, and event_argument type edges.",
"Table 5 presents the performance of FourIE (on the development data of ACE05-E+) when each of these edge categories is excluded from our type dependency graph construction.",
"dependency edges on the ACE05-E+ dev data.",
"The table clearly shows the importance of different categories of type dependency edges for FourIE as the elimination of any category would generally hurt the performance of the model.",
"In addition, we see that the contribution level of the type dependency edges intuitively varies according to the tasks of consideration.",
"For instance, entity_relation type edges are helpful mainly for entity mention, relation and argument extraction.",
"Finally, an error analysis is conducted in the appendix to provide insights about the benefits of the type dependency graphs G gold and G pred for FourIE (i.e., by comparing the outputs of FourIE and FourIE -GCN type ).",
"The early joint methods for IE have employed feature engineering to capture the dependencies between IE tasks, including Integer Linear Programming for Global Constraints (Roth and Yih, 2004; Li et al., 2011), Markov Logic Networks (Riedel et al., 2009; Venugopal et al., 2014), Structured Perceptron (Li et al., 2013, 2014; Miwa and Sasaki, 2014; Judea and Strube, 2016), and Graphical Models (Yu and Lam, 2010; Yang and Mitchell, 2016).",
"Recently, the application of deep learning has facilitated the joint modeling for IE via shared parameter mechanisms across tasks.",
"These joint models have focused on different subsets of the IE tasks, including EME and RE (Zheng et al., 2017; Katiyar and Cardie, 2017; Bekoulis et al., 2018; Fu et al., 2019; Luan et al., 2019; Sun et al., 2019; Veyseh et al., 2020b,a), event and temporal RE (Han et al., 2019), and ETD and EAE (Nguyen et al., 2016; Zhang et al., 2019; Nguyen and Nguyen, 2019).",
"However, none of these work has explored joint inference for four IE tasks EME, ETD, RE, and EAE as we do.",
"The two most related works to ours include (Wadden et al., 2019) that leverages the BERT-based information propagation via dynamic span graphs, and (Lin et al., 2020) that exploits BERT and global type dependency features to constrain the decoding step.",
"Our model is different from these works in that we introduce a novel interaction graph for instance representations for four IE tasks and a global type dependency graph to directly inject the knowledge into the training process.",
"We present a novel deep learning framework to jointly solve four IE tasks (EME, ETD, RE, and EAE).",
"Our model attempts to capture the interdependencies between instances of the four tasks and their types based on instance interaction and type dependency graphs.",
"GCN models are employed to induce representation vectors to perform type predictions for task instances and regularize the training process.",
"The experiments demonstrate the effectiveness of the proposed model, leading to SOTA performance over multiple datasets on English, Chinese, and Spanish.",
"In the future, we plan to extend the model to include more IE tasks (e.g., coreference resolution).",
"This research has been supported by the Army Research Office (ARO) grant W911NF-21-1-0112.",
"This research is also based upon work supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA Contract No. 2019-19051600006 under the Better Extraction from Text Towards Enhanced Retrieval (BETTER) Program.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ARO, ODNI, IARPA, the Department of Defense, or the U.S. Government.",
"The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.",
"This document does not contain technology or technical data controlled under either the U.S. International Traffic in Arms Regulations or the U.S. Export Administration Regulations."
] | [
"abstain",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"result",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"objective",
"other",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other"
] |
[
"Question answering ( QA ) systems for large document collections typically use pipelines that",
"(i) retrieve possibly relevant documents,",
"(ii) re-rank them,",
"(iii) rank paragraphs or other snippets of the top-ranked documents, and",
"(iv) select spans of the top-ranked snippets as exact answers.",
"Pipelines are conceptually simple, but errors propagate from one component to the next, without later components being able to revise earlier decisions.",
"We present an architecture for joint document and snippet ranking, the two middle stages, which leverages the intuition that relevant documents have good snippets and good snippets come from relevant documents.",
"The architecture is general and can be used with any neural text relevance ranker.",
"We experiment with two main instantiations of the architecture, based on POSITDRMM ( PDRMM ) and a BERT -based ranker.",
"Experiments on biomedical data from BIOASQ show that our joint models vastly outperform the pipelines in snippet retrieval, the main goal for QA , with fewer trainable parameters, also remaining competitive in document retrieval.",
"Furthermore, our joint PDRMM -based model is competitive with BERT -based models, despite using orders of magnitude fewer parameters.",
"These claims are also supported by human evaluation on two test batches of BIOASQ .",
"To test our key findings on another dataset, we modified the Natural Questions dataset so that it can also be used for document and snippet retrieval.",
"Our joint PDRMM -based model again outperforms the corresponding pipeline in snippet retrieval on the modified Natural Questions dataset, even though it performs worse than the pipeline in document retrieval.",
"We make our code and the modified Natural Questions dataset publicly available.",
"Question answering ( QA ) systems that search large document collections (Voorhees, 2001; Tsatsaro-nis",
"Tsatsaro-nis et al., 2015; Chen et al., 2017) typically use pipelines operating at gradually finer text granularities.",
"A fully-fledged pipeline includes components that",
"(i) retrieve possibly relevant documents typically using conventional information retrieval ( IR );",
"(ii) re-rank the retrieved documents employing a computationally more expensive document ranker;",
"(iii) rank the passages, sentences, or other snip-pets' of the top-ranked documents; and",
"(iv) select spans of the top-ranked snippets as exact' answers.",
"Recently, stages",
"(ii)(iv) are often pipelined neural models, trained individually (Hui et al., 2017; Pang et al., 2017; Lee et al., 2018; McDonald et al., 2018; Pandey et al., 2019; Mackenzie et al., 2020; Sekulic et al., 2020).",
"Although pipelines are conceptually simple, errors propagate from one component to the next (Hosein et al., 2019), without later components being able to revise earlier decisions.",
"For example, once a document has been assigned a low relevance score, finding a particularly relevant snippet cannot change the document's score.",
"We propose an architecture for joint document and snippet ranking, i.e., stages",
"(ii) and",
"(iii), which leverages the intuition that relevant documents have good snippets and good snippets come from relevant documents.",
"We note that modern web search engines display the most relevant snippets of the top-ranked documents to help users quickly identify truly relevant documents and answers (Sultan et al., 2016; Xu et al., 2019; Yang et al., 2019a).",
"The top-ranked snippets can also be used as a starting point for multi-document query-focused summarization, as in the BIOASQ challenge (Tsatsaro-nis et al., 2015).",
"Hence, methods that identify good snippets are useful in several other applications, apart from QA .",
"We also note that many neural models for stage",
"(iv) have been proposed, often called QA or Machine Reading Comprehension ( MRC ) models (Kadlec et al., 2016; Cui et al., 2017; Zhang et al., 2020), but they typically search for answers only in a particular, usually paragraph-sized snippet, which is given per question.",
"For QA systems that search large document collections, stages",
"(ii) and",
"(iii) are also important, if not more important, but have been studied much less in recent years, and not in a single joint neural model.",
"The proposed joint architecture is general and can be used in conjunction with any neural text relevance ranker (Mitra and Craswell, 2018).",
"Given a query and N possibly relevant documents from stage",
"(i), the neural text relevance ranker scores all the snippets of the N documents.",
"Additional neural layers re-compute the score (ranking) of each document from the scores of its snippets.",
"Other layers then revise the scores of the snippets taking into account the new scores of the documents.",
"The entire model is trained to jointly predict document and snippet relevance scores.",
"We experiment with two main instantiations of the proposed architecture, using POSIT-DRMM (McDonald et al., 2018), hereafter called PDRMM , as the neural text ranker, or a BERT -based ranker (Devlin et al., 2019).",
"We show how both PDRMM and BERT can be used to score documents and snippets in pipelines, then how our architecture can turn them into models that jointly score documents and snippets.",
"Experimental results on biomedical data from BIOASQ (Tsatsaronis et al., 2015) show the joint models vastly outperform the corresponding pipelines in snippet extraction, with fewer trainable parameters.",
"Although our joint architecture is engineered to favor retrieving good snippets (as a near-final stage of QA ), results show that the joint models are also competitive in document retrieval.",
"We also show that our joint version of PDRMM , which has the fewest parameters of all models and does not use BERT , is competitive to BERT -based models, while also outperforming the best system of BIOASQ 6 (Brokos et al., 2018) in both document and snippet retrieval.",
"These claims are also supported by human evaluation on two test batches of BIOASQ 7 (2019).",
"To test our key findings on another dataset, we modified Natural Questions (Kwiatkowski et al., 2019), which only includes questions and answer spans from a single document, so that it can be used for document and snippet retrieval.",
"Again, our joint PDRMM based model largely outperforms the corresponding pipeline in snippet retrieval on the modified Natural Questions, though it does not perform better than the pipeline in document retrieval, since the joint model is geared towards snippet retrieval, i.e., even though it is forced to extract snippets from fewer relevant documents.",
"Finally, we show that all the neural pipelines and joint models we considered improve the BM 25 ranking of traditional IR on both datasets.",
"We make our code and the modified Natural Questions publicly available.",
"1 2 Methods 2.1 Document Ranking with PDRMM Our starting point is POSIT-DRMM (McDonald et al., 2018), or PDRMM , a differentiable extension of DRMM (Guo et al., 2016) that obtained the best document retrieval results in BIOASQ 6 (Brokos et al., 2018).",
"McDonald et al. (2018) also reported it performed better than DRMM and several other neural rankers, including PACRR (Hui et al., 2017).",
"Given a query q = (cid:104) q 1 , . . . , q n (cid:105) of n query terms ( q-terms ) and a document d = (cid:104) d 1 , . . . , d m (cid:105) of m terms ( d-terms ), PDRMM computes context-sensitive term embeddings c ( q i ) and c ( d i ) from the static (e.g., WORD 2 VEC ) embeddings e ( q i ) and e ( d i ) by applying two stacked convolutional layers with trigram filters, residuals (He et al., 2016), and zero padding to q and d , respectively.",
"2 PDRMM then computes three similarity matrices S 1 , S 2 , S 3 , each of dimensions n m (Fig. 1).",
"Each element s i,j of S 1 is the cosine similarity between c ( q i ) and c ( d j ) .",
"S 2 is similar, but uses the static word embeddings e ( q i ) , e ( d j ) .",
"S 3 uses one-hot vectors for q i , d j , signaling exact matches.",
"Three row-wise pooling operators are then applied to S 1 , S 2 , S 3 : max-pooling (to obtain the similarity of the best match between the q-term of the row and any of the d-terms), average pooling (to obtain the average match), and average of k -max (to obtain the average similarity of the k best matches).",
"3 We thus obtain three scores from each row of each similarity matrix.",
"By concatenating row-wise the scores from the three matrices, we obtain a new n 9 matrix S (cid:48) (Fig. 1).",
"Each row of S (cid:48) indicates how well the corresponding q-term matched any of the d-terms, using the three different views of the terms (one-hot, static, context-aware embeddings).",
"Each row of S (cid:48) is then passed to a Multi-Layer Perceptron 1 See http://nlp.cs.aueb.gr/publications.",
"( MLP ) to obtain a single match score per q-term.",
"Each context aware q-term embedding is also concatenated with the corresponding IDF score (bottom left of Fig. 1) and passed to another MLP that computes the importance of that q-term (words with low IDF s may be unimportant).",
"Let v be the vector containing the n match scores of the q-terms, and u the vector with the corresponding n importance scores (bottom right of Fig. 1).",
"The initial relevance score of the document is r ( q, d ) = v T u .",
"Then r ( q, d ) is concatenated with four extra features : z-score normalized BM 25 (Robertson and Zaragoza, 2009); percentage of q-terms with exact match in d (regular and IDF weighted); percentage of q-term bigrams matched in d .",
"An MLP computes the final relevance r ( q, d ) from the 5 features.",
"Neural rankers typically re-rank the top N documents of a conventional IR system.",
"We use the same BM 25-based IR system as McDonald et al. (2018).",
"PDRMM is trained on triples (cid:104) q, d, d (cid:48) (cid:105) , where d is a relevant document from the top N of q , and d (cid:48) is a random irrelevant document from the top N .",
"We use hinge loss, requiring the relevance of d to exceed that of d (cid:48) by a margin.",
"Brokos et al. (2018) used the basic CNN ' ( BCNN ) of Yin et al. (2016) to score (rank) the sentences of the re-ranked top N documents.",
"The resulting pipeline, PDRMM + BCNN , had the best document and snippet results in BIOASQ 6, where snippets were sentences.",
"Hence, PDRMM + BCNN is a reasonable document and snippet retrieval baseline pipeline.",
"In another pipeline, PDRMM + PDRMM , we replace BCNN by a second instance of PDRMM that scores sentences.",
"The second PDRMM instance Figure 2: Final layers of JPDRMM and JBERT .",
"is the same as when scoring documents (Fig. 1), but the input is now the query ( q ) and a single sentence ( s ).",
"Given a triple (cid:104) q, d, d (cid:48) (cid:105) used to train the document-scoring PDRMM , the sentence-scoring PDRMM is trained to predict the true class (rele-vant, irrelevant) of each sentence in d and d (cid:48) using cross entropy loss (with a sigmoid on r ( q, s ) ).",
"As when scoring documents, the initial relevance score r ( q, s ) is combined with extra features using an MLP , to obtain r ( q, s ) .",
"The extra features are now different: character length of q and s , number of shared tokens of q and s (with/without stop-words), sum of IDF scores of shared tokens (with/without stop-words), sum of IDF scores of shared tokens divided by sum of IDF scores of q-terms, number of shared token bigrams of q and s , BM 25 score of s against the sentences of d and d (cid:48) , BM 25 score of the document ( d or d (cid:48) ) that contained s .",
"The two PDRMM instances are trained separately.",
"Given a document d with sentences s 1 , . . . , s k and a query q , the joint document/snippet ranking version of PDRMM , called JPDRMM , processes separately each sentence s i of d , producing a relevance score r ( q, s i ) per sentence, as when PDRMM scores sentences in the PDRMM + PDRMM pipeline.",
"The highest sentence score max i r ( q, s i ) is concatenated (Fig. 2) with the extra features that are used when PDRMM ranks documents, and an MLP produces the document's score.",
"4 JPDRMM then revises the sentence scores, by concatenating the score of each sentence with the document score 4 We also tried alternative mechanisms to obtain the document score from the sentence scores, including average of k -max sentence scores and hierarchical RNN s (Yang et al., 2016), but they led to no improvement.",
"and passing each pair of scores to a dense layer to compute a linear combination, which becomes the revised sentence score.",
"Notice that JPDRMM is mostly based on scoring sentences, since the main goal for QA is to obtain good snippets (almost final answers).",
"The document score is obtained from the score of the document's best sentence (and external features), but the sentence scores are revised, once the document score has been obtained.",
"We use sentence-sized snippets, for compatibility with BIOASQ , but other snippet granularities (e.g., paragraph-sized) could also be used.",
"JPDRMM is trained on triples (cid:104) q, d, d (cid:48) (cid:105) , where d, d (cid:48) are relevant and irrelevant documents, respectively, from the top N of query q , as in the original PDRMM ; the ground truth now also indicates which sentences of the documents are relevant or irrelevant, as when training PDRMM to score sentences in PDRMM + PDRMM .",
"We sum the hinge loss of d and d (cid:48) and the cross-entropy loss of each sentence.",
"5 We also experiment with a JPDRMM version that uses a pre-trained BERT model (Devlin et al., 2019) to obtain input token embeddings (of wordpieces) instead of the more conventional pre-trained (e.g., WORD 2 VEC ) word embeddings that JPDRMM uses otherwise.",
"We call it BJPDRMM if BERT is fine-tuned when training JPDRMM , and BJPDRMM-NF if BERT is not fine-tuned.",
"In another variant of BJPDRMM , called BJPDRMM-ADAPT , the input embedding of each token is a linear combination of all the embeddings that BERT produces for that token at its different Transformer layers.",
"The weights of the linear combination are learned via backprop-agation.",
"This allows BJPDRMM-ADAPT to learn which BERT layers it should mostly rely on when obtaining token embeddings.",
"Previous work has reported that representations from different BERT layers may be more appropriate for different tasks (Rogers et al., 2020).",
"BJPDRMM-ADAPT-NF is the same as BJPDRMM-ADAPT , but BERT is not fine-tuned; the weights of the linear combination of embeddings from BERT layers are still learned.",
"The BJPDRMM model we discussed above and its variants are essentially still JPDRMM , which in turn invokes the PDRMM ranker (Fig. 1, 2); BERT is used only to obtain token embeddings that are fed",
"5 Additional experiments with JPDRMM , reported in the appendix, indicate that further performance gains are possible by tuning the weights of the two losses.",
"to JPDRMM .",
"Instead, in this subsection we use BERT as a ranker, replacing PDRMM .",
"For document ranking alone (when not cosider-ing snippets), we feed BERT with pairs of questions and documents (Fig. 3).",
"BERT 's top-layer embedding of the classification' token [ CLS ] is concatenated with external features (the same as when scoring documents with PDRMM , Section 2.1), and a dense layer again produces the document's score.",
"We fine-tune the entire model using triples (cid:104) q, d, d (cid:48) (cid:105) with a hinge loss between d and d (cid:48) , as when training PDRMM to score documents.",
"6 Our two pipelines that use BERT for document ranking, BERT + BCNN and BERT + PDRMM , are the same as PDRMM + BCNN and PDRMM + PDRMM (Section 2.2), respectively, but use the BERT ranker (Fig. 3) to score documents, instead of PDRMM .",
"The joint JBERT model is the same as JPDRMM , but uses the BERT ranker (Fig. 3), now applied to sentences, instead of PDRMM (Fig. 1), to obtain the initial sentence scores.",
"The top layers of Fig. 2 are then used, as in all joint models, to obtain the document score from the sentence scores and revise the sentence scores.",
"Similarly to BJPDRMM , we also experimented with variations of JBERT , which do not fine-tune the parameters of BERT ( JBERT-NF ), use a linear combination (with trainable weights) of the [ CLS ] embeddings from all the BERT layers ( JBERT-ADAPT ), or both ( JBERT-ADAPT-NF ).",
"We include a BM 25+ BM 25 pipeline to measure the improvement of the proposed models on conventional IR engines.",
"This pipeline uses the question 6 We use the pre-trained uncased BERT BASE of Devlin et al. (2019).",
"The documents' of the BIOASQ dataset are concatenated titles and abstracts.",
"Most question-document pairs do not exceed BERT 's max.",
"length limit of 512 wordpieces.",
"If they do, we truncate documents.",
"The same approach could be followed in the modified Natural Questions dataset, where documents' are Wikipedia paragraphs, but we did not experiment with BERT -based models on that dataset.",
"as a query to the IR engine and selects the N d documents with the highest BM 25 scores.",
"7 The N d documents are then split into sentences and BM 25 is re-computed, this time over all the sentences of the N d documents, to retrieve the N s best sentences.",
"BioASQ data and setup Following McDonald et al. (2018) and Brokos et al. (2018), we experiment with data from BIOASQ (Tsatsaronis et al., 2015), which provides English biomedical questions, relevant documents from MEDLINE / PUBMED 8 , and relevant snippets (sentences), prepared by biomedical experts.",
"This is the only previous large-scale IR dataset we know of that includes both gold documents and gold snippets.",
"We use the BIOASQ 7 (2019) training dataset, which contains 2,747 questions, with 11 gold documents and 14 gold snippets per question on average.",
"We evaluate on test batches 15 (500 questions in total) of BIOASQ 7. 9 We measure Mean Average Precision ( MAP ) (Manning et al., 2008) for document and snippet retrieval, which are the official BIOASQ evaluation measures.",
"The document collection contains approx.",
"18 M articles (concatenated titles and abstracts only, discarding articles with no abstracts) from the MEDLINE / PUBMED baseline' 2018 dataset.",
"In PDRMM and BCNN , we use the biomedical WORD 2 VEC embeddings of McDonald et al. (2018).",
"We use the GALAGO 10 IR engine to obtain the top N = 100 documents per query.",
"After re-ranking, we return N d = 10 documents and N s = 10 sentences, as required by BIOASQ .",
"We train using Adam (Kingma and Ba, 2015).",
"Hyper-parameters were tuned on held-out validation data.",
"Natural Questions data and setup Even though there was no other large-scale IR dataset providing multiple gold documents and snippets per question, we needed to test our best models on a second dataset, other than BIOASQ .",
"Therefore we modified the Natural Questions dataset (Kwiatkowski et al., 2019) to a format closer to BIOASQ 's.",
"Each instance of Natural Questions consists of an HTML 7 In each experiment, the same IR engine and BM 25 hyper-parameters are used in all other methods.",
"All BM 25 hyperparameters are tuned on development data.",
"8 https://www.ncbi.nlm.nih.gov/pubmed 9 BIOASQ 8 (2020) was ongoing during this work, hence we could not use its data for comparisons.",
"document of Wikipedia and a question.",
"The answer to the question can always be found in the document as if a perfect retrieval engine were used.",
"A short span of HTML source code is annotated by humans as a short answer' to the question.",
"A longer span of HTML source code that includes the short answer is also annotated, as a long answer'.",
"The long answer is most commonly a paragraph of the Wikipedia page.",
"In the original dataset, more than 300,000 questions are provided along with their corresponding Wikipedia HTML documents, short answer and long answer spans.",
"We modified Natural Questions to fit the BIOASQ setting.",
"From every Wikipedia HTML document in the original dataset, we extracted the paragraphs and indexed each paragraph separately to an ElasticSearch 11 index, which was then used as our retrieval engine.",
"We discarded all the tables and figures of the HTML documents and any question that was answered by a paragraph containing a table.",
"For every question, we apply a query to our retrieval engine and retrieve the first N = 100 paragraphs.",
"We treat each paragraph as a document, similarly to the BIOASQ setting.",
"For each question, the gold (correct) documents are the paragraphs (at most two per question) that were included in the long answers of the original dataset.",
"The gold snippets are the sentences (at most two per question) that overlap with the short answers of the original dataset.",
"We discard questions for which the retrieval engine did not manage to retrieve any of the gold paragraphs in its top 100 paragraphs.",
"We ended up with 110,589 questions and 2,684,631 indexed paragraphs.",
"Due to lack of computational resources, we only use 4,000 questions for training, 400 questions for development, and 400 questions for testing, but we make the entire modified Natural Questions dataset publicly available.",
"Hyper-parameters were again tuned on held-out validation data.",
"All other settings were as in the BIOASQ experiments.",
"BioASQ results Table 1 reports document and snippet MAP scores on the BIOASQ dataset, along with the trainable parameters per method.",
"For completeness, we also show recall at 10 scores, but we base the discussion below on MAP , the official measure of BIOASQ , which also considers the ranking of the 10 documents and snippets BIOASQ allows participants to return.",
"The Oracle re-ranks the N 11 www.elastic.co/products/elasticsearch Method Params Doc.",
"= 100 documents (or their snippets) that BM 25 retrieved, moving all the relevant documents (or snippets) to the top.",
"Sentence PDRMM is an ablation of JPDRMM without the top layers (Fig. 2); each sentence is scored using PDRMM , then each document inherits the highest score of its snippets.",
"PDRMM + BCNN and PDRMM + PDRMM use the same document ranker, hence the document MAP of these two pipelines is identical (7.47).",
"However, PDRMM + PDRMM outperforms PDRMM + BCNN in snippet MAP (9.16 to 5.67), even though PDRMM has much fewer trainable parameters than BCNN , confirming that PDRMM can also score sentences and is a better sentence ranker than BCNN .",
"PDRMM + BCNN was the best system in BIOASQ 6 for both documents and snippets, i.e., it is a strong baseline.",
"Replacing PDRMM by BERT for document ranking in the two pipelines ( BERT + BCNN and BERT + PDRMM ) increases the document MAP by 1.32 points (from 7.47 to 8.79) with a marginal increase in snippet MAP for BERT + PDRMM (9.16 to 9.63) and a slightly larger increase for BERT + BCNN (5.67 to 6.07), at the expense of a massive increase in trainable parameters due to BERT (and computational cost to pre-train and fine-tune BERT ).",
"We were unable to include a BERT + BERT pipeline, which would use a second BERT ranker for sentences, with a total of approx.",
"220M trainable parameters, due to lack of computational resources.",
"The main joint models ( JPDRMM , BJPDRMM , JBERT ) vastly outperform the pipelines in snippet extraction, the main goal for QA (obtaining 15.72, 16.82, 16.29 snippet MAP , respectively), though their document MAP is slightly lower (6.69, 7.59, 7.93) compared to the pipelines (7.47, 8.79), but still competitive.",
"This is not surprising, since the joint models are geared towards snippet retrieval (they directly score sentences, document scores are obtained from sentence scores).",
"Human inspection of the retrieved documents and snippets, discussed below (Table 2), reveals that the document MAP of JPDRMM is actually higher than that of the best pipeline ( BERT + PDRMM ), but is penalized in Table 1 because of missing gold documents.",
"JPDRMM , which has the fewest parameters of all neural models and does not use BERT at all, is competitive in snippet retrieval with models that employ BERT .",
"More generally, the joint models use fewer parameters than comparable pipelines (see the zones of Table 1).",
"Not fine-tuning BERT (-NF variants) leads to a further dramatic decrease in trainable parameters, at the expense of slightly lower document and snippet MAP (7.59 to 6.84, and 16.82 to 15.77, respectively, for BJPDRMM , and similarly for JBERT ).",
"Using linear combinations of token embeddings from all BERT layers (-ADAPT variants) harms both document and snippet MAP when fine-tuning BERT , but is beneficial in most cases when not fine-tuning BERT (-NF ).",
"The snippet MAP of BJPDRMM-NF increases from 15.77 to 17.35, and document MAP increases from 6.84 to 7.42.",
"A similar increase is observed in the snippet MAP of JBERT-NF (15.99 to 16.53), but MAP decreases (7.90 to 7.84).",
"In the second and third result zones of Table 1, we underline the results of the best pipelines, the results of JPDRMM , and the results of the best BJPDRMM and JBERT variant.",
"In each zone and column, the differences between the underlined MAP scores are statistically significant ( p 0 . 01 ); we used single-tailed Approximate Randomization (Dror et al., 2018), 10k iterations, randomly swapping in each iteration the rankings of 50% of queries.",
"Removing the top layers of JPDRMM (Sentence PDRMM ), clearly harms performance for both documents and snippets.",
"The oracle scores indicate there is still scope for improvements in both documents and snippets.",
"BioASQ results after expert inspection At the end of each BIOASQ annual contest, the biomedical experts who prepared the questions and their gold documents and snippets inspect the responses of the participants.",
"If any of the documents and snippets returned by the participants are judged relevant to the corresponding questions, they are added to the gold responses.",
"This process enhances the gold responses and avoids penalizing participants for responses that are actually relevant, but had been missed by the experts in the initial gold responses.",
"However, it is unfair to use the post-contest enhanced gold responses to compare systems that participated in the contest to systems that did not, because the latter may also return documents and snippets that are actually relevant and are not included in the gold data, but the experts do not see these responses and they are not included in the gold ones.",
"The results of Table 1 were computed on the initial gold responses of BIOASQ 7, before the post-contest revision, because not all of the methods of that table participated in BIOASQ 7. 12 In Table 2, we show results on the revised post-contest gold responses of BIOASQ 7, for those of our methods that participated in the challenge.",
"We show results on test batches 4 and 5 only (out of 5 batches in total), because these were the only two batches were all three of our methods participated together.",
"Each batch comprises 100 questions.",
"We also show the best results (after inspection) of our competitors in BIOASQ 7, for the same batches.",
"A first striking observation in Table 2 is that all results improve substantially after expert inspection, i.e., all systems retrieved many relevant documents and snippets the experts had missed.",
"Again, the two joint models ( JPDRMM , BJPDRMMNF ) vastly outperform the BERT + PDRMM pipeline 12 Results without expert inspection can be obtained at any time, using the BIOASQ evaluation platform.",
"Results with expert inspection can only be obtained during the challenge.",
"in snippet MAP .",
"As in Table 1, before expert inspection the pipeline has slightly better document MAP than the joint models.",
"However, after expert inspection JPDRMM exceeds the pipeline in document MAP by almost two points.",
"BJPDRMM-NF performs two points better than JPDRMM in snippet MAP after expert inspection, though JPDRMM performs two points better in document MAP .",
"After inspection, the document MAP of BJPDRMM-NF is also very close to the pipeline's.",
"Table 2 confirms that JPDRMM is competitive with models that use BERT , despite having the fewest parameters.",
"All of our methods clearly outperformed the competition.",
"Natural Questions results Table 3 reports results on the modified Natural Questions dataset.",
"We experiment with the best pipeline and joint model of Table 1 that did not use BERT (and are computationally much cheaper), i.e., PDRMM + PDRMM and JPDRMM , comparing them to the more conventional BM 25+ BM 25 baseline.",
"Since there are at most two relevant documents and snippets per question in this dataset, we measure Mean Reciprocal Rank ( MRR ) (Manning et al., 2008), and Recall at top 1 and 2.",
"Both PDRMM + PDRMM and JPDRMM clearly outperform the BM 25+ BM 25 pipeline in both document and snippet retrieval.",
"As in Table 1, the joint JPDRMM model outperforms the PDRMM + PDRMM pipeline in snippet retrieval, but the pipeline performs better in document retrieval.",
"Again, this is unsurprising, since the joint models are geared towards snippet retrieval.",
"We also note that JPDRMM uses half of the trainable parameters of PDRMM + PDRMM (Table 1).",
"No comparison to previous work that used the original Natural Questions is possible, since the original dataset provides a single document per query (Section 3.1).",
"Neural document ranking (Guo et al., 2016; Hui et al., 2017; Pang et al., 2017; Hui et al., 2018; McDonald et al., 2018) only recently managed to improve the rankings of conventional IR ; see Lin (2019) for caveats.",
"Document or passage ranking models based on BERT have also been proposed, with promising results, but most use only simplistic task-specific layers on top of BERT (Yang et al., 2019b; Nogueira and Cho, 2019), similar to our use of BERT for document scoring (Fig. 3).",
"An exception is the work of MacAvaney et al. (2019), who explored combining ELMO (Peters et al., 2018) and BERT (Devlin et al., 2019) with complex neu-Before expert inspection After expert inspection Method Document MAP Snippet MAP Document MAP Snippet MAPBERT + PDRMM 7.29 7.58 14.86 15.61 JPDRMM 5.16 12.45 16.55 21.98 BJPDRMM-NF 6.18 13.89 14.65 23.96 Best BIOASQ 7 competitor n/a n/a 13.18 14.98 Table 2: Document and snippet MAP (%) on BIOASQ 7 test batches 4 and 5 before and after post-contest expert inspection of system responses, for methods that participated in BIOASQ 7. We also show the results (after inspection) of the best other participants of BIOASQ 7 for the same batches.",
"ral IR models, namely PACRR (Hui et al., 2017), DRMM (Guo et al., 2016), KNRM (Dai et al., 2018), CONVKNRM (Xiong et al., 2017), an approach that we also explored here by combining BERT with PDRMM in BJPDRMM and JBERT .",
"However, we retrieve both documents and snippets, whereas MacAvaney et al. (2019) retrieve only documents.",
"Models that directly retrieve documents by indexing neural document representations, rather than re-ranking documents retrieved by conventional IR , have also been proposed (Fan et al., 2018; Ai et al., 2018; Khattab and Zaharia, 2020), but none addresses both document and snippet retrieval.",
"Yang et al. (2019a) use BERT to encode, index, and directly retrieve snippets, but do not consider documents; indexing snippets is also computationally costly.",
"Lee et al. (2019) propose a joint model for direct snippet retrieval (and indexing) and answer span selection, again without retrieving documents.",
"No previous work combined document and snippet retrieval in a joint neural model.",
"This may be due to existing datasets, which do not provide both gold documents and gold snippets, with the exception of BIOASQ , which is however small by today's standards (2.7k training questions, Section 3.1).",
"For example, Pang et al. (2017) used much larger clickthrough datasets from a Chinese search engine, as well as datasets from the 2007 and 2008 TREC Million Query tracks (Qin et al., 2010), but these datasets do not contain gold snippets.",
"SQUAD (Rajpurkar et al., 2016) and SQUAD v.2 (Ra-jpurkar et al., 2018) provide 100k and 150k questions, respectively, but for each question they require extracting an exact answer span from a single given Wikipedia paragraph; no snippet retrieval is performed, because the relevant (paragraph-sized) snippet is given.",
"Ahmad et al. (2019) provide modified versions of SQUAD and Natural Questions, suitable for direct snippet retrieval, but do not consider document retrieval.",
"SearchQA (Dunn et al., 2017) provides 140k questions, along with 50 snippets per question.",
"The web pages the snippets were extracted from, however, are not included in the dataset, only their URL s, and crawling them may produce different document collections, since the contents of web pages often change, pages are removed etc.",
"MS-MARCO (Nguyen et al., 2016) was constructed using 1M queries extracted from Bing's logs.",
"For each question, the dataset includes the snippets returned by the search engine for the top-10 ranked web pages.",
"However the gold answers to the questions are not spans of particular retrieved snippets, but were freely written by humans after reading the returned snippets.",
"Hence, gold relevant snippets (or sentences) cannot be identified, making this dataset unsuitable for our purposes.",
"Our contributions can be summarized as follows: (1) We proposed an architecture to jointly rank documents and snippets with respect to a question, two particularly important stages in QA for large document collections; our architecture can be used with any neural text relevance model.",
"(2) We instantiated the proposed architecture using a recent neural relevance model ( PDRMM ) and a BERT based ranker.",
"(3) Using biomedical data (from BIOASQ ), we showed that the two resulting joint models ( PDRMM -based and BERT -based) vastly outperform the corresponding pipelines in snippet retrieval, the main goal in QA for document collections, using fewer parameters, and also remaining competitive in document retrieval.",
"(4) We showed that the joint model ( PDRMM -based) that does not use BERT is competitive with BERT -based models, outperforming the best BIOASQ 6 system; our joint models ( PDRMMand BERT -based) also outperformed all BIOASQ 7 competitors.",
"(5) We provide a modified version of the Natural Questions dataset, suitable for document and snippet retrieval.",
"(6) We showed that our joint PDRMM -based model also largely outperforms the corresponding pipeline on open-domain data (Natural Questions) in snippet retrieval, even though it performs worse than the pipeline in document retrieval.",
"(7) We showed that all the neural pipelines and joint models we considered improve the traditional BM 25 ranking on both datasets.",
"(8) We make our code publicly available.",
"We hope to extend our models and datasets for stage",
"(iv), i.e., to also identify exact answer spans within snippets (paragraphs), similar to the answer spans of SQUAD (Rajpurkar et al., 2016, 2018).",
"This would lead to a multi-granular retrieval task, where systems would have to retrieve relevant documents, relevant snippets, and exact answer spans from the relevant snippets.",
"BIOASQ already includes this multi-granular task, but exact answers are provided only for factoid questions and they are freely written by humans, as in MS-MARCO , with similar limitations.",
"Hence, appropriately modified versions of the BIOASQ datasets are needed.",
"We thank Ryan McDonald for his advice in stages of this work.",
"George Brokos, Polyvios Liosis, Ryan McDonald, Dimitris Pappas, and Ion Androutsopoulos.",
"2018.",
"AUEB at BioASQ 6: Document and Snippet Retrieval.",
"In Proceedings of the 6th BioASQ Workshop , pages 3039, Brussels, Belgium.",
"Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes.",
"2017.",
"Reading Wikipedia to answer open-domain questions.",
"In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 1870 1879, Vancouver, Canada.",
"Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu.",
"2017.",
"Attention-over-Attention Neural Networks for Reading Comprehension.",
"In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol-ume 1: Long Papers) , pages 593602, Vancouver, Canada.",
"Zhuyun Dai, Chenyan Xiong, Jamie Callan, and Zhiyuan Liu.",
"2018.",
"Convolutional neural networks for soft-matching n-grams in ad-hoc search.",
"In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining , pages 126 134, Marina Del Rey, CA.",
"Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Re-ichart.",
"2018.",
"The Hitchhiker's Guide to Testing Statistical Significance in Natural Language Processing.",
"In Proceedings of the 56th Annual Meeting of the ACL (Volume 1: Long Papers) , pages 13831392.",
"Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur G uney, Volkan Cirik, and Kyunghyun Cho.",
"2017.",
"SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine.",
"ArXiv , abs/1704.05179.",
"Yixing Fan, Jiafeng Guo, Yanyan Lan, Jun Xu, Chengx-iang Zhai, and Xueqi Cheng.",
"2018.",
"Modeling Diverse Relevance Patterns in Ad-Hoc Retrieval.",
"In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval ."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"objective",
"method",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"method",
"result",
"result",
"method",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"objective",
"objective",
"result",
"method",
"result",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Decipherment of historical ciphers is a challenging problem.",
"The language of the target plaintext might be unknown, and ciphertext can have a lot of noise.",
"State-of-the-art decipherment methods use beam search and a neural language model to score candidate plaintext hypotheses for a given cipher, assuming the plaintext language is known.",
"We propose an end-to-end multilingual model for solving simple substitution ciphers.",
"We test our model on synthetic and real historical ciphers and show that our proposed method can decipher text without explicit language identification while still being robust to noise.",
"Libraries and archives have many enciphered documents from the early modern period.",
"Example documents include encrypted letters, diplomatic correspondences, and books from secret societies (Figure 1).",
"Previous work has made historical cipher collections available for researchers (Petters-son and Megyesi, 2019; Megyesi et al., 2020).",
"Decipherment of classical ciphers is an essential step to reveal the contents of those historical documents.",
"In this work, we focus on solving 1:1 substitution ciphers.",
"Current state-of-the-art methods use beam search and a neural language model to score candidate plaintext hypotheses for a given cipher (Kambhatla et al., 2018).",
"However, this approach assumes that the target plaintext language is known.",
"Other work that both identifies language and deciphers relies on a brute-force guess-and-check strategy (Knight et al., 2006; Hauer and Kondrak, 2016).",
"We ask: Can we build an end-to-end model that deciphers directly without relying on a separate language ID step?",
"The contributions of our work are: We propose an end-to-end multilingual decipherment model that can solve 1:1 substitution ciphers without explicit plaintext language identification, which we demonstrate on ciphers of 14 different languages.",
"We conduct extensive testing of the proposed method in different realistic decipherment conditions; different cipher lengths, no-space ciphers, and ciphers with noise, and demonstrate that our model is robust to these conditions.",
"We apply our model on synthetic ciphers as well as on the Borg cipher, a real historical cipher.",
"1 We show that our multilingual model can crack the Borg cipher using the first 256 characters of the cipher.",
"Decipherment conditions vary from one cipher to another.",
"For example, some cleartext might be found along with the encrypted text, which gives a hint to the plaintext language of the cipher.",
"In other cases, called known-plaintext attacks , some decoded material is found, which can be exploited to crack the rest of the encoded script.",
"However, in a ciphertext-only attack , the focus of this paper, the cryptanalyst only has access to the ciphertext.",
"This means that the encipherment method, the plaintext language, and the key are all unknown.",
"In this paper, we focus on solving 1:1 substitution ciphers.",
"We follow Nuhn et al. (2013) and Kambhatla et al. (2018) and use machine translation notation to formulate our problem.",
"We denote the ciphertext as f N 1 = f 1 . . . f j . . . f N and the plaintext as e M 1 = e 1 . . . e i . . . e M .",
"2 In a 1:1 substitution cipher , plaintext is encrypted into a ciphertext by replacing each plaintext character with a unique substitute according 1 https://cl.lingfil.uu.se/~bea/borg/ 2 Unless there is noise or space restoration, N = M ; see Sections 5.4 and 5.2.",
"3",
"to a substitution table called the key .",
"For example: the plaintext word doors would be enciphered to KFFML using the substitution table: Cipher Plain K d F o M r L s The decipherment goal is to recover the plaintext given the ciphertext.",
"Inspired by character-level neural machine translation (NMT), we view decipherment as a sequence-to-sequence translation task.",
"The motivation behind using a sequence-to-sequence model is: The model can be trained on multilingual data (Gao et al., 2020), making it potentially possible to obtain end-to-end multilingual decipherment without relying on a separate language ID step.",
"Due to transcription challenges of historical ciphers (Section 5.4), ciphertext could be noisy.",
"We would like the model to have the ability to recover from that noise by inserting, deleting, or substituting characters while generating plaintext.",
"Sequence-to-sequence models seem to be good candidates for this task.",
"To cast decipherment as a supervised translation task, we need training data, i.e. pairs of < f N 1 , e M 1 > to train on.",
"We can create this data using randomly generated substitution keys (Figure 2a).",
"We can then train a character-based sequence-to-sequence decipherment model and evaluate it on held-out text which is also encrypted with (different) randomly generated substitution keys.",
"However, if we attempt this experiment using the Transformer model described in Section 3.3, we get abysmal results (see Section 5.1 for scoring details).",
"Increasing the amount of training data won't help; there are 26!",
"4 10 26 possible keys for English ciphers, and even if every key is represented, most of the training data will still be encoded with keys that are not used to encode the test data.",
"In fact, since each training example uses a different key, we cannot assume that a character type has any particular meaning.",
"The fundamental assumption behind embeddings is therefore broken.",
"In the next section, we describe one way to overcome these challenges.",
"To address the aforementioned challenges, we employ a commonly used technique in cryptanalysis called frequency analysis .",
"Frequency analysis is attributed to the great polymath, Al-Kindi (801-873 C.E.) (Dooley, 2013).",
"This technique has been used in previous decipherment work (Hauer and Kondrak, 2016; Kambhatla et al., 2018).",
"It is based on the fact that in a given text, letters and letter combinations (n-grams) appear in varying frequencies, and that the character frequency distribution is roughly preserved in any sample drawn from a given language.",
"So, in different pairs of < f N 1 , e M 1 >, we expect the frequency distribution of characters to be similar.",
"To encode that information, we re-map each ciphertext character to a value based on its frequency rank (Figure 2b).",
"This way, we convert any ciphertext to a frequency-encoded cipher.",
"Intuitively, by frequency encoding, we are reducing the number of possible substitution keys (assuming frequency rank is roughly preserved across all ciphers from a given language).",
"This is only an approximation, but it helps restore the assumption that there is a coherent connection between a symbol and its type embedding.",
"For example, if the letters e and i",
"(b) Input: Example ciphers encoded according to frequency ranks in descending order.",
"Output: Plaintext in target language.",
"are the most frequent characters in English, then in any 1:1 substitution cipher, they will be encoded as 0 or 1 instead of a randomly chosen character.",
"We follow the character-based NMT approach in Gao et al. (2020) and use the Transformer model (Vaswani et al., 2017) for our decipherment problem.",
"The Transformer is an attention-based encoder-decoder model that has been widely used in the NLP community to achieve state-of-the-art performance on many sequence modeling tasks.",
"We use the standard Transformer architecture, which consists of six encoder layers and six decoder layers as described in Gao et al. (2020).",
"For training, we create 1:1 substitution ciphers for 14 languages using random keys.",
"For English, we use English Gigaword (Parker et al., 2011).",
"We scrape historical text from Project Gutenberg for 13 other languages, namely: Catalan, Danish, Dutch, Finnish, French, German, Hungarian, Italian, Latin, Norwegian, Portuguese, Spanish, and Swedish.",
"4 Table 1 summarizes our datasets.",
"Following previous literature (Nuhn et al., 2013; Aldarrab, 2017; Kambhatla et al., 2018), we lowercase all characters and remove all non-alphabetic and non-space symbols.",
"We make sure ciphers do not end in the middle of a word.",
"We strip accents for languages other than English.",
"To make our experiments comparable to previous work (Nuhn et al., 2013; Kambhatla et al., 2018),",
"we create test ciphers from the English Wikipedia article about History.",
"5 We use this text to create ciphers of length 16, 32, 64, 128, and 256 characters.",
"We generate 50 ciphers for each length.",
"We follow the same pre-processing steps to create training data.",
"We carry out four sets of experiments to study the effect of cipher length, space encipher-ment/removal, unknown plaintext language, and transcription noise.",
"Finally, we test our models on a real historical cipher, whose plaintext language was not known until recently.",
"As an evaluation metric, we follow previous literature (Kambhatla et al., 2018) and use Symbol Error Rate (SER).",
"SER is the fraction of incorrect symbols in the deciphered text.",
"For space restora-tion experiments (Section 5.2), we use Translation Edit Rate (TER) (Snover et al., 2006), but on the 5 https://en.wikipedia.org/wiki/History character level.",
"where possible edits include the insertion, deletion, and substitution of single characters.",
"When the ciphertext and plaintext have equal lengths, SER is equal to TER.",
"We use FAIRSEQ to train our models (Ott et al., 2019).",
"We mostly use the same hyperparameters as Gao et al. (2020) for character NMT, except that we set the maximum batch size to 10K tokens and use half precision floating point computation for faster training.",
"The model has about 44M parameters.",
"Training on a Tesla V100 GPU takes about 110 minutes per epoch.",
"We train for 20 epochs.",
"Decoding takes about 400 character tokens/s.",
"We use a beam size of 100.",
"Unless otherwise stated, we use 2M example ciphers to train, 3K ciphers for tuning, and 50 ciphers for testing in all experiments.",
"We report the average SER on the 50 test ciphers of each experiment.",
"We first experiment with ciphers of length 256 using the approach described in Section 3.1 (i.e. we train a Transformer model on pairs of < f N 1 , e M 1 > without frequency encoding).",
"As expected, the model is not able to crack the 50 test ciphers, resulting in an SER of 71.75%.",
"For the rest of the experiments in this paper, we use the frequency encoding method described in Section 3.2.",
"Short ciphers are more challenging than longer ones.",
"Following previous literature, we report results on different cipher lengths using our method.",
"Table 2 shows decipherment results on ciphers of length 16, 32, 64, 128, and 256.",
"For the 256 length ciphers, we use the aforementioned 2M train and 3K development splits.",
"For ciphers shorter than 256 characters, we increase the number of examples such that the total number of characters remains nearly constant, at about 512M characters.",
"We experiment with training five different models (one for each length) and training a single model on ciphers of mixed lengths.",
"In the latter case, we also use approx.",
"512M characters, divided equally among different lengths.",
"The results in Table 2 show that our model achieves comparable results to the state-of-the-art model of Kambhatla et al. (2018) on longer ciphers, including perfect decipherment for ciphers of length 256.",
"The table also shows that our method is more accurate than Kambhatla et al. (2018) for shorter, more difficult ciphers of lengths 16 and 32.",
"In addition, our method provides the ability to train on multilingual data, which we use to attack ciphers with an unknown plaintext language as described in Section 5.3.",
"The inclusion of white space between words makes decipherment easier because word boundaries can give a strong clue to the cryptanalyst.",
"In many historical ciphers, however, spaces are hidden.",
"For example, in the Copiale cipher (Figure 1a), spaces are enciphered with special symbols just like other alphabetic characters (Knight et al., 2011).",
"In other ciphers, spaces might be omitted from the plain text before enciphering, as was done in the Zodiac-408 cipher (Nuhn et al., 2013).",
"We test our method in four scenarios:",
"1. Ciphers with spaces (comparable to Kambhatla et al. (2018)).",
"2. Ciphers with enciphered spaces.",
"In this case, we treat space like other cipher characters during frequency encoding as described in Section 3.2.",
"3. No-space ciphers.",
"We omit spaces in both (source and target) sides.",
"4. No-space ciphers with space recovery.",
"We omit spaces from source but keep them on the target side.",
"The goal here is to train the model to restore spaces along with the decipherment.",
"Table 3 shows results for each of the four scenarios on ciphers of length 256.",
"During decoding, we force the model to generate tokens to match source length.",
"Results show that the method is robust to both enciphered and omitted spaces.",
"In scenario 4, where the model is expected to generate spaces and thus the output length differs from the input length, we limit the output to exactly 256 characters, but we allow the model freedom to insert spaces where it sees fit.",
"The model generates spaces in accurate positions overall, leading to a TER of 1.88%.",
"While combing through libraries and archives, researchers have found many ciphers that are not accompanied with any cleartext or keys, leaving the plaintext language of the cipher unknown (Megyesi",
"et al., 2020).",
"To solve that problem, we train a single multilingual model on the 14 different languages described in Section",
"4. We train on a total of 2.1M random ciphers of length 256 (divided equally among all languages).",
"We report results as the number of training languages increases while keeping the total number of 2.1M training examples fixed (Table 4).",
"Increasing the number of languages negatively affects performance, as we expected.",
"However, our experiments show that the 14-language model is still able to decipher 700 total test ciphers with an average SER of 0.68%.",
"Since we are testing on 256-character ciphers, this translates to no more than two errors per cipher on average.",
"Real historical ciphers can have a lot of noise.",
"This noise can come from the natural degradation of historical documents, human mistakes during a manual transcription process, or misspelled words by the author, as in the Zodiac-408 cipher.",
"Noise can also come from automatically transcribing historical ciphers using Optical Character Recognition (OCR) techniques (Yin et al., 2019).",
"It is thus crucial to have a robust decipherment model that can still crack ciphers despite the noise.",
"Hauer et al. (2014) test their proposed method on noisy ciphers created by randomly corrupting log 2 ( N ) of the ciphertext characters.",
"However, automatic transcription of historical documents is very challenging and can introduce more types of noise, including the addition and deletion of some characters during character segmentation (Yin et al., 2019).",
"We test our model on three types of random noise: insertion, deletion, and substitution.",
"We experiment with different noise percentages for ciphers of length 256 (Table 5).",
"We report the results of training (and testing) on ciphers with only substitution noise and ciphers that have all three types of noise (divided equally).",
"We experimentally find that training the models with 10% noise gives the best overall accuracy, and we use those models to get the results in Table",
"5. Our method is able to decipher with up to 84% accuracy on ciphers with 20% of random insertion, deletion, and substitution noise.",
"Figure 3 shows an example output for a cipher with 15% noise.",
"The model recovers most of the errors, resulting in a TER of 5.86%.",
"One of the most challenging noise scenarios, for example, is the deletion of the last two characters from the word its.",
"The model output the word i, which is a valid English word.",
"Of course, the more noise there is, the harder it is for the model to recover due to error accumulation.",
"The Borg cipher is a 400-page book digitized by the Biblioteca Apostolica Vaticana (Figure 1b).",
"6 The first page of the book is written in Arabic script, while the rest of the book is enciphered using astrological symbols.",
"The Borg cipher was first automatically cracked by Aldarrab (2017) using the noisy-channel framework described in Knight et al. (2006).",
"The plaintext language of the book is Latin.",
"The deciphered book reveals pharmacological knowledge and other information about that time.",
"the first 256 characters of the Borg cipher to test our model.",
"Our model is able to decipher the text with an SER of 3.91% (Figure 4).",
"We also try our 14-language multilingual model on this cipher, and obtain an SER of 5.47%.",
"While we cannot directly compare to Aldarrab (2017), who do not report SER, this is a readable decipherment and can be easily corrected by Latin scholars who would be interested in such a text.",
"To further test the capacity of our model, we experiment with a special type of noise.",
"In this section, we address the challenging problem of solving substitution ciphers in which letters within each word have been randomly shuffled.",
"Anagramming is a technique that can be used to further disguise substitution ciphers by permuting characters.",
"Various theories about the mysterious Voynich Manuscript, for example, suggest that some anagramming scheme was used to encode the manuscript (Reddy and Knight, 2011).",
"Hauer and Kondrak (2016) propose a two-step approach to solve this problem.",
"First, they use their 1:1 substitution cipher solver (Hauer et al., 2014) to decipher the text.",
"The solver is based on tree search for the key, guided by character-level and word-level n-gram language models.",
"They adapt the solver by relaxing the letter order constraint in the key mutation component of the solver.",
"They then re-arrange the resulting deciphered characters using a word trigram language model.",
"We try a one-step, end-to-end anagram decryption model.",
"In our sequence-to-sequence formulation, randomly shuffled characters can confuse the training.",
"We thus represent an input cipher as a bag of frequency-mapped characters, nominally presented in frequency rank order (Figure 5).",
"We use the English Gigaword dataset to train a 256 character model on the sorted frequencies and test on the aforementioned test set of 50 ciphers (after applying random anagramming).",
"Following Hauer and Kondrak (2016), we report word accuracy on this task.",
"Our model achieves a word accuracy of 95.82% on the 50 Wikipedia ciphers.",
"Hauer and Kondrak (2016) report results on a test set of 10 long ciphers extracted from 10 Wikipedia articles about art, Earth, Europe, film, history, language, music, science, technology, and Wikipedia.",
"Ciphers have an average length of 522 characters.",
"They use English Europarl to train their language models (Koehn, 2005).",
"To get comparable results, we trained a model on ciphers of length 525 created from the English side of the Spanish-English Europarl dataset.",
"Our model achieved a word accuracy of 96.05% on Hauer and Kondrak's test set.",
"Training on English Gigaword gave a word accuracy of 97.16%, comparable to the 97.72% word accuracy reported by Hauer and Kondrak (2016).",
"This shows that our simple model can crack randomly anagrammed ciphers, which hopefully inspires future work on other cipher types.",
"Deciphering substitution ciphers is a well-studied problem in the natural language processing community, e.g., (Hart, 1994; Olson, 2007; Ravi and Knight, 2008; Corlett and Penn, 2010; Nuhn et al., 2013, 2014; Hauer et al., 2014; Aldarrab, 2017).",
"Many of the recent proposed methods search for the substitution table (i.e. cipher key) that leads to a likely target plaintext according to a character n-gram language model.",
"The current state-of-the-art method uses beam search and a neural language model to score candidate plaintext hypotheses from the search space for each cipher, along with a frequency matching heuristic incorporated into the scoring function (Kambhatla et al., 2018).",
"This method, which is comparable in results to our method on longer ciphers and slightly weaker on shorter ciphers, assumes prior knowledge of the target plaintext language.",
"Our method, by contrast, can solve substitution ciphers from different languages without explicit language identification.",
"Recent research has looked at applying other neural models to different decipherment problems.",
"Greydanus (2017) find an LSTM model can learn the decryption function of polyalphabetic substitution ciphers when trained on a concatenation of <key + ciphertext> as input and plaintext as output.",
"Our work looks at a different problem.",
"We target a ciphertext-only-attack for short 1:1 substitution ciphers.",
"Gomez et al. (2018) propose Ci-pherGAN, which uses a Generative Adversarial Network to find a mapping between the character embedding distributions of plaintext and ciphertext.",
"This method assumes the availability of plenty of ciphertext.",
"Our method, by contrast, does not require a large amount of ciphertext.",
"In fact, all of our experiments were evaluated on ciphers of 256 characters or shorter.",
"Early work on language identification from ciphertext uses the noisy-channel decipherment model (Knight et al., 2006).",
"Specifically, the expectation-maximization algorithm is used to learn mapping probabilities, guided by a pre-trained n-gram language model.",
"This decipherment process is repeated for all candidate languages.",
"The resulting decipherments are ranked based on the probability of the ciphertext using the learned model, requiring a brute-force guess-and-check approach that does not scale well as more languages are considered.",
"Hauer and Kondrak (2016) use techniques similar to ours, incorporating character (1) t h e _ i n v e n t i o n _ o f _ w r i t i n g _ s y s t e m s (2) j c z _ m r b z r j m k r _ k f _ w u m j m r e _ a o a j z g a (3) c j z _ k z m r b r j m r _ f k _ e w u j m m r _ z g o a j a a (4) 6 0 3 _ 5 3 1 2 7 2 0 1 2 _ 8 5 _ 11 9 10 0 1 1 2 _ 3 13 12 4 0 4 4 (5) 0 3 6 _ 0 1 1 2 2 2 3 5 7 _ 5 8 _ 0 1 1 2 9 10 11 _ 0 3 4 4 4 12 13 (6) t h e _ i n v e n t i o n _ o f _ b r i t a i n _ s y s t e m s Figure 5: Example anagram encryption and decryption process: (1) original plaintext (2) after applying a 1:1 substitution key (3) after anagramming (this is the ciphertext) (4) after frequency encoding (5) after sorting frequencies.",
"In this work, we present an end-to-end decipherment model that is capable of solving simple substitution ciphers without the need for explicit language identification.",
"We use frequency analysis to make it possible to train a multilingual Transformer model for decipherment.",
"Our method is able to decipher 700 ciphers from 14 different languages with less than 1% SER.",
"We apply our method on the Borg cipher and achieve 5.47% SER using the multilingual model and 3.91% SER using a monolingual Latin model.",
"In addition, our experiments show that these models are robust to different types of noise, and can even recover from many of them.",
"To the best of our knowledge, this is the first application of sequence-to-sequence neural models for decipherment.",
"We hope that this work drives more research in the application of contextual neural models to the decipherment problem.",
"It would be interesting to develop other techniques for solving more complex ciphers, e.g. homophonic and polyalphabetic ciphers.",
"This research is based upon work supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via AFRL Contract FA8650-17-C-9116.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, or the U.S. Government.",
"The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.",
"This work, like all decipherment work, is concerned with the decoding of encrypted communications, and thus the methods it describes are designed to reveal information that has been deliberately obfuscated and thus violate the privacy of the authors.",
"However, the class of problems it addresses, 1:1 substitution ciphers, are known to be relatively weak forms of encryption, once popular, but long considered obsolete.",
"Thus, the major practical use of this work as a decryption tool is in the ability to quickly decode ancient ciphertexts, such as the Borg cipher, the contents of which are interesting for historical purposes but are not in danger of revealing secrets of any living person.",
"Modern encryption schemes such as RSA, Blow-fish, or AES cannot be defeated by the methods presented here.",
"We have demonstrated our work's effectiveness on ciphers of 14 alphabetic languages.",
"The approaches presented here may be less effective on other orthographic systems such as abjads (which have fewer explicit symbols and more inherent am-biguity), abugidas (which have more explicit symbols and thus are conceivably less tractable), or logographic systems (which have many more explicit symbols).",
"We caution that more exploration needs to be done before relying on the methods presented here when decoding ancient historical ciphertexts that are not encodings of alphabetic plaintext.",
"It is possible, though unlikely, that incorrect conclusions can be drawn if the approaches presented in this work yield false results.",
"For instance, in Figure 1b, the word decoded as peniculi (towels) should in fact be decoded as feniculi (fennel); similar examples can be seen in Figure",
"3. The translation seed of towels being far less likely than seed of fennel in context, we would expect easy detection of this kind of error.",
"We recommend that these methods not be trusted exclusively, but rather that they be used as one tool in a cryptologist's kit, alongside language expertise and common sense, such that incoherent decodings may be given a careful look and correction."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"objective",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method"
] |
[
"The masked language model has received remarkable attention due to its effectiveness on various natural language processing tasks.",
"However, few works have adopted this technique in the sequence-to-sequence models.",
"In this work, we introduce a jointly masked sequence-to-sequence model and explore its application on non-autoregressive neural machine translation (NAT).",
"Specifically, we first empirically study the functionalities of the encoder and the decoder in NAT models, and find that the encoder takes a more important role than the decoder regarding the translation quality.",
"Therefore, we propose to train the encoder more rigorously by masking the encoder input while training.",
"As for the decoder, we propose to train it based on the consecutive masking of the decoder input with an n gram loss function to alleviate the problem of translating duplicate words.",
"The two types of masks are applied to the model jointly at the training stage.",
"We conduct experiments on five benchmark machine translation tasks, and our model can achieve 27 .",
"69 / 32 .",
"24 BLEU scores on WMT14 English-German/German-English tasks with 5+ times speed up compared with an autoregressive model.",
"The encoder-decoder based sequence-to-sequence framework (Sutskever et al., 2014; Bahdanau et al., 2014) has achieved great success on the task of Neural Machine Translation (NMT) (Wu et al., 2016; Gehring et al., 2017; Vaswani et al., 2017; Hassan et al., 2018; Sheng et al., 2020).",
"In this framework, the encoder takes the source sentence as input and extracts its hidden representation, based on which the decoder generates the target sentence word by word and from left to right, i.e., Corresponding author.",
"in an autoregressive manner, which is a natural bottleneck for the inference speed due to the sequential conditional dependence.",
"As the performance of NMT models have been substantially promoted, the translation effi-ciency is becoming a new research hotspot.",
"Non-autoregressive neural machine translation (NAT) models are proposed to reduce the translation latency while inference, by removing the conditional dependence between target tokens and predicting all tokens in parallel (Gu et al., 2017).",
"As the context dependency cannot be utilized while decoding, the inference speedup of NAT models comes at the cost of the degradation in performance.",
"As studied by previous works (Guo et al., 2019; Wang et al., 2019), the inferior accuracy of NAT models mainly occurs from two aspects: 1) the source-side information is not adequately encoded which results in incomplete translation; 2) the decoder cannot handle the task well which leads to repeated translations and poor performance on long sentences.",
"To tackle these problems and promote the performance of NAT models, in this paper, we empirically conduct a thorough study on the functionalities of the encoder and decoder in NAT models, and conclude that the encoder has a more direct influence on the final translation performance, and is harder to train than the decoder.",
"Therefore, we propose a jointly masked sequence-to-sequence model which is inspired by the idea of masked language modeling (Devlin et al., 2018).",
"Specifically, for the encoder, we follow the masking strategy of BERT (Devlin et al., 2018) and randomly mask a number of tokens of the source sentence.",
"This strategy trains the encoder more rigorously by forcing it to encode the complete information with residual input.",
"For the decoder, we mask the consecutive fragment of the target sentence to make the decoder concentrate more on predicting adjacent tokens, and propose an n -gram based loss function to learn the consecutive tokens as a whole objective.",
"In this way, we can alleviate the problem of repeated translations of NAT models.",
"During inference, we adopt a mask-and-predict (Ghazvininejad et al., 2019) strategy to iteratively generate the translation result, which masks and predicts a subset of the current translation candidates in each iteration.",
"We verify the effectiveness of our model on five benchmark translation tasks including WMT14 English German, WMT16 English Romanian and IWSLT14 German English.",
"Our model outperforms all the NAT models in comparison, and can achieve comparative performance with its autoregressive counterpart while enhanced with 5+ times speedup on inference ( 27 . 69 / 32 . 24 BLEU scores and 5 . 73 times speedup on the WMT14 En-De/De-En tasks with an autoregressive teacher of 28 . 04 / 32 . 69 BLEU scores).",
"Our main contributions can be summarized as follows: While previous works only concentrate on manipulating the decoder, we illustrate and emphasize the importance of the encoder in NAT models and propose the encoder masking strategy to improve its training.",
"We propose the consecutive masking strategy of the decoder input and the n -gram loss function to alleviate the problem of repetitive translations of NAT models.",
"We integrate the two parts above in the jointly masked sequence-to-sequence model which shows strong performance on benchmark machine translation datasets.",
"Neural machine translation (NMT) models have achieved great success in recent years.",
"Traditional NMT models are based on the sequence-to-sequence framework (Bahdanau et al., 2014; Sutskever et al., 2014), taking the source sentence as input and generating the target sentence in an autoregressive manner.",
"Specifically, given the source sentence x = ( x 1 , x 2 , ..., x T x ) , the target sentence y = ( y 1 , y 2 , ..., y T y ) is generated as: P ( y | x ) = T y (cid:89) t =1 P ( y t | y <t , x ; enc , dec ) , (1) where y <t indicates the generated target tokens before timestep t , and enc and dec denote the parameters of the encoder and decoder respectively.",
"For a target sentence with length n , autoregressive models have to take O ( n ) iterations to generate it during inference.",
"To break the sequential conditional dependency and make the generation process parallelizable, non-autoregressive machine translation (NAT) models are proposed to generate all target tokens independently (Gu et al., 2017) and reduce the time complexity from O ( n ) to O ( k ) where k is a constant number: P ( y | x ) = P ( T y | x ) T y (cid:89) t =1 P ( y t | x ; enc , dec ) , (2) where P ( T y | x ) is the explicit length prediction process for NAT models.",
"Although the inference speed of NAT is significantly boosted, the translation accuracy is sacrificed due to the lack of context information at the target side.",
"Therefore, lots of works have been conducted to promote the performance of NAT models.",
"Specifically, Gu et al. (2017) takes a copy of the encoder input x as the decoder input and trains a fertility predictor to guide the copy procedure.",
"Lee et al. (2018) and Ghazvininejad et al. (2019) generate the target sentence by iteratively refining the current translation.",
"Other works enhance the performance of NAT models by utilizing auxiliary information, such as extra loss functions (Wang et al., 2019; Li et al., 2019; Sun et al., 2019; Wei et al., 2019; Shao et al., 2019), SMT components (Guo et al., 2019) and fine-tuning from an AT model (Guo et al., 2020).",
"Recently, some works (Stern et al., 2019; Welleck et al., 2019; Gu et al., 2019) propose to change the generation order from the traditional left-to-right manner to a tree-based manner, resulting in a time complexity of O (log n ) .",
"In this paper, we focus on the NAT model with O ( k ) generation complexity.",
"The masked language model proposed by BERT (Devlin et al., 2018) has become the essential component of the state-of-the-art pre-training methods (Song et al., 2019; Dong et al., 2019; Liu et al., 2019; Joshi et al., 2019; Lample and Conneau, 2019) in natural language understanding tasks.",
"The standard paradigm of masked language modeling is to substitute a subset of tokens in the input sentence by a special symbol [MASK] , and predict the missing tokens by the residual ones.",
"We denote the residual tokens as x r and the masked target tokens as x m .",
"As BERT is designed for language understanding tasks which can be handled with a single Transformer encoder, it is non-trivial to extend the paradigm into NMT tasks, where a sequence-to-sequence framework is utilized.",
"To address that, XLM (Lample and Conneau, 2019) concatenates the source sentence and the target sentence as the encoder input to let the model learn the cross-lingual information, but still using a single Transformer encoder.",
"MASS (Song et al., 2019) presents a sequence-to-sequence pre-training framework, which takes x r as the encoder input and takes x m as the decoder input as well as the target, still yielding a monolingual pre-training framework.",
"In this paper, we propose a jointly masked language modeling method to handle the cross-lingual challenge in a unified sequence-to-sequence framework, based on which the translation accuracy of AT models and the inference speedup of NAT models can both be preserved.",
"To explore the functionalities of the encoder and decoder in NAT models, we conduct a thorough empirical study.",
"We mainly follow the settings in (He et al., 2019).",
"We train a basic NAT model proposed by Gu et al. (2017), except that we remove the fertility predictor and keep the decoder input as a hard copy of the source sentence in a similar way with (Guo et al., 2019; Wang et al., 2019).",
"We conduct the following experiments on the IWSLT14 German to English dataset and train the model with the same number of training steps for each setting.",
"We study the importance of the encoder and decoder from three aspects.",
"Firstly, we vary the number of encoder and decoder layers respectively to see which will bring more performance gain.",
"Specifically, on a basic model with a 5 -layer encoder and a 5 -layer decoder, we increase the number of layers to the encoder and decoder separately.",
"Results are illustrated in Table 1, from which we 0 0.5M 1M 1.5M 2M Steps 5.0 7.5 10.0 12.5 15.0 17.5 20.0 22.5 25.0 BLEU Encoder Decoder",
"(b) Performance with Noisy Input Figure 1:",
"can conclude that adding the layers of the encoder can bring more performance gain than the decoder.",
"Secondly, we compare the convergence speed of the encoder and decoder by initializing the NAT model with a pretrained decoder/encoder and fix it during training, while randomly initialize a trainable encoder/decoder.",
"The convergence speed is illustrated by the BLEU score along with the training steps, as shown in Figure",
"1(a).",
"From the results, we can observe that the decoder converges faster than the encoder.",
"In conclusion, we find that the encoder is dealing with a more sophisticated task than the decoder, and the encoder is not adequately trained in the initial NAT model.",
"Thirdly, we further conduct an investigation on the encoder input, encoder output and decoder input to evaluate their importance in the inference stage.",
"During inference, we add random noise to the three types of inputs respectively, by randomly replacing the embeddings of some tokens with random noise.",
"This experiment is conducted on a basic 5 -layer encoder and decoder NAT model, and the results are illustrated in Figure",
"1(b).",
"Obviously, the encoder input and encoder output both largely influence the translation quality, which implies that the encoder plays an important role in the inference of NAT models, while the decoder input is the least important due to its conditional independence in nature.",
"In a word, the performance of NAT models rely more on the encoder rather than the decoder.",
"While most existing NAT works only focus on refining the decoder to obtain better performance, we have explored and shown the significance of the encoder in the previous section.",
"Therefore, we propose to improve the translation performance by further manipulating the encoder, and we will introduce the proposed framework to tackle the problems discussed above in this section.",
"We start with the problem definition.",
"Problem Definition Given a pair of source and target sentence ( x, y ) ( X , Y ) from the parallel training dataset X and Y , the negative log-likelihood objective function of an NMT model can be written as: L nll ( x, y ; enc , dec ) = log P ( y | x ; enc , dec ) , (3) where the conditional probability can be either Equation (1) or Equation (2) for AT or NAT models, and enc , dec represent the parameters of the encoder and decoder respectively.",
"As studied in Section 3, the encoder needs to handle a harder task than the decoder but is not adequately trained in previous works.",
"To maximize the functionality of the encoder, we propose to train it with masked language modeling.",
"The general masking strategy is as follows.",
"Given a source sentence x = ( x 1 , x 2 , ..., x T x ) , we randomly sample a subset from x , denoted as x m with T mx tokens, and substitute them with other tokens in position.",
"Specifically, we follow the similar substitution strategy as BERT (Devlin et al., 2018): we randomly select 10% of the tokens in x , of which 80% are substituted with a special symbol [ MASK ], 10% are substituted with a random token in the vocabulary, and 10% are kept unchanged.",
"And we denote the substituted result of the source sentence as x r .",
"Then the loss function on the encoder of predicting the missing source tokens can be written as: L enc ( x m | x r ) = T mx (cid:88) t =1 log P ( x mt | x r ) .",
"For the decoder, as it is shown that the repetitive translations mainly result from the non-autoregressive nature of NAT, we alleviate this problem by applying a consecutive masking strategy and proposing a tailored n -gram based loss function.",
"During training, given a target sentence y = ( y 1 , y 2 , ..., y T y ) , we randomly select multiple sets of consecutive tokens and mask them in a similar strategy as masking the encoder.",
"Each set contains n consecutive tokens, and we denote the masked target set as y m and the substituted result as y r , and their corresponding lengths as T my and T y .",
"Note that in the decoder, the total number of masking tokens is uniformly sampled from 1 to T y instead of being computed with a fixed ratio.",
"We provide an illustration of our framework in Figure 2, where n is set to 2 .",
"The loss function of predicting the masked target tokens can be written as: L nll ( y m | x r , y r ) = T my (cid:88) t =1 log P ( y mt | x r , y r ) .",
"We propose an n -gram based loss function, which has been applied to NMT models recently (Ma et al., 2018; Shao et al., 2018, 2019), to enhance the sentence-level information and alleviate the problem of repetitive translations of NAT models.",
"The loss function is tied with the consecutive masking where n equals to the number of the consecutive masked tokens in each set.",
"Specifically, given an n -gram g = ( g 1 , ..., g n ) , its occurrence count in the target sentence y can be written as: C y ( g ) = (cid:80) T y n t =0 (cid:81) ni =1 1 { g i = y t + i } .",
"As for the count in the masked sequence y m , we introduce the probabilistic variant of the n -gram count to make the objective differentiable (Shao et al., 2018) by representing each token with the prediction probability: C y m ( g ) = T my n (cid:88) t =0 n (cid:89) i =1 1 { g i = y mt + i } p ( y mt + i | x ) .",
"Considering all possible n -grams in y , the proposed",
"L gram ( y, y m | y r ,x r ) = (7) K (cid:88) g min( C y ( g ) , C y m ( g )) ,",
"where min( C y ( g ) , C y m ( g )) represents the matching count between y and y m w.r.t the n -gram g , and K is the upper bound of the total matching count which equals to the number of sets of consecutive masked tokens.",
"The n -gram loss function will encourage the model to treat the consecutive masked tokens as a whole objective to match the sequential fragments in the target sentence, thus reducing the occurrence of repetitive translations.",
"Based on the proposed framework, the objective function of our model contains three parts: the traditional negative log-likelihood loss function to predict the missing target tokens L nll ( ) , the prediction loss function on the encoder side L enc ( ) , and the n -gram loss function L gram ( ) .",
"By integrating the three loss functions, given a training pair ( x, y ) , the complete objective function of our model is: min L ( x, y ) = L nll ( y m | x r , y r ; enc , dec ) + 1 L enc ( x m | x r ; enc ) (8) + 2 L gram ( y, y m | y r , x r ; enc , dec ) , where = ( enc , dec ) , 1 and 2 are the hyper-parameters that control the weights of different loss functions.",
"In the proposed training framework, the importance of the encoder has been emphasized by masking the encoder input and introducing L enc ( ) .",
"The encoder is encouraged to produce better representations of other tokens in order to predict the missing tokens.",
"On the decoder side, the consecutive masking strategy augmented with the n -gram based loss function can help the model better capture the sentence-level information and alleviate the problem of repetitive translations.",
"For inference, we propose to iteratively re-fine the translation result in a mask-and-predict manner mainly following the strategy proposed in (Ghazvininejad et al., 2019), and details are introduced below.",
"During inference, the first step for NAT models is to determine the length of the target translation.",
"We follow (Ghazvininejad et al., 2019) and introduce an additional prediction process to estimate the length by the source sentence.",
"Specifically, we add a special token to the encoder and predict the target length with the output hidden vector of this token.",
"The negative log-likelihood loss function of this token is then added to the word prediction loss in Equation (8) as the final loss.",
"In experiments, we also consider selecting the translation with highest probability over multiple translation candidates with different target lengths to obtain better results.",
"Thereafter, based on the mask-and-predict paradigm, we design our decoding algorithm as follows.",
"Given the target length T y , we initiate the target sentence with [MASK] at all positions, and take it as the decoder input followed by conducting translation.",
"Next, for each iteration, we apply consecutive masking to the translation candidates as we have done in the training stage.",
"Specifically, we select several tokens with the lowest probabilities from the current translation candidates, and mask these tokens as well as their adjacent ones.",
"The number of tokens to mask at each iteration follows a linear decay function utilized in (Ghazvinine-jad et al., 2019).",
"As for the stop condition, the final translation is taken either when a pre-defined number of iterations is reached, or the translation candidates do not change between two iterations.",
"5.1.1 Datasets We evaluate our method on five widely used benchmark tasks: IWSLT14 German English translation (IWSLT14 De-En) 1 , WMT16 English Romanian translation (WMT16 En-Ro/Ro-En) 2 , and WMT14 English German translation (WMT14 En-De/De-En) 3 .",
"We strictly follow the dataset configurations of previous works.",
"For the IWSLT14 De-En task, we train the model on its training set with 157 k training samples, and evaluate on its test set.",
"For the WMT14 En-De/De-En task, we train the model on the training set with 4 .",
"5 M training samples, where newstest2013 and newstest2014 are used as the validation and test set respectively.",
"As for the WMT16 En-Ro task which has 610 k training pairs, we utilize newsdev2016 and newstest2016 as the validation and test set.",
"For each dataset, we tokenize the sentences by Moses (Koehn et al., 2007) and segment each word into subwords using Byte-Pair Encoding (BPE) (Sennrich et al., 2015), resulting in a 32 k vocabulary shared by source and target languages.",
"We strictly follow the previous works to set the configurations of models.",
"Our model is based on the Transformer (Vaswani et al., 2017) architecture, with multi-head positional attention proposed in (Gu et al., 2017).",
"We utilize the small Transformer ( d model = d hidden = 256 , n head = 4 ) with 5 -layer encoder and decoder for the IWSLT14 De-En task, and the base Transformer ( d model = d hidden = 512 , n layer = 6 , n head = 8 ) for the WMT14 and WMT16 tasks.",
"We set n = 2 for all tasks, i.e., we consider two-gram matchings when calculating L gram .",
"The hyper-parameters 1 and 2 are both set to 0 .",
"01 for all tasks.",
"We consider seven recent works as our baselines, including five NAT works: NAT with fertility (NAT-FT) (Gu et al., 2017), NAT with Imitation Learning (Imitate-NAT) (Wei et al., 2019), NAT with Regularizations (NAT-Reg) (Wang et al., 2019),",
"1 https://wit3.fbk.eu/ 2 https://www.statmt.org/wmt16/ translation-task 3 https://www.statmt.org/wmt14/ translation-task",
"NAT with Curriculum Learning (FCL-NAT) (Guo et al., 2020), NAT with Dynamic Conditional Random Field (NAT-DCRF) (Sun et al., 2019); and two iterative decoding based works: NAT with Iterative Refinement (NAT-IR) (Lee et al., 2018) and Conditional Masked NAT (CM-NAT) (Ghazvinine-jad et al., 2019).",
"The first five models are purely non-autoregressive, whose time complexities during inference are all O (1) .",
"The other two models are based on iteratively refining the translation results by k iterations, where k is a constant number, yielding O ( k ) complexity.",
"In the experiments, we also compare with them in terms of the inference latency on clock.",
"We adopt sequence-level knowledge distillation (Kim and Rush, 2016) on the training set of each task, which has been proved by previous NAT models that it can produce less noisy and more deterministic training data (Gu et al., 2017).",
"As stated by Wang et al. (2019), the performance of the AT teacher will affect the final performance of the NAT student model.",
"While AT teachers used in previous works have various performance, we utilize the teacher model which has similar performance with the one used in our main baseline CM-NAT (Ghazvininejad et al., 2019) to construct a fair comparison.",
"In addition, we also provide the performance of our model trained by a weakened AT teacher (denoted as WT in Table 2) which has similar performance with the one used in (Wang et al., 2019) to compare with them.",
"We train the model with 8 / 1 Nvidia 1080Ti GPUs on the WMT datasets and IWSLT14 dataset respectively, and we utilize the Adam optimizer while following the same settings used in the original Transformer.",
"During inference, we generate multiple translation candidates by taking the top B length predictions into consideration, and select the translation with the highest probability as the final result.",
"We set B = 3 on WMT tasks and B = 4 on IWSLT14 tasks.",
"We also report the clock time of inference latency on a single Nvidia 1080Ti GPU in our experiments, where we set the batch size to 1 and calculate the average per sentence translation time on newstest2014 for the WMT14 En-De task to keep consistence with previous works.",
"report the tokenized case-sensitive scores for the WMT datasets, as well as the tokenized case-insensitive scores for the IWSLT14 dataset.",
"Our implementation is based on fairseq (Ott et al., 2019) and is avaliable at https://github.com/ lemmonation/jm-nat .",
"The main results are listed in Table",
"2. We denote our model as Jointly Masked NAT (JM-NAT), and show the results when the upper bound of iterations k is set to 4 and 10 .",
"As can be observed from Table 2, our model achieves comparable performance with its AT teacher on all datasets (only 0 . 5 BLEU score behind in average), while achieving 5+ times speedup on the inference latency.",
"Compared with the pure NAT models with O (1) time complexity, with similar inference latency by setting k = 4 , our model outperforms all baselines with a consistent margin on different tasks.",
"Compared with the models based on iterative refinement, JM-NAT also shows consistent superiority with the same time complexity.",
"Our model outperforms CM-NAT (Ghazvininejad et al., 2019) with margins from 0 .",
"41 to 1 .",
"71 on different tasks, illustrating the boosted performance brought by the jointly masked model as well as the proposed loss functions.",
"It is worth noting that CM-NAT utilizes a much stronger AT teacher on the WMT14 En-De task (using the large configuration of Transformer and achieving 28 . 65 BLEU score).",
"Our model, even with less iterations or a weaker AT teacher, still outperforms CM-NAT in most cases, and it is straightforward to further improve our performance with a stronger teacher.",
"As there does not exist a clear metric (such as the perplexity in language generation tasks) to evaluate the quality of the encoder in a sequence-to-sequence model, we adopt a naive version of the adversarial attack on text (Belinkov and Bisk, 2017) to the encoder input to test the robustness of the encoder.",
"Specifically, during inference, we follow the same strategy used in Section 3 to add noise to the source sentence x .",
"Given the noise ratio (0 , 1) , we randomly select (cid:98) T x (cid:99) (where (cid:98)(cid:99) stands for the rounding function) source tokens and either drop or replace them with other tokens in the vocabulary.",
"We increase from 0 to 10% and test the performance of each model on the validation set of the IWSLT14 De-En task, and show the results in Figure",
"3. We compare our model with baselines NAT-FT NAT-Reg CM-NAT JM-NAT 2 .",
"including NAT-FT and CM-NAT.",
"According to the results, compared with CM-NAT, which is also an iterative decoding based method, our model shows more robust performance with regard to the noise on the encoder input, showing the efficacy of the proposed masking strategy and the better quality of our encoder.",
"As studied by Wang et al. (2019), the tendency of producing repetitive words in translation is a major drawback of NAT models.",
"We propose to alleviate this problem by training the decoder with the consecutive masking strategy as well as the n -gram loss function.",
"We compute the average number of consecutive repetitive tokens per sentence in the translation results on the validation set of the IWSLT14 De-En task.",
"Results are shown in Table",
"3. Without introducing explicit regularizations (Wang et al., 2019), our method is still able to alleviate the problem of repetitive words.",
"Compared with CM-NAT who also utilizes an iterative decoding method, the superiority of our method demonstrates the proposed consecutive masking strategy better solves the problem than random masking.",
"We conduct the ablation study on the validation set of the IWSLT14 De-En task to illustrate the contribution of different components in our model.",
"Results are shown in Table",
"4. For the encoder, both encoder masking and the objective function L enc contribute to the final performance, and encoder masking provides the most prominent performance promotion.",
"On the decoder side, both of the consecutive masking strategy and the n -gram loss function are indispensable to produce solid performance as they are tied together through the hyper-parameter n .",
"In addition, all the proposed components are effective in alleviating the repetitive translations, and the n -gram loss function contributes the most.",
"Results are listed in Table 5.",
"As we discussed in Section 1, repetitive translations and missing translations are two stubborn problems of NAT models.",
"In Table 5, both NAT-FT and CM-NAT tend to generate repetitive words (such as eliminate diabetes diabetes and reduce cancer risk risk) as well as incomplete translations (both of them miss the word eliminate in the second clause), while our model achieves better results.",
"In this paper, we propose a jointly masked sequence-to-sequence model for non-autoregressive neural machine translation.",
"We first empirically investigate the functionalities of the non-autoregressive translation model, and Source: was ware , wenn sie die genetischen veranderungen machen konnten , um diabetes oder alzheimer zu beseitigen oder das reduzieren des krebsrisikos oder schlaganfalle zu eliminieren ?",
"improve the training of the encoder by masking its input and introducing a prediction based loss function.",
"For the decoder, we propose to utilize consecutive masking and introduce an n -gram based loss function to alleviate the problem of repetitive translations.",
"Our model outperforms all compared NAT baselines and achieves comparable performance with autoregressive models on five benchmark tasks with 5+ times speed up on the inference latency.",
"In the future, we will extend the investigation on the functionalities of the encoder and decoder to other sequence-to-sequence tasks such as text summarization and text style transfer to explore more applications of our model.",
"This research was supported by the National Natural Science Foundation of China (No. 61673364, No. U1605251) and the Fundamental Research Funds for the Central Universities (WK2150110008).",
"We would like to thank the Information Science Laboratory Center of USTC for the hardware and software services.",
"We thank the anonymous reviewers as well as Zhirui Zhang and Tianyu He for helpful feedback on early versions of this work."
] | [
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"result",
"result",
"result",
"objective",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"abstain",
"other",
"other",
"other",
"objective",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"abstain",
"objective",
"result",
"objective",
"other",
"other",
"other"
] |
[
"The advent of context-aware NMT has resulted in promising improvements in the overall translation quality and specifically in the translation of discourse phenomena such as pronouns.",
"Previous works have mainly focused on the use of past sentences as context with a focus on anaphora translation.",
"In this work, we investigate the effect of future sentences as context by comparing the performance of a contextual NMT model trained with the future context to the one trained with the past context.",
"Our experiments and evaluation, using generic and pronoun-focused automatic metrics, show that the use of future context not only achieves significant improvements over the context-agnostic Transformer, but also demonstrates comparable and in some cases improved performance over its counterpart trained on past context.",
"We also perform an evaluation on a targeted cataphora test suite and report significant gains over the context-agnostic Transformer in terms of BLEU.",
"Standard machine translation (MT) systems typically translate sentences in isolation, ignoring essential contextual information, where a word in a sentence may reference other ideas or expressions within a piece of text.",
"This locality assumption hinders the accurate translation of referential pronouns, which rely on surrounding contextual information to resolve cross-sentence references.",
"The issue is further exacerbated by differences in pronoun rules between source and target languages, often resulting in morphological disagreement in the quantity and gender of the subject being referred to (Van-massenhove et al., 2018).",
"Rapid improvements in NMT have led to it replacing SMT as the dominant paradigm.",
"With this, context-dependent NMT has gained traction, overcoming the locality assumption in SMT through the use of additional contextual information.",
"This has led to improvements in not only the overall translation quality but also pronoun translation (Jean et al., 2017; Bawden et al., 2018; Voita et al., 2018; Miculicich et al., 2018).",
"However, all these works have neglected the context from future sentences, with Voita et al. (2018) reporting it to have a negative effect on the overall translation quality.",
"In this work, we investigate the effect of future context in improving NMT performance.",
"We particularly focus on pronouns and analyse corpora from different domains to discern if the future context could actually aid in their resolution.",
"We find that for the Subtitles domain roughly 16% of the pronouns are cataphoric.",
"This finding motivates us to investigate the performance of a context-dependent NMT model (Miculicich et al., 2018) trained on the future context in comparison to its counterpart trained on the past context.",
"We evaluate our models in terms of overall translation quality (BLEU) and also employ three types of automatic pronoun-targeted evaluation metrics.",
"We demonstrate strong improvements for all metrics, with the model using future context showing comparable or in some cases even better performance than the one using only past context.",
"We also extract a targeted cataphora test set and report significant gains on it with the future context model over the baseline.",
"Pronoun-focused SMT Early work in the translation of pronouns in SMT attempted to exploit coreference links as additional context to improve the translation of anaphoric pronouns (Le Nagard and Koehn 2010; Hardmeier and Federico 2010).",
"These works yielded mixed results which were attributed to the limitations of the coreference resolution systems used in the process (Guillou, 2012).",
"Context-Aware NMT Multiple works have successfully demonstrated the advantages of using larger context in NMT, where the context comprises few previous source sentences (Wang et al., 2017; Zhang et al., 2018), few previous source and target sentences (Miculicich et al., 2018), or both past and future source and target sentences (Maruf and Haffari, 2018; Maruf et al., 2018, 2019).",
"Further, context-aware NMT has demonstrated improvements in pronoun translation using past context, through concatenating source sentences (Tiedemann and Scherrer, 2017) or through an additional context encoder (Jean et al., 2017; Bawden et al., 2018; Voita et al., 2018).",
"Miculicich et al. (2018) observed reasonable improvements in generic and pronoun-focused translation using three previous sentences as context.",
"Voita et al. (2018) observed improvements using the previous sentence as context, but report decreased BLEU when using the following sentence.",
"We, on the other hand, observe significant gains in BLEU when using the following sentence as context on the same data domain.",
"To motivate our use of the future context for improving the translation of cataphoric pronouns in particular and NMT in general, we first analyse the distribution of coreferences for anaphoric and cataphoric pronouns over three different corpora OpenSubtitles2018 1 (Lison and Tiedemann, 2016), Europarl (Koehn, 2005) and TED Talks (Cettolo et al., 2012) for English-German.",
"For Europarl and TED Talks, we use the publicly available document-aligned version of the corpora (Maruf et al., 2019).",
"For Subtitles, we align the English and German subtitles at the document-level using publicly available alignment links.",
"2 To control for the length and coherency of documents, we keep 1 http://www.opensubtitles.org/ 2 http://opus.nlpl.eu/OpenSubtitles2018.php Pronoun Subtitles Europarl TED Talks Intrasentential 30.1 75.6 64.1 Anaphora ( < 0) 54.3 19.6 28.5 Cataphora ( > 0) 15.6 4.7 7.4 Table 2: Percentage of different pronoun types.",
"subtitles with a run-time less than 50 minutes (for English) and those with number of sentences in the hundreds.",
"The corpus is then randomly split into training, development and test sets in the ratio 100:1:1.5.",
"Table 1 presents the corpora statistics.",
"Analysis of Coreferences We find the smallest window within which a referential English pronoun is resolved by an antecedent or postcedent using NeuralCoref .",
"3 Table 2 shows that the majority of pronouns in Europarl and TED Talks corpora are resolved intrasententially, while the Subtitles corpus demonstrates a greater proportion of intersentential coreferences.",
"Further, anaphoric pronouns are much more frequent compared to cataphoric ones across all three corpora.",
"For Subtitles, we also note that a good number of pronouns (15.6%) are cataphoric, 37% of which are resolved within the following sentence (Figure 1).",
"This finding motivates us to investigate the performance of a context-aware NMT model (trained on Subtitles) for the translation of cataphoric pronouns.",
"Datasets We experiment with the Subtitles corpus on English-German and English-Portuguese language-pairs.",
"To obtain English-Portuguese data, we employ the same pre-processing steps as reported in 3 (corpus statistics are in Table 1).",
"We use 80% of the training data to train our models and the rest is held-out for further evaluation as discussed later in 4.2.",
"4 The data is truecased using 3 https://github.com/huggingface/neuralcoref 4 Due to resource contraints, we use about two-thirds of the final training set ( 8M sentence-pairs) for En-Pt.",
"the Moses toolkit (Koehn et al., 2007) and split into subword units using a joint BPE model with 30K merge operations (Sennrich et al., 2016).",
"5 Description of the NMT systems As our baseline, we use the DyNet (Neubig et al., 2017) implementation of Transformer (Vaswani et al., 2017).",
"6 For the context-dependent NMT model, we choose the Transformer-HAN encoder (Miculicich et al., 2018), which has demonstrated reasonable performance for anaphoric pronoun translation on Subtitles.",
"We extend its DyNet implementation (Maruf et al., 2019) to a single context sentence.",
"78 For training, Transformer-HAN is initialised with the baseline Transformer and then the parameters of the whole network are optimised in a second stage as in Miculicich et al. (2018) (details of model configuration are in Appendix A.1).",
"For evaluation, we compute BLEU (Papineni et al., 2002) on tokenised truecased text and measure statistical significance with p < 0.005 (Clark et al., 2011).",
"We consider two versions of the Transformer-HAN respectively trained with the following and previous source sentence as context.",
"From Table 3, we note both context-dependent models to significantly outperform the Transformer across all language-pairs in terms of BLEU.",
"Further, HAN ( k = +1) demonstrates statistically significant improvements over the HAN ( k = -1) when translating to English.",
"These results are quite surprising as Voita et al. (2018) report decreased translation quality in terms of BLEU when using the following sentence for English Russian Subtitles.",
"To 5 Tokenisation is provided by the original corpus.",
"7 Where in the original architecture, k sentence-context vectors were summarised into a document-context vector, we omit this step when using only one sentence in context.",
"8 The code and data are available at https://github.",
"com/sameenmaruf/acl2020-contextnmt-cataphora .",
"identify if this discrepancy is due to the language-pair or the model, we conduct experiments with English Russian in the same data setting as Voita et al. (2018) and find that HAN ( k = +1) still significantly outperforms the Transformer and is comparable to HAN ( k = -1) (more details in Appendix A.2).",
"Pronoun-Focused Automatic Evaluation For the models in Table 3, we employ three types of pronoun-focused automatic evaluation:",
"1. Accuracy of Pronoun Translation (APT) (Mi-culicich Werlen and Popescu-Belis, 2017) 9 .",
"This measures the degree of overlapping pronouns between the output and reference translations obtained via word-alignments.",
"2. Precision, Recall and F1 scores .",
"We use a variation of AutoPRF (Hardmeier and Federico, 2010) to calculate precision, recall and F1-scores.",
"For each source pronoun, we compute the clipped count (Papineni et al., 2002) of overlap between candidate and reference translations.",
"To eliminate word alignment errors, we compare this overlap over the set of dictionary-matched target pronouns, in contrast to the set of target words aligned to a given source pronoun as done by AutoPRF and APT.",
"9 https://github.com/idiap/APT",
"two measures which rely on computing pronoun overlap between the target and reference translation, we employ an ELMo-based (Peters et al., 2018) evaluation framework that distinguishes between a good and a bad translation via pairwise ranking (Jwalapuram et al., 2019).",
"We use the CRC setting of this metric which considers the same reference context (one previous and one next sentence) for both reference and system translations.",
"However, this measure is limited to evaluation only on the English target-side.",
"10 The results using the aforementioned pronoun evaluation metrics are reported in Table",
"4. We observe improvements for all metrics with both HAN models in comparison to the baseline.",
"Further, we observe that the HAN ( k = +1) is either comparable to or outperforms HAN ( k = -1) on APT and F1 for De En and Pt En, suggesting that for these cases, the use of following sentence as context is at least as beneficial as using the previous sentence.",
"For En De, we note comparable performance for the HAN variants in terms of F1, while for En Pt, the past context appears to be more beneficial.",
"11 In terms of CRC, we note HAN ( k = -1) to be comparable to (De En) or better than HAN ( k = +1) (Pt En).",
"We attribute this to the way the metric is trained to disambiguate pronoun translations based on only the previous context and thus may have a bias for such scenarios.",
"Ablation Study We would like to investigate whether a context-aware NMT model trained on a wider context could perform well even if we do not have access to the same amount of context at decoding.",
"We thus perform an ablation study for 10 We use the same English pronoun list for all pronoun-focused metrics (provided by Jwalapuram et al. (2019) at https://github.com/ntunlp/eval-anaphora ).",
"All pronoun sets used in our evaluation are provided in Appendix A.4.",
"11 It should be noted that for Portuguese, adjectives and even verb forms can be marked by the gender of the noun and these are hard to account for in automatic pronoun-focused evaluations.",
"English German using the HAN model trained with two previous and next sentences as context and decoded with variant degrees of context.",
"From Table 5, we note that reducing the amount of context at decoding time does not have adverse effect on the model's performance.",
"However, when no context is used, there is a statistically significant drop in BLEU, while APT and F1-scores are equivalent to that of the baseline.",
"This suggests that the model does rely on the context to achieve the improvement in pronoun translation.",
"Further, we find that the future context is just as beneficial as the past context in improving general translation performance.",
"Cataphora-Focused Test Suite To gauge if the improvements in Table 3 for the HAN ( k = +1) model are coming from the correct translation of cataphoric pronouns, we perform an evaluation on a cataphoric pronoun test suite constructed from the held-out set mentioned earlier in",
"3. To this end, we apply NeuralCoref over the English side to extract sentence-pairs which have a cataphoric pronoun in one sentence and the postcedent in the next sentence.",
"This is further segmented into subsets based on the part-of-speech of the postcedent, that is, determiner (DET), proper noun (PROPN) or all nouns (NOUN) (more details in the appendix).",
"12 From Table 6, we observe HAN ( k = +1) to outperform the baseline for all language-pairs when evaluated on the cataphora test suite.",
"In general, we observe greater improvements in BLEU when trans-12 We note that there may be some overlap between the three pronoun subsets as a test sentence may contain more than one type of pronoun.",
"lating to English, which we attribute to the simplifi-cation of cross-lingual pronoun rules when translating from German or Portuguese to English.",
"13 We also observe fairly similar gains in BLEU across the different pronoun subsets, which we hypothesise to be due to potential overlap in test sentences between different subsets.",
"Nevertheless, we note optimum translation quality over the noun subsets (PROPN and NOUN), while seeing the greatest percentage improvement on the DET subset.",
"For the latter, we surmise that the model is able to more easily link pronouns in a sentence to subjects prefixed with possessive determiners, for example, his son or their child.",
"We also perform an auxiliary evaluation for Transformer-HAN ( k = -1) trained with the previous sentence as context on the cataphora test suite and find that the BLEU improvements still hold.",
"Thus, we conclude that Transformer-HAN (a context-aware NMT model) is able to make better use of coreference information to improve translation of pronouns (detailed results in Appendix A.3).",
"Qualitative Analysis We analyse the distribution of attention to the context sentence for a few test cases.",
"14 Figure 2 shows an example in which a source pronoun he attends to its corresponding postcedent in context.",
"This is consistent with our hypothesis that the HAN ( k = +1) is capable of exploiting contextual information for the resolution of cataphoric pronouns.",
"In this paper, we have investigated the use of future context for NMT and particularly for pronoun translation.",
"While previous works have focused on the 13 It should be noted that the cataphora test set is extracted based on the existence of cataphoric-pairs in the English-side, which may have biased the evaluation when English was in the target.",
"14 Attention is average of the per-head attention weights.",
"use of past context, we demonstrate through rigorous experiments that using future context does not deteriorate translation performance over a baseline.",
"Further, it shows comparable and in some cases better performance as compared to using the previous sentence in terms of both generic and pronoun-focused evaluation.",
"In future work, we plan to investigate translation of other discourse phenomena that may benefit from the use of future context.",
"The authors are grateful to the anonymous reviewers for their helpful comments and feedback and to George Foster for fruitful discussions.",
"This work is supported by a Google Faculty Research Award to G.H. It is further supported by the Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) ( www.massive.org.au )."
] | [
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"method",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"other",
"other"
] |
[
"The cross-lingual language models are typically pretrained with masked language modeling on multilingual text or parallel sentences.",
"In this paper, we introduce denoising word alignment as a new cross-lingual pre-training task.",
"Specifically, the model first self-labels word alignments for parallel sentences.",
"Then we randomly mask tokens in a bitext pair.",
"Given a masked token, the model uses a pointer network to predict the aligned token in the other language.",
"We alternately perform the above two steps in an expectation-maximization manner.",
"Experimental results show that our method improves cross-lingual transferability on various datasets, especially on the token-level tasks, such as question answering, and structured prediction.",
"Moreover, the model can serve as a pretrained word aligner, which achieves reasonably low error rates on the alignment benchmarks.",
"The code and pretrained parameters are available at github.com/CZWin32768/XLM-Align .",
"Despite the current advances in NLP, most applications and resources are still English-centric, making non-English users hard to access.",
"Therefore, it is essential to build cross-lingual transferable models that can learn from the training data in high-resource languages and generalize on low-resource languages.",
"Recently, pretrained cross-lingual language models have shown their effectiveness for cross-lingual transfer.",
"By pre-training on monolingual text and parallel sentences, the models provide significant improvements on a wide range of cross-lingual end tasks (Conneau and Lample, 2019; Conneau et al., 2020; Liu et al., 2020; Chi et al., 2021b).",
"monolingual and parallel corpora.",
"By simply learning masked language modeling (MLM; Devlin et al. 2019) on monolingual text of multiple languages, the models surprisingly achieve competitive results on cross-lingual tasks (Wu and Dredze, 2019; K et al., 2020).",
"Besides, several pretext tasks are proposed to utilize parallel corpora to learn better sentence-level cross-lingual representations (Con-neau and Lample, 2019; Chi et al., 2021b; Hu et al., 2020a).",
"For example, the translation language modeling (TLM; Conneau and Lample 2019) task performs MLM on the concatenated parallel sentences, which implicitly enhances cross-lingual transferability.",
"However, most pretext tasks either learn alignment at the sentence level or implicitly encourage cross-lingual alignment, leaving explicit fine-grained alignment task not fully explored.",
"In this paper, we introduce a new cross-lingual pre-training task, named as denoising word alignment .",
"Rather than relying on external word aligners trained on parallel corpora (Cao et al., 2020; Zhao et al., 2020; Wu and Dredze, 2020), we utilize self-labeled alignments in our task.",
"During pretraining, we alternately self-label word alignments and conduct the denoising word alignment task in an expectation-maximization manner.",
"Specifically, the model first self-labels word alignments for a translation pair.",
"Then we randomly mask tokens in the bitext sentence, which is used as the perturbed input for denosing word alignment.",
"For each masked token, the model learns a pointer network to predict the self-labeled alignments in the other language.",
"We repeat the above two steps to iteratively boost the bitext alignment knowledge for cross-lingual pre-training.",
"We conduct extensive experiments on a wide range of cross-lingual understanding tasks.",
"Experimental results show that our model outperforms the baseline models on various datasets, particularly on the token-level tasks such as question answering and structured prediction.",
"Moreover, our model can also serve as a multilingual word aligner, which achieves reasonable low error rates on the bitext alignment benchmarks.",
"Our contributions are summarized as follows: We present a cross-lingual pre-training paradigm that alternately self-labels and predicts word alignments.",
"We introduce a pre-training task, denoising word alignment, which predicts word alignments from perturbed translation pairs.",
"We propose a word alignment algorithm that formulates the word alignment problem as optimal transport.",
"We demonstrate that our explicit alignment objective is effective for cross-lingual transfer.",
"Cross-lingual LM pre-training Pretrained with masked language modeling (MLM; Devlin et al. 2019) on monolingual text, multilingual BERT (mBERT; Devlin et al. 2019) and XLM-R (Con-neau et al., 2020) produce promising results on cross-lingual transfer benchmarks (Hu et al., 2020b).",
"mT5 (Xue et al., 2020) learns a multilingual version of T5 (Raffel et al., 2020) with text-to-text tasks.",
"In addition to monolingual text, several methods utilize parallel corpora to improve cross-lingual transferability.",
"XLM (Conneau and Lample, 2019) presents the translation language modeling (TLM) task that performs MLM on concatenated translation pairs.",
"ALM (Yang et al., 2020) introduces code-switched sequences into cross-lingual LM pre-training.",
"Unicoder (Huang et al., 2019) employs three cross-lingual tasks to learn mappings among languages.",
"From an information-theoretic perspective, InfoXLM (Chi et al., 2021b) proposes the cross-lingual contrastive learning task to align sentence-level representations.",
"Additionally, AM-BER (Hu et al., 2020a) introduces an alignment objective that minimizes the distance between the forward and backward attention matrices.",
"More recently, Ernie-M (Ouyang et al., 2020) presents the back-translation masked language modeling task that generates pseudo parallel sentence pairs for learning TLM, which provides better utilization of monolingual corpus.",
"VECO (Luo et al., 2020) pretrains a unified cross-lingual language model for both NLU and NLG.",
"mT6 (Chi et al., 2021a) improves the multilingual text-to-text transformer with translation pairs.",
"Notably, Word-aligned BERT models (Cao et al., 2020; Zhao et al., 2020) finetune mBERT by an explicit alignment objective that minimizes the distance between aligned tokens.",
"Wu and Dredze (2020) exploit contrastive learning to improve the explicit alignment objectives.",
"However, Wu and Dredze (2020) show that these explicit alignment objectives do not improve cross-lingual representations under a more extensive evaluation.",
"Moreover, these models are restricted to stay close to their original pretrained values, which is not applicable for large-scale pre-training.",
"On the contrary, we demonstrate that employing our explicit alignment objective in large-scale pre-training can provide consistent improvements over baseline models.",
"Word alignment The IBM models (Brown et al., 1993) are statistical models for modeling the translation process that can extract word alignments between sentence pairs.",
"A large number of word alignment models are based on the IBM models (Och and Ney, 2003; Mermer and Saraclar, 2011; Dyer et al., 2013; Ostling and Tiedemann, 2016).",
"Recent studies have shown that word alignments can be extracted from neural machine translation models (Ghader and Monz, 2017; Koehn and Knowles, 2017; Li et al., 2019) or from pretrained cross-lingual LMs (Jalili Sabet et al., 2020; Nagata et al., 2020).",
"Figure 1 illustrates an overview of our method for pre-training our cross-lingual LM, which is called XLM-ALIGN .",
"XLM-ALIGN is pretrained in an expectation-maximization manner with two alternating steps, which are word alignment self-labeling and denoising word alignment.",
"We first formulate word alignment as an optimal transport problem, and self-label word alignments of the input translation pair on-the-fly.",
"Then, we update the model parameters with the denoising word alignment task, where the model uses a pointer network (Vinyals et al., 2015) to predict the aligned tokens from the perturbed translation pair.",
"The goal of word alignment self-labeling is to estimate the word alignments of the input translation pair on-the-fly, given the current XLM-ALIGN model.",
"Given a source sentence",
"S = s 1 . . . s i . . . s n and a target sentence T = t 1 . . . t j . . . t m , we model the word alignment between S and T as a doubly stochastic matrix A R n m + such that the rows and the columns all sum to 1 , where A ij stands for the probability of the alignment between s i and t j .",
"The rows and the columns of A represent probability distributions of the forward alignment and the backward alignment, respectively.",
"To measure the similarity between two tokens from S and T , we define a metric function f sim by using cross-lingual representations produced by XLM-ALIGN : f sim ( s i , t j ) = log max( (cid:15), h (cid:62) i h j ) (1) where (cid:15) is a constant to avoid negative values in the log function, and h i is the hidden vector of the i -th token by encoding the concatenated sequence of S and T with XLM-ALIGN .",
"Empirically, the metric function produces a high similarity score if the two input tokens are semantically similar.",
"We can find that Eq.",
"(2) is identical to the regularized optimal transport problem (Peyre et al., 2019), if we add an entropic regularization to A : max A n (cid:88) i =1 m (cid:88) j =1 A ij f sim ( s i , t j ) A ij log A ij (3) Eq.",
"(3) has a unique solution A such that A = diag( u ) K diag( v ) (4) K ij = e f sim ( s i ,t j ) / (5) where u R n + , v R m + , K R n m + .",
"According to Sinkhorn's algorithm (Peyre et al., 2019), the variables u and v can be calculated by the following iterations: u t +1 = 1 n K v t , v t +1 = 1 m K (cid:62) u t +1 (6) where v t can be initialized by v t =0 = 1 m .",
"Similarly, the backward word alignments A can be computed by applying argmax over columns.",
"To obtain high-precision alignment labels, we adopt an iterative alignment filtering operation.",
"We initialize the alignment labels A as .",
"In each iteration, we follow the procedure of Itermax (Jalili Sabet et al., 2020) that first computes A and A by Eq.",
"(7).",
"Then, the alignment labels are updated by: A A ( A A ) (8) Finally, A is updated by: A ij 0 , ( i, j ) A A ij , k ( i, k ) A ( k, j ) A A ij , others (9) where is a discount factor.",
"After several iterations, we obtain the final self-labeled word alignments A .",
"After self-labeling word alignments, we update the model parameters with the denoising word alignment (DWA) task.",
"The goal of DWA is to predict the word alignments from the perturbed version of the input translation pair.",
"Consider the perturbed version of the input translation pair ( S , T ) constructed by randomly replacing the tokens with masks.",
"We first encode the translation pair into hidden vectors h with the XLM-ALIGN encoder: h 1 . . . h n + m = encoder ([ S , T ]) (10) where [ S , T ] is the concatenated sequence of S and T with the length of n + m .",
"Then, we build a pointer network upon the XLM-ALIGN encoder that predicts the word alignments.",
"Specifically, for the i -th source token, we use h i as the query vector and h n +1 , . . . , h n + m as the key vectors.",
"Given the query and key vectors, the forward alignment probability a i is computed by the scaled dot-product attention (Vaswani et al., 2017): a i = softmax( q (cid:62) i K d h ) (11) q i = linear( h i ) (12) K = linear([ h n +1 . . . h n + m ]) (13) where d h is the dimension of the hidden vectors.",
"Similarly, the backward alignment probability can be computed by above equations if we use target tokens as the query vectors and h 1 . . . h n as key vectors.",
"Notice that we only consider the self-labeled and masked positions as queries.",
"Formally, we use the following query positions in the pointer network: P = { i | ( i, ) A ( , i ) A} M (14) where M is the set of masked positions.",
"The training objective is to minimize the cross-entropy between the alignment probabilities and the self-labeled word alignments: LDWA = (cid:88) i P CE ( a i , A ( i )) (15) where CE ( , ) stands for the cross-entropy loss, and A ( i ) is the self-labeled aligned position of the i -th token.",
"We illustrate the pre-training procedure of XLM-ALIGN in Algorithm 1.",
"In addition to DWA, we also include MLM and TLM for pre-training XLM-ALIGN , which implicitly encourage the cross-lingual alignment.",
"The overall loss function is defined as: LMLM ( X ) + LTLM ( S , T ) + LDWA ( S , T , A ) In each iteration, we first sample monolingual text X , and parallel text ( S , T ) .",
"Then, we self-label word alignments and update the model parameters by learning pretext tasks.",
"Notice that the model parameters are initialized by a cold-start pre-training to avoid producing low-quality alignment labels.",
"The cold-start pre-training can be accomplished by using a pretrained LM as the model initialization.",
"Following previous cross-lingual pretrained models (Conneau and Lample, 2019; Conneau et al.,",
"2020; Chi et al., 2021b), we use raw sentences from the Wikipedia dump and CCNet (Wenzek et al., 2019) for MLM, including 94 languages.",
"For TLM and DWA, we use parallel corpora from MultiUN (Ziemski et al., 2016), IIT Bombay (Kunchukuttan et al., 2018), OPUS (Tiede-mann, 2012), and WikiMatrix (Schwenk et al., 2019), including 14 English-centric language pairs.",
"We pretrain a Transformer with 12 layers and the hidden size of 768 , where the parameters are initialized with XLM-R (Conneau et al., 2020).",
"The model is optimized with the Adam optimizer (Kingma and Ba, 2015) for 150 K steps with batch size of 2 , 048 .",
"Notice that TLM and DWA share the same forward procedure for encoding the perturbed sentence pair.",
"The pre-training of XLM-ALIGN takes about six days with two Nvidia DGX-2 stations.",
"More details of the training data and the hyperparameters are in supplementary document.",
"XTREME is a multilingual benchmark for evaluating cross-lingual generalization.",
"We evaluate our model on 7 cross-lingual downstream tasks included by XTREME, which can be grouped into 3 categories: (1) Structured prediction: part-of-speech tagging on the Universal Dependencies v2.5 (Zeman et al., 2019), and named entity recognition on the WikiAnn (Pan et al., 2017; Rahimi et al., 2019) dataset; (2) Question answering: cross-lingual question answering on MLQA (Lewis et al., 2020) and XQuAD (Artetxe et al., 2020), and gold passage of typologically diverse question answering (TyDiQA-GoldP; Clark et al. 2020); (3) Sentence classification: cross-lingual natural language inference (XNLI; Conneau et al. 2018), and cross-lingual paraphrase adversaries from word scrambling (PAWS-X; Yang et al. 2019).",
"Baselines We use the following pretrained cross-lingual LMs as baselines.",
"(1) Multilingual BERT ( MBERT ; Devlin et al. 2019) is pretrained with masked language modeling (MLM) and next sentence prediction on Wikipedia of 104 languages; (2) XLM (Conneau and Lample, 2019) is jointly pretrained with MLM on 100 languages and translation language modeling (TLM) on 14 language pairs; (3) M T5 (Xue et al., 2020) is the multilingual version of T5 pretrained with text-to-text tasks; (4) XLM-R (Conneau et al., 2020) is pretrained with MLM on large-scale CC-100 dataset with long training steps.",
"Fine-tuning Following Hu et al. (2020b), we adopt the zero-shot transfer setting for evaluation, where the models are only fine-tuned on English training data but evaluated on all target languages.",
"Besides, we only use one model for evaluation on all target languages, rather than selecting different models for each language.",
"The detailed fine-tuning hyperparameters can be found in supplementary document.",
"Results In Table 1, we present the evaluation results on XTREME structured prediction, question answering, and sentence classification tasks.",
"It can be observed that our XLM-ALIGN obtains the best average score over all the baseline models, improving the previous score from 66.4 to 68.9.",
"It demonstrates that our model learns more transferable representations for the cross-lingual tasks, which is beneficial for building more accessible multilingual NLP applications.",
"It is worth mentioning that our method brings noticeable improvements on the question answering and the structured prediction tasks.",
"Compared with XLM-R base , XLM-ALIGN provides 6 .",
"7% and 1 .",
"9% F1 improvements on TyDiQA and NER.",
"The improvements show that the Alignment Method Pretrained Alignment Error Rate Avg Model en-de en-fr en-hi en-ro fast align (Dyer et al., 2013) -32.14 19.46 59.90 -SimAlign Argmax (Jalili Sabet et al., 2020) XLM-R 19.",
"pretrained XLM-ALIGN benefits from the explicit word alignment objective, particularly on the structured prediction and question answering tasks that require token-level cross-lingual transfer.",
"In terms of sentence classification tasks, XLM-ALIGN also consistently outperforms XLM-R base .",
"Word alignment is the task of finding corresponding word pairs in a parallel sentence.",
"We conduct evaluations with golden alignments of four language pairs from EuroParl 1 , WPT2003 2 , and WPT2005 3 , containing 1,244 annotated sentence pairs in total.",
"We use alignment error rate (AER; Och and Ney 1 www-i6.informatik.rwth-aachen.de/ goldAlignment/ 2 web.eecs.umich.edu/mihalcea/wpt/ 3 web.eecs.umich.edu/mihalcea/wpt05/ 2003) as the evaluation metrics.",
"Results We first explore whether our word alignment self-labeling method is effective for generating high-quality alignment labels.",
"Thus, we compare our method with (1) fast align (Dyer et al., 2013), a widely-used implementation of IBM Model 2 (Och and Ney, 2003); (2) SimAlign (Jalili Sabet et al., 2020), state-of-the-art unsupervised word alignment method.",
"For a fair comparison, we use the same pretrained LM and hidden layer as in SimAlign to produce sentence representations.",
"In specific, we take the hidden vectors from the 8 -th layer of XLM-R base or XLM-ALIGN , and obtain the alignments following the procedure as described in Section 3.1.",
"Since the produced alignments are subword-level, we convert the alignments into word-level by the following rule that if two subwords are aligned, the words they belong to are also aligned.",
"As shown in Table 2, we report the AER scores on the four language pairs.",
"It can be observed that our optimal-transport method outperforms fast align and SimAlign , demonstrating that our method can produce high-quality alignment labels, which is helpful for the DWA task.",
"Moreover, our method consistently outperforms SimAlign when using hidden vectors from both XLM-R base and XLM-ALIGN .",
"Then, we compare our XLM-ALIGN with XLM-R base on the word alignment task.",
"Empirically, a lower AER indicates that the model learns better cross-lingual representations.",
"From Table 2, XLM-ALIGN obtains the best AER results over all the four language pairs, reducing the averaged AER from 22 .",
"64 to 21 .",
"05 .",
"Besides, un-Models XNLI POS NER MLQA Avg XLM-R* 74.6 75.7 61.6 65.7 69.4 XLM-ALIGN 75.2 75.6 62.6 66.7 70.0 DWA 75.1 75.2 62.0 65.8 69.5 TLM 74.4 76.0 60.4 66.0 69.2 Table 3: Ablation studies on the components of XLM-ALIGN .",
"der both SimAlign and our optimal-transport method, XLM-ALIGN provides consistent reduction of AER, demonstrating the effectiveness of our method for learning fine-grained cross-lingual representations.",
"We also compare XLM-ALIGN with XLM-R base using the hidden vectors from the 3 -th layer to the 12 -th layer.",
"We illustrate the averaged AER scores in Figure",
"2. Notice that the results on the first two layers are not presented in the figure because of the high AER.",
"It can be observed that XLM-ALIGN consistently improves the results over XLM-R base across these layers.",
"Moreover, it shows a parabolic trend across the layers of XLM-R base , which is consistent with the results in (Jalili Sabet et al., 2020).",
"In contrast to XLM-R base , XLM-ALIGN alleviates this trend and greatly reduces AER in the last few layers.",
"We believe this property of XLM-ALIGN brings better cross-lingual transferability on the end tasks.",
"In this section, we conduct comprehensive ablation studies for a better understanding of our XLM-ALIGN .",
"To reduce the computational cost, we reduce the batch size to 256 , and pretrain models with 50 K steps in the following experiments.",
"We perform ablation studies to understand the components of XLM-ALIGN , by removing the denoising word alignment loss ( DWA), the TLM loss ( TLM), or removing both (XLM-R*), which is identical to continue-training XLM-R base with MLM.",
"We evaluate the models on XNLI, POS, NER, and MLQA, and present the results in Table",
"3. Comparing TLM with DWA, we find that DWA is more effective for POS and MLQA, while TLM performs better on XNLI and NER.",
"Comparing TLM with XLM-R*, it shows that directly learning DWA slightly harms the perfor-Layer XNLI POS NER MLQA Avg Layer-8 75.1 75.3 61.9 66.7 69.8 Layer-10 75.2 75.6 62.6 66.7 70.0 Layer-12 75.2 75.8 62.3 67.0 70.1 Table 4: Results of XLM-ALIGN with different layers used for word alignment self-labeling during pretraining.",
"mance.",
"However, jointly learning DWA with TLM provides remarkable improvements over DWA, especially on the question answering and the structure prediction tasks that requires token-level cross-lingual transfer.",
"This indicates that TLM potentially improves the quality of self-labeled word alignments, making DWA more effective for cross-lingual transfer.",
"It has been shown that the word alignment performance has a parabolic trend across the layers of mBERT and XLM-R (Jalili Sabet et al., 2020).",
"It indicates that the middle layers produce higher-quality word alignments than the bottom and the top layers.",
"To explore which layer produces better alignment labels for pre-training, we pretrain three variants of XLM-ALIGN , where we use the hidden vectors from three different layers for word alignment self-labeling.",
"We use the 8 -th, 10 -th, and 12 -th layers for word alignment self-labeling during the pre-training.",
"We present the evaluation results in Table",
"4. Surprisingly, although Layer-8 produces higher-quality alignment labels at the beginning of the pre-training, using the alignment labels from the 12 -th layer learns a more transferable XLM-ALIGN model for cross-lingual end tasks.",
"Beyond the self-labeling layer, we also investigate which layer is better for learning the denoising word alignment task.",
"Recent studies have shown Filtering XNLI POS NER MLQA Avg Enable 75.2 75.6 62.6 66.7 70.0 Disable 74.2 75.3 61.6 65.3 69.1 Table 6: Effects of alignment filtering in word alignment self-labeling.",
"that it is beneficial to learn sentence-level cross-lingual alignment at a middle layer (Chi et al., 2021b).",
"Therefore, we pretrain XLM-ALIGN models by using three different layers for DWA, that is, using the hidden vectors of middle layers as the input of the pointer network.",
"We compare the evaluation results of the three models in Table",
"5. It can be found that learning DWA at Layer8 improves XNLI while learning DWA at higher layers produces better performance on the other three tasks.",
"It suggests that, compared with sentence-level pretext tasks that prefers middle layers, the DWA task should be applied at top layers.",
"Although our self-labeling method produces high-quality alignment labels, the alignment filtering operation can potentially make some of the tokens unaligned, which reduces the example efficiency.",
"Thus, we explore whether the alignment filtering is beneficial for pre-training XLM-ALIGN .",
"To this end, we pretrain an XLM-ALIGN model without alignment filtering.",
"In specific, we use the union set of the forward and backward alignments as the self-labeled alignments so that all tokens are aligned at least once.",
"The forward and backward alignments are obtained by applying the argmax function over rows and columns of A , respectively.",
"Empirically, the alignment filtering operation generates high-precision yet fewer labels, while removing the filtering promises more labels but introduces low-confident labels.",
"In Table 6, we compare the results of the models with or without alignment filtering.",
"It can be observed that the alignment filtering operation improves the performance on the end tasks.",
"This demonstrates that it is necessary to use high-precision labels for learning the denoising word alignment task.",
"On the contrary, using perturbed alignment labels in pre-training harms the performance on the end tasks.",
"as the query vectors in the pointer network.",
"To explore the impact of the DWA query positions, we compare three different query positions in Table 7: (1) masked : only using the masked tokens as queries; (2) unmasked : randomly using 15 % of the unmasked tokens as queries; (3) all-aligned : for each self-labeled aligned pair, randomly using one of the two tokens as a query.",
"Also, we include the no-query baseline that does not use any queries, which is identical to removing DWA.",
"It can be observed that using all the three query positions improves the performance over the no-query baseline.",
"Moreover, using the masked positions as queries achieves better results than the other two positions, demonstrating the effectiveness of the masked query positions.",
"In this paper, we introduce denoising word alignment as a new cross-lingual pre-training task.",
"By alternately self-labeling and predicting word alignments, our XLM-ALIGN model learns transferable cross-lingual representations.",
"Experimental results show that our method improves the cross-lingual transferability on a wide range of tasks, particularly on the token-level tasks such as question answering and structured prediction.",
"Despite the effectiveness for learning cross-lingual transferable representations, our method also has the limitation that requires a cold-start pre-training to prevent the model from producing low-quality alignment labels.",
"In our experiments, we also try to pretrain XLM-ALIGN from scratch, i.e., without cold-start pre-training.",
"However, the DWA task does not work very well due to the low-quality of self-labeled alignments.",
"Thus, we recommend continue-training XLM-ALIGN on the basis of other pretrained cross-lingual language models.",
"For future work, we would like to research on removing this restriction so that the model can learn word alignments from scratch.",
"Despite the current advances in NLP, most NLP research works and applications are English-centric, making none-English users hard to access to NLP-related services.",
"Our method aims to pretrain cross-lingual language models that transfer supervision signals from high-resource languages to low-resource languages, which makes the NLP services and applications more accessible for low-resource-language speakers.",
"Furthermore, our method can build multilingual models that serve on different languages at the same time, reducing the computational resources for building multilingual models separately for each language.",
"Heyan Huang is the corresponding author.",
"The work is supported by National Key R&D Plan (No. 2018YFB1005100), National Natural Science Foundation of China (No. 61751201, 61602197 and 61772076), Natural Science Fund of Beijing (No. Z181100008918002), and the funds of Beijing Advanced Innovation Center for Language Resources (No. TYZ19005)."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"result",
"result",
"objective",
"method",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"result",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"other"
] |
[
"Most studies on abstractive summarization report ROUGE scores between system and reference summaries.",
"However, we have a concern about the truthfulness of generated summaries: whether all facts of a generated summary are mentioned in the source text.",
"This paper explores improving the truthfulness in headline generation on two popular datasets.",
"Analyzing headlines generated by the state-of-the-art encoder-decoder model, we show that the model sometimes generates untruthful headlines.",
"We conjecture that one of the reasons lies in untruthful supervision data used for training the model.",
"In order to quantify the truthfulness of article-headline pairs, we consider the textual entailment of whether an article entails its headline.",
"After confirming quite a few untruthful instances in the datasets, this study hypothesizes that removing untruthful instances from the supervision data may remedy the problem of the untruthful behaviors of the model.",
"Building a binary classifier that predicts an entailment relation between an article and its headline, we filter out untruthful instances from the supervision data.",
"Experimental results demonstrate that the headline generation model trained on filtered supervision data shows no clear difference in ROUGE scores but remarkable improvements in automatic and manual evaluations of the generated headlines.",
"Automatic text summarization aims at condensing a text into a shorter version while maintaining the essential information (Mani, 2001).",
"Methods on summarization are broadly categorized into two approaches: extractive and abstractive .",
"The former extracts important words, phrases, or sentences from a source text to compile a summary (Gold-stein et al., 2000; Erkan and Radev, 2004; Mihalcea, 2004; Lin and Bilmes, 2011).",
"In contrast, the latter involves more complex linguistic operations (e.g., abstraction, paraphrasing, and compression) to generate a new text (Knight and Marcu, 2000; Clarke and Lapata, 2008).",
"Until 2014, abstractive summarization had been less popular than extractive one because of the difficulty of generating a natural text.",
"However, research on abstractive summarization has attracted a lot of attentions recently with the advances on encoder-decoder models (Rush et al., 2015; Takase et al., 2016; Zhou et al., 2017; Cao et al., 2018a; Song et al., 2019; Wang et al., 2019).",
"English Gigaword (Graff and Cieri, 2003; Napoles et al., 2012) is a representative dataset for abstractive summarization.",
"Rush et al. (2015) regarded Gigaword as a corpus containing a large number of article-headline pairs for training an encoder-decoder model.",
"Their work assumed a task setting where the first sentence of an article is a source text and its corresponding headline is a target text (summary).",
"Since then, it has been a common practice to use the Gigaword dataset with this task setting and to measure the quality of generated headlines with ROUGE scores (Lin and Hovy, 2003) between system-generated and reference headlines.",
"Apparently, a summarization method is desirable to achieve a ROUGE score of 100 , i.e., a system output is identical to the reference.",
"However, this is an unrealistic goal for the task setting on the Gigaword dataset.",
"The summarization task is underconstrained in that the importance of a piece of information highly depends on the expectations and prior knowledge of a reader (Kryscinski et al., 2019).",
"In addition, the Gigaword dataset (as well as other widely-used datasets) was noisy for summarization research because it was not created for the research objective but other professional activities (e.g., news production and distribution).",
"Thus, the state-of-the-art method could only reach ROUGE-1 scores less than 40 on the dataset.",
"While a number of methods compete with each other for the underconstrained task on the noisy data, we have another concern about the truthfulness of generated summaries: whether all facts of a generated summary are mentioned in the source text.",
"Unlike extractive summarization, abstractive summarization has no guarantee of truthfulness.",
"This may result in a serious concern of practical applications of abstractive summarization when a generated summary includes fake facts that are not mentioned in the source document.",
"In this paper, we explore improving the truthfulness in abstractive summarization on two datasets, English Gigaword and JApanese MUlti-Length Headline Corpus (JAMUL) (Hitomi et al., 2019).",
"In Section 2, we analyze headlines generated by the state-of-the-art encoder-decoder model and show that the model sometimes generates unexpected words.",
"In order to estimate the truthfulness to the original text, we measure the recall-oriented ROUGE-1 scores between the source text and the generated headlines.",
"This analysis reveals that a high ROUGE score between a reference and headline does not necessarily mean a high truthfulness to the source and that there is only a weak correlation between the two.",
"In Section 3, we conjecture that one of the reasons why the model sometimes exhibits such an untruthful behavior lies in untruthful article-headline pairs, which are used for training the model.",
"In order to quantify the truthfulness of article-headline pairs, we consider the textual entailment of whether an article (source document) entails its headline.",
"We will show that about 3040% of source documents do not entail their headlines under the widely-used experimental settings.",
"In other words, the current task setting is inappropriate for abstractive summarization.",
"We release the annotations of textual entailment for both English Gigaword and JAMUL 1 .",
"After confirming the untruthfulness of article-headline pairs in the datasets, we hypothesize that removing untruthful instances from the training data may remedy the problem of the untruthful behavior of the model.",
"In Section 4, we build a binary classifier that predicts an entailment relation between an article and its headline and use the classifier to filter out untruthful instances in the training data.",
"We train a model on the filtered supervision 1 https://github.com/nlp-titech/ headline-entailment data in Section 5.",
"Experimental results demonstrate that the filtering procedure shows no clear difference in ROUGE scores but remarkable improvements when we manually and automatically evaluate the truthfulness of the generated headlines.",
"These results suggest the importance of evaluating truthfulness in addition to relevance.",
"Although the current state-of-the-art method for abstractive summarization could only achieve a ROUGE-1 score of less than 40 on the Gigaword dataset, generated headlines actually look very flu-ent.",
"This is probably because the encoder-decoder model acquired a strong language model from the vast amount of supervision data.",
"However, some studies reported that the generated headlines often deviate from the content of the original document (Cao et al., 2018b; Kryscinski et al., 2019).",
"They addressed the problem where an abstractive model made mistakes in facts (e.g., tuples of subjects, predicates, and objects).",
"However, we also regularly see examples where the abstractive model generates unexpected words.",
"This is true even for the state-of-the-art model.",
"Table 1 shows examples of unexpected outputs from UniLM (Dong et al., 2019), which shows the highest ROUGE scores 2 on English Gigaword.",
"In the first example, the output includes in Novem-ber whereas the input did not mention the exact month.",
"In fact, this article was published in August 2009; however, the model probably guessed the month from the expression this fall.",
"The second example also exhibits a similar problem where the model incorrectly supplemented the news source the Detroit News.",
"The third and fourth examples are more problematic in that the generated headlines do not summarize the input sentences at all.",
"In order to quantify the problem of outputs that are untruthful to source documents, we measure the word overlap between the input and output of the UniLM model on the test set of English Gigaword (Rush et al., 2015).",
"Here, we calculate the recall-oriented ROUGE-1 score 3 , regarding an out-2 UniLM model fine-tuned on Gigaword dataset achieved 38 .",
"We used SumEval: https://github.com/chakki-works/sumeval",
"put (generated headline) as a gold standard and an input (source document) as a target to be evaluated 4 .",
"Although this use of the ROUGE metric is unconventional, the intention here is to measure how many words in a generated headline originate from the input document.",
"In other words, if all words in a generated headline are covered by its source document (truthful), the score is 100 ; if none of the words in a generated headline originate from its source document (untruthful), the score is 0 .",
"We call this ROUGE score support score hereafter to avoid naming conflicts with conventional ROUGE scores between system and reference summaries.",
"We mention that we can find a similar method to the support score in several studies; for example, Zhang et al. (2018) measured the abstractiveness of an output.",
"Our support score is roughly a reverse 4 We ignore instances whose source documents are less than ten characters long.",
"The total number of instances after this treatment is 1,936.",
"version of abstractiveness because the abstractiveness measures the number of words in an output that do not appear in the input.",
"Figure 1 reports the histogram of the support scores.",
"A certain amount of instances receive relatively high support scores: 50.10% of the instances obtain scores larger than 80 .",
"At the same time, a non-negligible amount (9.14%) of instances have support scores less than 40 .",
"Note that the support scores present rough estimations of the truthfulness of the model; a lower score may imply that a headline includes paraphrased or shortened words from its source document.",
"Having said that, Figure 1 indicates that the state-of-the-art model sometimes generates untruthful headlines.",
"Here, another interesting question comes into our mind: how do the widely-used benchmarking performance values (measured by ROUGE scores between system and reference headlines) reflect the truthfulness (measured by the support scores)?",
"Figure 2 depicts the correlation between the two: the X-axis presents the ROUGE-1 score between system and reference headlines, and Y-axis presents support score.",
"Unfortunately, we cannot observe a strong correlation between the two scores: Pear-son's correlation coefficient between the two scores is 0.189, which suggests no correlation.",
"This result supports that the conventional ROUGE scores tell us little about the truthfulness of generated summaries.",
"Why does a headline generation model exhibit untruthful behavior as we saw in the previous section?",
"Before discussing the reason behind this, we need to understand how the datasets and task settings were established.",
"The Annotated English Gigaword corpus 5 is one of the most popular corpora in abstractive summarization research.",
"Rush et al. (2015) converted this corpus into a dataset for abstractive summarization.",
"They assumed the lead (first) sentence of an article as a source document and its corresponding headline as a target output.",
"They did not explain the reason why they did not use a full-length article but only a lead sentence as a source document for headline generation.",
"We infer that the reason for this treatment is that: a lead sentence provides a strong baseline for extractive summarization; their intention was to explore the capability of abstractive summarization from a lead sentence to a headline; using full text was time-consuming for encoder-decoder models.",
"Moreover, Rush et al. (2015) introduced some heuristics to remove some noisy instances.",
"They discarded an instance if: (1) the source and target documents have no non-stop word in common; (2) the headline contains a byline or other extraneous editing marks; and (3) a headline includes a question mark or colon.",
"JApanese MUlti-Length Headline Corpus (JA-MUL) 6 is a dataset specially designed for evaluating summarization methods.",
"JAMUL consists of 1,524 Japanese full-text articles and their print headlines (used for newspapers).",
"Although JAMUL 5 https://catalog.ldc.upenn.edu/ LDC2012T21 6 https://cl.asahi.com/api_data/ jnc-jamul-en.html is distributed for free of charge, JAMUL alone is insufficient for training an encoder-decoder model.",
"Hitomi et al. (2019) also released Japanese News Corpus (JNC), which is a large-scale dataset consisting of 1,831,812 pairs of newspaper articles and their print headlines.",
"JNC includes only the first three sentences of each article 7 .",
"Table 2 summarizes the datasets and task settings.",
"As we can see from the rows of Rush et al. (2015) and JNC, these task settings do not use full-text articles but only lead (6.6% of words in full articles, Gigaword) and lead three sentences (25.9% of words in full articles, JNC) as source documents for abstractive summarization.",
"Hence, we hypothesize that the source documents under these task settings contain insufficient information for generating headlines.",
"In other words, headline generation models might be faced with supervision data where headlines cannot be generated from source documents and learned to be untruthful, i.e., producing pieces of information that are not mentioned in source documents.",
"This section explores the hypothesis: do source documents include sufficient information to produce headlines?",
"We examine this hypothesis by considering textual entailment between a source document and its headline.",
"More specifically, we would like to know whether a source document entails its headline, i.e., whether we can infer that a headline is true based on the information in the source document.",
"We asked three human subjects to judge entailment relations for 1,000 pairs of source documents and headlines of each dataset.",
"We randomly selected 1,000 pairs from the test set of the English Gigaword dataset and 1,000 pairs from JAMUL.",
"The labels include entail , non-entail , and other (see Appendix for the definition of the labels and the treatment).",
"Table 4 reports the ratio of document-headline pairs for which two or three human subjects voted yes' for the entailment relation ( entail ).",
"Only 70.3% of lead-headline pairs in the Gigaword dataset hold the entailment relation.",
"For reference, we did the same analysis by using full-text articles as source documents and found that the ratio 7 This is because the price of the dataset would be much higher if it included full-text articles.",
"rises to 92.8%.",
"Similarly, only 61.4% of lead three sentences (lead-3) and headline pairs in JAMUL hold the entailment relation.",
"When using full-text articles, the entailment ratio rises to 94.2%.",
"These results support our hypothesis that source documents contain insufficient information under the current task settings.",
"Based on the analysis in the previous section, we can consider two strategies to improve the task setting: using full-text articles as source documents instead of leading sentences; and removing non-entailment instances from the dataset.",
"Although the former strategy reduces the ratio of non-entailment pair to 7.2% (English Gigaword) and 5.8% (JA-MUL), we must consider the trade-off: the use of full-text articles increases the cost for training, and may decrease the quality of headlines because of longer inputs to encoder-decoder models.",
"Furthermore, JNC does not provide full-text articles but only lead three sentences.",
"Therefore, we take the latter strategy, removing non-entailment pairs from the supervision data for headline generation.",
"In order to find non-entailment pairs in the dataset, we build a binary classifier that judges whether a source document entails its headline or not.",
"Recently, pretrained language models such as BERT (Devlin et al., 2019) show remarkable advances in the task of recognizing textual entailment (RTE) 8 .",
"Thus, we fine-tune pretrained models on the supervision data for entailment relation between source documents and their headlines.",
"For English Gigaword dataset, we use the pretrained RoBERTa large (Liu et al., 2019) fine-tuned on Multi-Genre Natural Language Inference (MultiNLI) (Williams et al., 2018).",
"We further fine-8 https://gluebenchmark.com/leaderboard tuned the model on the supervision data of the lead-headline pairs with entailment labels (acquired in Section 3).",
"Here, the supervision data include lead-headline pairs where two or three human subjects labeled either entail or non-entail ; other pairs were excluded from the supervision data.",
"In this way, we obtained a binary classifier for entailment relation of 91.7% accuracy on a hold-out evaluation (761 training and 179 test instances) after running 10 epoch of fine-tuning on the RoBERTa model.",
"For JNC, we use the pretrained BERT model for Japanese text (Kikuta, 2019).",
"However, no large-scale Japanese corpus for semantic inference (counterpart to MultiNLI) is available.",
"Thus, we created supervision data for entailment relation between lead three sentences and headlines ( lead3-headline , hereafter) on JNC.",
"We extracted 12,000 lead3-headline pairs from JNC, and collected entailment labels using crowdsourcing.",
"Each pair had five entailment labels assigned by five crowd workers.",
"We used lead3-headline pairs where four or five crowd workers labeled either entail or non-entail ; other pairs were unused in the supervision data.",
"The entailment classifier fine-tuned on the supervision data achieved 83.9% accuracy on a hold-out evaluation with 5,033 training and 1,678 test instances.",
"Applying the entailment classifiers to the training and development sets of English Gigaword dataset and JNC, we removed instances of non-entailment pairs judged by the classifiers.",
"Eventually, we obtained 2,695,325 instances (71% of the original training instances) on the English Gigaword dataset and 841,640 instances (49% of the original training instances) on JNC.",
"In this section, we examine whether the supervision data built in the previous section reduces untruthful headlines.",
"We use fairseq 9 (Ott et al., 2019) as an implementation of the Transformer architecture (Vaswani et al., 2017) throughout the experiments.",
"Hyper-parameter configurations are: 6 layers both in the encoder and decoder; 8 attention heads; the dimension of hidden states is 512; the dimension of hidden states of the feed forward network is 2048; the smoothing rate, dropout rate, and label smoothing 9 https://github.com/pytorch/fairseq were set to 0.1; Adam optimizer with = 0 .",
"98 , the learning rate of 0.0005, and 4,000 warm-up steps.",
"We train the Transformer models on the supervision data with and without non-entailment instances.",
"Because removing non-entailment instances decreases the number of training instances, we also apply the self-training strategy (Murao et al., 2019) to obtain the same amount of training instances to the full supervision data.",
"More specifically, we generated headlines for the source documents discarded in Section 4.1, and added pairs of source documents and generated headlines as pseudo supervision data.",
"The experiments compare models trained on the full supervision data ( full ), the one filtered by the entailment classifier ( filtered ), and the one filtered but augmented by the self-training ( filtered+pseudo ).",
"The experiments use the same data split of training (3.8M instances), development (390k instances), and test (380k instances) sets to Rush et al. (2015).",
"In this study, we used 10,000 instances for evaluation that were sampled from the test set and unused in the analysis in Section 3.",
"We do not apply any replace operations for the English Gigaword dataset: digit masking, rare word to UNK, and lower-casing.",
"The dataset is tokenized by WordPiece (Wu et al., 2016) with the same vocabulary used in UniLM.",
"Splitting JNC into 1.7M training and 3k development instances, we evaluate the model on the JAMUL dataset.",
"We use SentencePiece 10 (Kudo and Richardson, 2018) for tokenization.",
"We evaluate the quality of generated headlines by using full-length F1 ROUGE scores 11 , following the previous work.",
"However, Kryscinski et al. (2019) reported that ROUGE scores between system and reference summaries had only a weak correlation with human judgments.",
"Furthermore, we would like to confirm whether the filtering strategy can improve the truthfulness of the model.",
"Therefore, we also report the support score, the ratio of entailment relation between source documents and generated headlines measured by the entailment classifiers (explained in Section 4.1), and human evaluation about the truthfulness.",
"10 https://github.com/google/ sentencepiece 11 ROUGE scores were computed by SumEval.We used MeCab (Kudo et al., 2004) for Japanese tokenization.",
"Table 5 shows the main results.",
"The baseline model with full training data obtained 35.80 ROUGE-1 score on the English Gigaword dataset and 48.08 ROUGE-1 score on JAMUL.",
"The entailment filter lowered ROUGE scores on both of the datasets probably because of the smaller number of training instances, but the self-training strategy improved ROUGE scores on the Gigaword dataset, outperforming the baseline model.",
"In contrast, the self-training strategy could not show an improvement for ROUGE scores on JAMUL.",
"Although it is difficult to find the exact cause of this result, we suspect that the filtering step reduced the training instances too much (0.8M instances) for the self-training method to be effective.",
"Another possibility is that the writing style of articles of non-entailment pairs in JNC/JAMUL is so distant that the self-training method generated headlines that are too different from reference ones.",
"The column Sup presents the support score computed by the recall-oriented ROUGE-1 between source documents and generated headlines (explained in Section 2.2).",
"The table indicates that the filtering and self-training strategies obtain higher support scores than the baseline.",
"Figures 3 and 4 depict histograms of the support scores for the baseline and filtering+pseudo settings on Gigaword and JAMUL, respectively.",
"We could confirm that the filtering+pseudo strategy increased the number of headlines with high support scores.",
"The column Entail shows the entailment ratio measured by the entailment classifier.",
"Again, the filtering+pseudo strategy obtained the highest entailment ratio on both the Gigaword dataset and JAMUL.",
"as natural because we selected training instances based on the same entailment classifier, it is interesting to see that we can control the entailment ratio without changing the model.",
"In order to examine whether the filtering strategy can deliver noticeable improvements for human readers, we asked a human subject to judge the truthfulness of the headlines generated by the baseline setting and filtering+pseudo strategy.",
"Presented with both a source document and a headline generated by the model, the human subject judged whether the headline was truthful , untruthful , or incomprehensible .",
"We conduct this evaluation for 109 instances randomly sampled from the test sets of Gigaword and JAMUL.",
"The Truthful column in Table 5 reports the ratio of truthful headlines.",
"Consistently with the entailment ratio, we could confirm that the fil-tering+pseudo strategy generated truthful headlines more than the baseline setting on both of the datasets.",
"During the human evaluation, one instance in both full and filtered+pseudo settings from the Gigaword dataset judged as incomprehensible.",
"To sum up the results, improving the truthfulness of the supervision data does help improving the truthfulness of generated headlines.",
"We could confirm the improvements from the support scores, entailment ratio, and human judgments.",
"However, the ROUGE scores between system and reference headlines did not indicate a clear difference.",
"The ROUGE metric was proposed to measure the relevance of a summary when extractive summarization was the central approach (in the early 2000s).",
"Obviously, the truthfulness of summaries Dataset Training data (amount) R-1 R-2 R-L Sup Entail Truthful Full (3.8 M) 35.80 17.63 33.69 75.38 85.78% 77.06% Gigaword Filtered (2.7 M) 35.24 17.29 33.14 77.61 91.50% Filtered+pseudo (3.8 M) 35.85 17.94 33.72 79.91 93.56% 85.32% Full (1.7 M) 48.08 22.21 40.02 89.10 90.29% 89.91% JAMUL Filtered (0.8 M) 46.08 20.81 38.07 90.14 95.67% Filtered+pseudo (1.7 M) 45.62 20.55 38.10 90.65 96.26% 92.66% Table 5: Results on the test set.",
"is out of the scope of ROUGE.",
"The experimental results in this paper suggest that we should consider both relevance and truthfulness when evaluating the quality of abstractive summarization.",
"Rush et al. (2015) first applied the neural sequence-to-sequence (seq2seq) architecture (Sutskever et al., 2014; Bahdanau et al., 2015) to abstractive summarization.",
"They obtained a dataset for abstractive summarization from the English Gigaword (Graff and Cieri, 2003; Napoles et al., 2012).",
"After this work, a large number of studies followed the task setting (Takase et al., 2016; Zhou et al., 2017; Cao et al., 2018a; Song et al., 2019; Wang et al., 2019).",
"Some researchers pointed out that abstractive summarization models based on seq2seq sometimes generate summaries with inaccurate facts.",
"Cao et al. (2018b) reported that 30% of the summaries generated by a seq2seq model include different facts from source articles.",
"In addition, Kryscinski et al. (2019) reported that ROUGE scores have only a weak correlation with human judgments in abstractive summarization and that the current evaluation protocol is inappropriate for factual consistency.",
"Several studies approach the problem of inconsistency between input and output by improving the model architecture or learning method.",
"Cao et al. (2018b) applied an information extraction tool to extract tuples of subject, predicate, and object from source documents and utilized them as an additional input to the model.",
"Pasunuru and Bansal (2018) incorporated an entailment classifier as a reward in reinforcement learning.",
"Guo et al. (2018) presented a multi-task learning method between summarization and entailment generation where hypotheses entailed by a given document (as a premise) are generated.",
"Li et al. (2018) introduced an entailment-aware encoder-decoder model to ensure the correctness of the summary.",
"Kiy-ono et al. (2018) reduced incorrect generations by modeling token-wise correspondences between input and output.",
"Falke et al. (2019) proposed a re-ranking method of beam search based on factual correctness from a classifier of textual entailment.",
"As another direction, Kryscinski et al. (2019) evaluated the factual consistency of a source document and the generated summary with a weakly-supervised model.",
"A few studies raised concerns about the data set and task setting.",
"Tan et al. (2017) argued that lead sentences do not provide an adequate source for the headline generation task.",
"The researchers reported that making use of multiple summaries as well as the lead sentence of an articles improved the performance of headline generation on the New York Times corpus.",
"In contrast, our paper is the first to analyze the truthfulness of existing datasets and generated headlines, provide a remedy to the supervision data, and demonstrate the importance of truthfulness in headline generation.",
"In this paper, we showed that the current headline generation model yields unexpected words.",
"We conjectured that one of the reasons lies in the defect in the task setting and data set, where generating a headline from the source document is impossible because of the insufficiency of the source information.",
"We presented an approach for removing from the supervision data headlines that are not entailed by their source documents.",
"Experimental results demonstrated that the headline generation model trained on filtered supervision data showed no clear difference in ROUGE scores but remarkable improvements in automatic and manual evaluations of the truthfulness of the generated headlines.",
"We also presented the importance of evaluating truthfulness in abstractive summarization.",
"In the future, we explore a more sophisticated method to improve the relevance and truthfulness of generated headlines, for example, removing only deviated spans in untruthful headlines rather than removing untruthful headlines entirely from the supervision data.",
"Other directions include an extensive evaluation of relevance and truthfulness of abstractive summarization and an establishment of an automatic evaluation metric for truthfulness.",
"Moreover, it will be also interesting to see whether the same issue occurs in other related tasks such as data-to-text generation.",
"We believe that the concern raised in this paper is beneficial to other tasks.",
"The research results have been achieved by Re-search and Development of Deep Learning Technology for Advanced Multilingual Speech Trans-lation, the Commissioned Research of National Institute of Information and Communications Technology (NICT), Japan."
] | [
"abstain",
"method",
"objective",
"result",
"method",
"method",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"objective",
"method",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"result",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"result",
"method",
"method",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"method",
"other"
] |
[
"Sequence-to-sequence models for open-domain dialogue generation tend to favor generic, uninformative responses.",
"Past work has focused on word frequency-based approaches to improving specificity, such as penalizing responses with only common words.",
"In this work, we examine whether specificity is solely a frequency-related notion and find that more linguistically-driven specificity measures are better suited to improving response informativeness.",
"However, we find that forcing a sequence-to-sequence model to be more specific can expose a host of other problems in the responses, including flawed discourse and implausible semantics.",
"We rerank our model's outputs using externally-trained classifiers targeting each of these identified factors.",
"Experiments show that our final model using linguistically motivated specificity and plausibility reranking improves the informativeness, reasonableness, and grammatically of responses.",
"Since the pioneering work in machine translation (Sutskever et al., 2014), sequence-to-sequence ( SEQ 2 SEQ ) models have led much recent progress in open-domain dialogue generation, especially single-turn generation where the input is a prompt and the output is a response.",
"However, SEQ 2 SEQ methods are known to favor universal responses, e.g., I don't know what you are talking about (Sordoni et al., 2015; Serban et al., 2016; Li et al., 2016a).",
"These responses tend to be safe responses to many input queries, yet they usually fail to provide useful information.",
"One promising line of research tackling this issue is to improve the specificity of responses, building on the intuition that generic responses frequently appear in the training data or consist of frequent words (Yao et al., 2016; Zhang et al., 2018b; Liu et al., 2018).",
"However, past work in sentence specificitythe quality of belonging or relating uniquely to a particular subject 1 has shown that word frequency is only one aspect of specificity, and that specificity involves a wide range of phenomena including word usage, sentence structure (Louis and Nenkova, 2011; Li and Nenkova, 2015; Lugini and Litman, 2017) and discourse context (Dixon, 1987; Lassonde and O'Brien, 2009).",
"Frequency-based specificity also does not exactly capture the amount of in-formation as an information-theoretic concept.",
"Hence, in dialogue generation, we can potentially make progress by incorporating more linguistically driven measures of specificity, as opposed to relying solely on frequency.",
"We present a sequence-to-sequence dialogue model that factors out specificity and explicitly conditions on it when generating a response.",
"The decoder takes as input categorized values of several specificity metrics, embeds them, and uses them at each stage of decoding.",
"During training, the model can learn to associate different specificity levels with different types of responses.",
"At test time, we set the specificity level to its maximum value to force specific responses, which we found to be most beneficial.",
"We integrate linguistic (Ko et al., 2019), information-theoretic, and frequency-based specificity metrics to better understand their roles in guiding response generation.",
"The second component of our model is designed to make the more specific responses more semantically plausible .",
"In particular, we found that forcing a SEQ 2 SEQ model to be more specific exposes problems with plausibility as illustrated in Table 1.",
"As sentences become more specific and contain more information, intra-response consistency 1 Definition from the Oxford Dictionary Conflicting i understand.",
"problems become evident, making the overall response implausible or unreasonable in real life.",
"Our inspection discovered that 30% of specific responses suffer from a range of problems from semantic incompatibility to flawed discourse.",
"To improve the plausibility of responses, we propose a reranking method based on four external classifiers, each targeting a separate aspect of linguistic plausibility.",
"These classifiers are learned on synthetically generated examples, and at test time their responses are used to rerank proposed responses and mitigate the targeted issues.",
"Using both automatic and human evaluation, we find that linguistic-based specificity is more suitable than frequency-based specificity for generating informative and topically relevant responses, and learning from different types of specificity metrics leads to further improvement.",
"Our plausibility reranking method not only successfully improved the semantic plausibility of responses, but also improved their informativeness, relevance, and grammaticality.",
"Our system is available at https://git.",
"io/fjkDd .",
"Generic responses is a recognized problem in dialogue generation.",
"Li et al. (2016a) maximized mutual information in decoding or reranking, which practically looks like penalizing responses that are common under a language model.",
"Zhou et al. (2017) promoted diversity by training latent embeddings to represent different response mechanisms.",
"Shao et al. (2017) trained and reranked responses segment by segment with a glimpse model to inject diversity.",
"Another angle is to promote prompt-response coherence using techniques such as LDA (Baheti et al., 2018; Xing et al., 2017).",
"Cosine similarity between prompt and response has also been used for coherence (Xu et al., 2018b; Baheti et al., 2018).",
"Wu et al. (2018) learn a small vocabulary of words that may be relevant during decoding and generates responses with this vocabulary.",
"Several works tackle the problem by directly controlling response specificity in terms of word and response frequency.",
"IDF and response frequency have been used as rewards in reinforcement learning (Yao et al., 2016; Li et al., 2016d).",
"Some methods adjusted sample weights in the training data, using a dual encoding model (Li-son and Bibauw, 2017) or sentence length and frequency in the corpus (Liu et al., 2018).",
"Zhang et al. (2018b) proposed a Gaussian mixture model using frequency-based specificity values.",
"Their approach involves ensembling the context probability and a specificity probability, whereas our approach conditions on both in a single model.",
"Prediction of sentence specificity following the dictionary definition and pragmatically cast as level of detail was first proposed by Louis and Nenkova (2011), who related specificity to discourse relations.",
"Sentence specificity predictors have since been developed (Louis and Nenkova, 2011; Li and Nenkova, 2015; Lugini and Litman, 2017; Ko et al., 2019).",
"Insights from these feature-rich systems and hand-code analysis (Li et al., 2016e) showed that sentence specificity encompasses multiple phenomena, including referring expressions, concreteness of concepts, gradable adjectives, subjectivity and syntactic structure.",
"Researchers have noticed that distributional semantics largely fail to capture semantic plausibility , especially in terms of discrete properties (e.g., negation) (Kruszewski et al., 2016) and physical properties (Wang et al., 2018).",
"Kruszewski et al. (2016) created a dataset building on synthetically generated sentences for negation plausibility.",
"Methodology-wise , Li et al. (2016b) trained embeddings for different speakers jointly with the dialogue context.",
"Huang et al. (2018) learned embeddings of emotions; we learn embeddings of specificity metrics.",
"Targeting multiple factors this way is broadly similar to the approach of Holtz-man et al. (2018), who used multiple cooperative discriminators to model repetition, entailment, relevance, and lexical style in generation.",
"Our approach additionally leverages synthetic synthetic sentences targeting a range of plausibility issues and trains discriminators for reranking.",
"Our main framework (Figure",
"1) is an attention-based SEQ 2 SEQ model (Section 3.1) augmented with the ability to jointly learn embeddings from a target metric (e.g., specificity) with the response (Section 3.2).",
"We then integrate frequency-based, information-theoretic and linguistic notions of specificity (Section 3.3) as well as coherence (Section 3.4).",
"Our model is based on a SEQ 2 SEQ model (Sutskever et al., 2014) consisting of an encoder and decoder, both of which are LSTMs (Hochre-iter and Schmidhuber, 1997).",
"We apply attention (Bahdanau et al., 2015) on the decoder.",
"The encoder LSTM takes word embeddings x i in the prompt sentence as input.",
"The hidden layer and cell state of the decoder are initialized with the final encoder states.",
"During training, the decoder takes the embedding of the previous word in the gold response as input; during testing, it uses the previous generated word.",
"We denote both as y i 1 : h di , c di = LST M ( y i 1 , [ h di ; c di 1 ]) (1) where h di is the output of the attention mechanism, given the decoder hidden state.",
"During training, we minimize the negative log likelihood of responses Y given prompts X .",
"In the base model, uninformative responses are preferred partially because these are common in the training data.",
"We want to be able to fit the training data while at the same time recognizing that we do not want to generate such responses at test time.",
"Our approach, shown in Figure 1, involves conditioning on an explicit specificity level during both training and test time.",
"This explicit conditioning allows us to model specificity orthogonally to response content, so we can control it at test time.",
"We represent specificity as a collection of real valued metrics that can be estimated for each sentence independently of the dialogue system.",
"To direct the model to generate more specific responses from multiple specificity metrics, Figure 1: Structure of our model.",
"In particular, for each metric m , we rank the responses in the training data according to that metric and divide it into K = 5 levels of equal size.",
"For each level, we learn an embedding e mk , k { 1 , 2 , ...K } .",
"During training, for each sentence pair in the training set, the response is clas-sified to level l m for metric m .",
"We take the sum of embeddings across all metrics e = (cid:80) Nm =1 e ml m and feed it into the decoder at every time step, where N is the number of metrics.",
"The decoder becomes h di , c di = LST M ( y i 1 , [ h di ; c di 1 ; e ]) (2) During testing, we specify a level for each metric and calculate e based on those levels.",
"In practice, the level of specificity varies with the larger context of dialogue discourse, however for the purpose of avoiding generic responses and improving specificity in single-turn dialogue generation, and examining various metrics of specificity, we use the level that maximizes specificity at test time (which we show in Section 5.3 is better the uninformative median level).",
"2 2 For the purposes of this work, we want an agent that is highly specific and keeps the conversation going.",
"Learning the ideal specificity for a given response is something we leave for future work.",
"Normalized inverse word frequency (NIWF) Used in Zhang et al. (2018b), NIWF is the maximum of the Inverse Word Frequency (IWF) of all the words in a response, normalized to 0-1:",
"where f w denotes the number of responses in the corpus that contain the word w , and | Y | is the number of responses in the corpus.",
"Taking a maximum reflects the assumption that a response is specific as long as it has at least some infrequent word.",
"Perplexity per word (PPW) Perplexity is the exponentiation of the entropy, which estimates the expected number of bits required to encode the sentence (Brown et al., 1992; Goodman, 2001).",
"Thus perplexity is a direct measure of the amount of information in the sentence in information theory; it has also been used as a measure of linguistic complexity (Gorin et al., 2000).",
"To compute perplexity, we train a neural language model (Mikolov et al., 2011) on all gold responses and calculate cross-entropy of each sentence.",
"To represent the amount of information per-token and to prevent the model to simply generate long sentences, we normalize perplexity by sentence length.",
"Linguistically-informed specificity We use the system developed by Ko et al. (2019), which estimates specificity as a real value.",
"This system adopts a pragmatic notion of specificitylevel of details in textthat is originally derived using sentence pairs connected via the INSTANTIATION discourse relation (Louis and Nenkova, 2011).",
"With this relation, one sentence explains in further detail of the content in the other; the explanatory sentence is shown to demonstrate properties of specificity towards particular concepts, entities and objects, while the other sentence is much more general (Li and Nenkova, 2016).",
"We use this particular system since other specificity predictors are trained on news with binary specificity labels (Li and Nenkova, 2015).",
"Ko et al. (2019) is an unsupervised domain adaptation system that predicts continuous specificity values, and was evaluated to be close to human judgments across several domains.",
"We retrain their system using the gold responses in our data as unlabeled sentences in the unsupervised domain adaptation component.",
"Prior work has shown that the universal response problem can be mitigated by improving the coherence between prompt and response (Zhang et al., 2018a; Xu et al., 2018b; Baheti et al., 2018).",
"We introduce two methods to improve coherence upon the base model, and analyze specificity on top.",
"For better interactions between decoder embeddings and the prompt, we feed the final encoder state into every time step of the decoder, instead of only the first token.",
"Thus the decoder becomes h di , c di = LST M ( y i 1 , [ h di ; c di 1 ; e ; h f ]) .",
"(4) Furthermore, Zhang et al. (2018a) showed that responses ranked higher by humans are more similar to the prompt sentence vector.",
"Thus we compute the cosine similarity between input and response representations.",
"This is computed by the weighted average of all word embeddings in the sentence, where the weight of each word is its inverse document frequency.",
"Our model additionally conditions on an embedding of this measure so that coherence is factored out in our model as well as specificity.",
"During testing, we condition on the highest level of our similarity metric in order to generate maximally coherent responses (Xu et al., 2018b).",
"While injecting specificity encourages the model to generate more specific responses, we discovered that it exposes a series of issues that together, severely impact the semantic plausibility of generated responses.",
"This is the case even when responses are considered independently without the prompt context.",
"To have a better understanding of the problem, we first present manual analysis on generated responses with improved specificity.",
"We then present a reranking method to improve the semantic plausibility of responses.",
"We manually inspected 200 responses generated from our full model on the PersonaChat dataset (Zhang et al., 2018c).",
"We evaluated the responses independent of the input prompt and found that 33% of the sentences are semantically implausible; some of them shown in Table 1.",
"We found three major types of errors.",
"The most common type is a wrong word that is not compatible with the context, making the phrase unreasonable ( cool . i work at a non profit organization that sells the holocaust ), meaningless ( i like to dance battles ), or unnatural ( yeah , but i am more of a game worm . i am a pro ball player) .",
"These make up about 45% of the implausible cases.",
"About 30% of the problematic sentences contain incompatible phrases.",
"Different phrases in the response are contradictory ( i understand. i am not sure if i can afford a babysitter, i am a millionaire ) or repetitive ( my favorite food is italian, but i also love italian food, especially italian food. ).",
"The third problem ( 15%) is that phrases are connected by a wrong discourse connective ( i am an animal phobic, but i do not like animals ).",
"This and the previous problem reveal that even when the model generates sensible phrases, proper discourse relations between them are not captured.",
"Other notable errors include cohesion, such as wrong determiners or pronouns ( my mom was a social worker, he was an osteopath. ) and inappropriate prepositional phrases ( hello , i am winding down to the morning . ) This semantic implausibility may come from two sources.",
"First, since specific responses tend to be longer, it is easier to have internal consistency issues where parts of the sentence are incompatible with each other.",
"Second, regardless of the specificity metric, word frequency in specific responses tend to be lower than that in generic responses.",
"Learning meaningful representations for infrequent words is a known challenge (Gong et al., 2018) hence low-quality representations may increase the probability of the sentence being implausible.",
"To mitigate semantic plausibility issues, we propose a reranking method so that more plausible sentences are ranked higher among the candidates.",
"We use classifiers targeting various types of errors using synthetically generated data.",
"Specifi-cally, we train four classifiers that distinguish true response sentences from the dataset and negative sentences we create that reflect a specific type of semantic implausibility: Phrase compatibility : We split all the training data into phrases by splitting sentences on punctuation or discourse connectives.",
"To create a negative sentence given a gold response, we pick a random phrase in the true response and replace it with a random phrase in another random true response.",
"Content word plausibility : We replace a ran-Figure 2: Reranking models to encourage plausibility.",
"Four types of errors are synthetically applied to the data and classifiers are trained to differentiate each transformed sentence from the original.",
"The mean score under these classifiers is then used as a feature to rerank system outputs.",
"domly selected content word (noun, verb, adjective, adverb) in the gold response with another random word with the same part-of-speech in the training set.",
"Discourse connectives : We replace a discourse connective in the gold response (if one exists) with a random connective.",
"Cohesion and grammar : We replace a randomly selected function word in the gold response with another random function word of the same part-of-speech.",
"For pronouns and determiners, these negative sentences would likely be incohesive; with other word categories such as prepositions, this will target grammatically.",
"One word or phrase is replaced in each synthetic sentence.",
"We train one classifier j , j { 1,2,3,4 } for each of the categories above.",
"3 The classifiers take word embeddings as input and predict if the response is real or generated.",
"Each classifier consists of a bi-directional LSTM with a projection layer and max pooling (Conneau et al., 2017), followed by 3 fully connected layers.",
"The posterior probabilities of these classifiers reflect how con-fident the classifiers are that the sentence is synthetic and prone to be implausible, hence we prefer sentences with lower posterior probabilities.",
"During reranking, we feed each candidate sentence c into the classifiers and aggregate the posterior probabilities from these classifiers by taking the 3 We compare with using one classifier lumping all negative sentences in the experiments.",
"mean 14 (cid:80) 4 k =1 P ( synthetic | c, k ) .",
"At test time, to encourage diversity, we repeat inference multiple times to generate different candidate sentences, and each time dropout is applied to different nodes in the network.",
"Compared with diverse decoding (Li et al., 2016c), we observed during development that sentences generated by different dropouts tend to have diverse semantics (hence more likely to have different plausibility levels).",
"On the contrary, sentences from diversity decoding often have similar structure and phrases across candidates.",
"We also experimented with reinforcement learning, using policy gradient with the reranking scores as reward.",
"However, during development, we observed that this method produced shorter, less informative sentences compared to reranking.",
"Automatic evaluation of dialogue generation systems is a known challenge.",
"Prior work has shown that commonly used metrics for overall quality in other generation tasks such as BLEU (Pap-ineni et al., 2002), ROUGE (Lin, 2004), METEOR (Banerjee and Lavie, 2005) and perplexity have poor correlations with human judgment (Liu et al., 2016; Tao et al., 2018) 4 or are model-dependent (Liu et al., 2016).",
"Therefore, we adopt several metrics that evaluate multiple aspects of responses, and also conduct human evaluation for each result we present.",
"We use the following automatic evaluation metrics: (1) distinct-1 and distinct-2 (Li et al., 2016a), which evaluates response diversity .",
"They respectively calculate the number of distinct uni-grams and bigrams, divided by the total number of words in all responses; (2) linguistically-informed specificity ( spec ) (Ko et al., 2019); (3) cosine similarity between input and response representations, which captures coherence (Zhang et al., 2018a).",
"We follow standards from prior work for human evaluation (Li et al., 2017; Zhang et al., 2018a,b; Xu et al., 2018a).",
"We select 250 prompt-response pairs, and asked 5 judges from MechanicalTurk to rate the responses for each prompt.",
"We evaluate whether the responses are informative (Ko et al., 2019; Wu et al., 2018; Shao et al., 2017) and on topic with the prompt (Shen et al., 2018; Xu et al., 4 Although Tao et al. (2018) proposed an unspervised metric, their code is not available.",
"2018b; Xing et al., 2017), on a scale of 1-5.",
"Average scores are reported.",
"In addition, we evaluate plausibility by asking judges whether they think the given response sentence without the prompt can reasonably be uttered, following instructions from Kruszewski et al. (2016).",
"The percentage of plausible ratings are reported.",
"Data We use two datasets in this work: (1) OpenSubtitles (Tiedemann, 2009), a collection of movie subtitles widely used in open-domain dialogue generation.",
"We sample 4,173,678 pairs for training and 5,000 pairs for testing from the movie subtitles dataset.",
"Following Li et al. (2017), we remove all pairs with responses shorter than 5 words to improve the quality of the generated responses.",
"(2) PersonaChat (Zhang et al., 2018c), a chit-chat dataset collected via crowdsourcing.",
"This is a multi-turn dataset, but we only consider single turn generation in this work.",
"We don't use the personas and false candidate replies.",
"There are 122,458 prompt-response pairs for training and 14,602 pairs for testing.",
"For validation, for reasons described in Section 5.1, we opt for human evaluation of overall response quality on a validation set of 60 prompt-response pairs from PersonaChat.",
"Settings We use LSTMs with hidden layers of size 500, Adam optimizer (Kingma and Ba, 2015) with learning rate 0.001, 1 = 0 .",
"9 , 2 = 0 .",
"999 , dropout rate 0.2 for both training and testing, metric embedding dimension 300 and 5 training epochs.",
"We train randomly initialized word embeddings of size 500 for the dialog model and use 300 dimentional GloVe (Pennington et al., 2014) embeddings for reranking classifiers.",
"We generate 15 candidates for reranking per input sentence.",
"To train the 4 reranking classifiers, we use 375,996 positive sentences on Opensubtitles and 110,221 on PersonaChat.",
"We generate one negative sentence per word or phrase in the positive sentences.",
"Since specificity is the focus of this study, during testing, we use the embedding of the highest specificity level (5) for NIWF and the linguistically informed specificity predictor.",
"For PPW, we observe that the perplexity of generated sentences does not increase beyond the median level (3) during development, hence we use the median level.",
"For comparison, we also report results when all metric levels are set to be the median (level 3).",
"Overall architecture We evaluate our model against the base SEQ 2 SEQ for each component: coherence, specificity embeddings, and plausibility reranking (using the mean of all four classi-fiers).",
"We also benchmark with the MMI-Anti model using mutual information (Li et al., 2016a), as well as Zhang et al. (2018b)'s model that in-corproates a Gaussian kernel layer to control for specificity.",
"We ran Zhang's code on our data and set s = 1 for PersonaChat and s = 0 .",
"8 for Opensubtitles when testing.",
"5 Significance tests are done via Paired Bootstrap Resampling (Berg-Kirkpatrick et al., 2012).",
"Table 2 shows that for both datasets, our full model with plausibility reranking (according to average posterior of the four classifiers) generates the most informative, relevant and plausible responses.",
"Examples from our full model and the baselines are shown in Table 3.",
"Incorporating specificity led to more interesting responses, with 6-10% improvement in informativeness and 3-7% improvement in topic relevance.",
"Since the system is trained without any semantics or common sense knowledge, this led to a drop in semantic plausibility.",
"Plausibility reranking successfully mitigates this issue by improving plausibility by 3.6-6.5%.",
"Although responses from MMI-Anti tend to be more plausible than directly using specificity, these responses are not useful if they are even less informative or relevant than the SEQ 2 SEQ baseline.",
"Zhang et al. (2018b)'s model performed reasonably on PersonaChat but failed on OpenSubtitles.",
"6 One reason may be that OpenSubtitles is much more diverse in terms of topic and vocabulary, which makes their approach of estimating specificity independent of dialogue 5 We observed that a higher s on Opensubtitles will result in many grammatical errors.",
"context less effective.",
"Indeed, we observe unstable word specificity learned across different training rounds and notable grammatical issues on OpenSubtitles.",
"On the contrary, our joint approach gave stable performance on both datasets.",
"On PersonaChat, our coherence component led to improvements in topic relevance and cosine similarity, while specificity improved topic relevance and diversity, which is an intuitive result.",
"On OpenSubtitles, coherence led to increased diversity while specificity led to a decrease.",
"We looked into this and found that length trade-off is at play since the Distinct measures normalize by length of all generated responses: coherence led to diverse but short responses while specificity increased length.",
"On human evaluation, they complement each other and using both gave better overall results.",
"While reranking clearly did improve plausibility, there is also notable improvement in informativeness.",
"This shows that informativeness is not only a frequency-only issue, or even a specificity-only issue, and that semantic plausibility plays an important role.",
"Since the automatic metrics do not capture plausibility information in the sentence, it is unsurprising that they did not improve with plausibility added in.",
"We also study the effect of maxing out specificity and coherence levels at test time vs. using an uninformative level (median).",
"Using median sig-nificantly improved informativeness and diversity (distinct-2) on PersonaChat by 0.90 and 0.53, and did not improve topic relevance.",
"Similar but in-significant improvements are observed on OpenSubtitles.",
"On the other hand, using the maximum levels led to significant improvements over the baseline or the median level on all metrics.",
"Specificity We now dive into a more detailed analysis for each specificity metric on PersonaChat.",
"Table 4 shows human evaluation of Model Reranking Inform.",
"informativeness, topic relevance and plausibility for the non-reranking model minus one specificity metric.",
"Notably, excluding the linguistic based metric resulted in the largest drop in informativeness and relevance.",
"Frequency based NIWF has the least impact on informativeness, indicating that specificity in dialogue is a multi-faceted issue and that the linguistically-informed notion is the most suitable.",
"If none of the specificity metrics are included, topic relevance scores improve.",
"This is because increasing specificity leads to fewer generic responses, yet they are more likely to be judged on topic by humans.",
"Plausibility We compare several different settings for plausibility reranking.",
"Table 5 shows three ways of using the synthetically generated sentences discussed in Section 4: (1) 1-classifier , which trains one classifier to distinguish true responses vs. all generated ones; (2) Max , which trains separate classifiers and take the maximum posterior probability (recall that higher posterior means less plausible responses); (3) Mean , which trains separate classifiers and averages the posterior probability.",
"For all classifiers, at least 72% of the responses ranked top 50% on a balanced test set are true responses.",
"All three reranking methods helped, however, using one classifier is less effective than training and aggregating separate classifiers for each type of semantic implausibility.",
"The latter not only improved plausibility but also informativeness and topic relevance.",
"Using Max vs Mean yields comparable results in terms of plausibility, although Max improves informativeness more while Mean improves topic relevance more.",
"the Corpus of Linguistic Acceptability (Warstadt et al., 2018), a dataset consisting of linguistically acceptable vs. unacceptable sentences.",
"However, looking at results from PersonaChat, reranking using CoLA did not improve plausibility although is of slight help for informativeness and topic relevance.",
"Combining CoLA with the other four classifiers decreased plausibility.",
"Grammaticality Finally, since the function word substitution aspect of our synthetic sentences is related to grammar, we also conduct human evaluation of grammaticality on OpenSubtitles.",
"We did not evaluate on PersonaChat because almost all generate responses of our model we inspected are grammatically correct.",
"Here annotators are asked to judge whether a sentence is grammatical vs. not.",
"Results are shown in Table 6.",
"Informative and interesting responses that are the result of increasing specificity also made the model more prone to grammatical errors, but adding reranking completely mitigated this issue and grammaticality results are the same as the base model that generates much shorter, canned universal responses.",
"MMI gave the best grammaticality; however, these response are not useful if they are even less informative or relevant than the SEQ 2 SEQ baseline.",
"Zhang et al. (2018b)'s model generated more complicated sentences, but has worse grammar.",
"Again we suspect that this is because of the lack of interaction between specificity estimates and dialogue context in their model.",
"We presented a new method to incorporate specificity information and semantic plausibility in SEQ 2 SEQ models.",
"We showed that apart from frequency-based specificity metrics explored in prior work, information-theoretic and linguistically informed specificity improve the specificity of the responses.",
"We proposed a reranking method aimed at improving the semantic plausibility of specific responses.",
"Results showed that our method improved human ratings on informativeness, plausibility and grammaticality on both open domain and chit-chat datasets.",
"This work was partially supported by the NSF Grant IIS-1850153, and an Amazon Alexa Graduate Fellowship.",
"We thank the anonymous reviewers for their helpful feedback."
] | [
"abstain",
"abstain",
"result",
"result",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"result",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"other",
"other"
] |
[
"Most of recent work in cross-lingual word embeddings is severely Anglocentric.",
"The vast majority of lexicon induction evaluation dictionaries are between English and another language, and the English embedding space is selected by default as the hub when learning in a multilingual setting.",
"With this work, however, we challenge these practices.",
"First, we show that the choice of hub language can significantly impact downstream lexicon induction and zero-shot POS tagging performance.",
"Second, we both expand a standard English-centered evaluation dictionary collection to include all language pairs using triangulation, and create new dictionaries for under-represented languages.",
"1 Evaluating established methods over all these language pairs sheds light into their suitability for aligning embeddings from distant languages and presents new challenges for the field.",
"Finally, in our analysis we identify general guidelines for strong cross-lingual embedding baselines, that extend to language pairs that do not include English.",
"Continuous vectors for representing words (embed-dings) (Turian et al., 2010) have become ubiquitous in modern, neural NLP.",
"Cross-lingual representations (Mikolov et al., 2013) additionally represent words from various languages in a shared continuous space, which in turn can be used for Bilingual Lexicon Induction (BLI).",
"BLI is often the first step towards several downstream tasks such as Part-Of-Speech (POS) tagging (Zhang et al., 2016), parsing (Ammar et al., 2016a), document classification (Klementiev et al., 2012), and machine translation (Irvine and Callison-Burch, 2013; Artetxe et al., 2018b; Lample et al., 2018).",
"Often, such shared representations are learned with a two-step process, whether under bilingual or multilingual settings (hereinafter BWE and MWE, respectively).",
"First, monolingual word embeddings are learned over 1 Available at https://github.com/antonisa/ embeddings .",
"large swaths of text.",
"Such pre-trained word embeddings, such as the fastText Wikipedia vectors (Grave et al., 2018), are available for many languages and are widely used.",
"Second, a mapping between the languages is learned in one of three ways: in a supervised manner if dictionaries or parallel data are available to be used for supervision (Zou et al., 2013), under minimal supervision e.g. using only identical strings (Smith et al., 2017), or even in an unsupervised fashion (Zhang et al., 2017; Conneau et al., 2018).",
"Both in bilingual and multilingual settings, it is common that one of the language embedding spaces is the target to which all other languages get aligned (hereinafter the hub \").",
"We outline the details in Section 2.",
"Despite all the recent progress in learning crosslingual embeddings, we identify a major shortcoming to previous work: it is by and large English-centric.",
"Notably, most MWE approaches essentially select English as the hub during training by default, aligning all other language spaces to the English one.",
"We argue and empirically show, however, that English is a poor hub language choice.",
"In BWE settings, on the other hand, it is fairly uncommon to denote which of the two languages is the hub (often this is implied to be the target language).",
"However, we experimentally find that this choice can greatly impact downstream performance, especially when aligning distant languages.",
"This Anglocentricity is even more evident at the evaluation stage.",
"The lexica most commonly used for evaluation are the MUSE lexica (Conneau et al., 2018) which cover 45 languages, but with translations only from and into English.",
"Alternative evaluation dictionaries are also very Englishand European-centric: (Dinu and Baroni, 2014) report results on EnglishItalian, (Artetxe et al., 2017) on EnglishGerman and EnglishFinnish, (Zhang et al., 2017) on SpanishEnglish and ItalianEnglish, and (Artetxe et al., 2018a) between English and Italian, German, Finnish, Spanish, and Turkish.",
"We argue that cross-lingual word embedding mapping methods should look beyond English for their evaluation benchmarks because, compared to all others, English is a language with disproportionately large available data and relatively poor inflectional morphology e.g., it lacks case, gender, and complex verbal inflection systems (Arono and Fudeman, 2011).",
"These two factors allow for an overly easy evaluation setting which does not necessarily generalize to other language pairs.",
"In light of this, equal focus should instead be devoted to evaluation over more diverse language pairs that also include morphologically rich and low-resource languages.",
"With this work, we attempt to address these shortcomings, providing the following contributions: We show that the choice of the hub when evaluating on diverse language pairs can lead to significantly di erent performance for iterative refinement methods that use a symbolic-based seed dictionary (e.g., by more than 10 percentage points for BWE over distant languages).",
"We also show that often English is a suboptimal hub for MWE.",
"We identify some general guidelines for choosing a hub language which could lead to stronger performance; less isometry between the hub and source and target embedding spaces mildly correlates with performance, as does typological distance (a measure of language similarity based on language family membership trees).",
"For distant languages, multilingual systems should be preferred over bilingual ones if the languages share alphabets, otherwise a bilingual system based on monolingual similarity dictionaries is preferable.",
"We provide resources for training and evaluation on language pairs that do not include English.",
"We outline a simple triangulation method with which we extend the MUSE dictionaries to an additional 4704 lexicons covering 50 languages (for a total of 4900 dictionaries, including the original English ones), and we present results on a subset of them.",
"We also create new evaluation lexica for under-resourced, under-represented languages using Azerbaijani, Belarusian, and Galician as our test cases.",
"Finally, we provide recipes for creating such dictionaries for any language pair with available parallel data.",
"Bilingual Word Embeddings In the supervised BWE setting of Mikolov et al. (2013), given two languages L = { l 1 , l 2 } and their pre-trained row-aligned embeddings X 1 , X 2 , respectively, a transformation matrix M is learned such that:",
"The set can potentially impose a constraint over M , such as the very popular constraint of restricting it to be orthogonal (Xing et al., 2015).",
"Previous work has empirically found that this simple formulation is competitive with other more complicated alternatives (Xing et al., 2015).",
"The orthogonality assumption ensures that there exists a closed-form solution through Singular Value Decomposition (SVD) of X 1 XT 2 .",
"2 Note that in this case only a single matrix M needs to be learned, because (cid:107)X 1 MX 2 (cid:107) = (cid:13)(cid:13)(cid:13) M 1 X 1 X 2 (cid:13)(cid:13)(cid:13) , while at the same time a model that minimizes (cid:107)X 1 MX 2 (cid:107) is as expressive as one minimizing (cid:107) M 1 X 1 M 2 X 2 (cid:107) , with half the parameters.",
"In the minimally supervised or even the unsupervised setting, Zhang et al. (2017) and Conneau et al. (2018) reframe the task as an adversarial game, with a generator aiming to produce a transformation that can fool a discriminator.",
"However, the most popular methods follow an iterative refinement approach (Artetxe et al., 2017).",
"Starting with a seed dictionary (e.g. from identical strings (Zhou et al., 2019) or numerals) an initial mapping is learned in the same manner as in the supervised setting.",
"The initial mapping, in turn, is used to expand the seed dictionary with high confidence word translation pairs.",
"The new dictionary is then used to learn a better mapping, and so forth the iterations continue until convergence.",
"The same iterative approach is followed by Artetxe et al. (2018a), with one important di erence that allows their model ( VecMap ) to handle language pairs with di erent alphabets: instead of identical strings, the seed dictionary is constructed based on the similarity of the monolingual similarity distributions over all words in the vocabulary.",
"3 Multilingual Word Embeddings In a multilingual setting, the simplest approach is to use BWE and align all languages into a target language (the hub ).",
"In this case, for N languages L = { l 1 , l 2 , . . . , l N } on has to learn N 1 bilingual mappings (Ammar et al., 2016b).",
"Rather than using a single hub space, Heyman et al. (2019) propose an incremental procedure that uses an Incremental Hub Space ( IHS ): each new language is included to the multilingual space by mapping it to all languages that have already been aligned (e.g. language l 3 would be mapped to the aligned space of { l 1 , l 2 } ).",
"Alternatively, all mappings could be learned jointly, taking advantage of the inter-dependencies between any two language pairs.",
"Importantly, though, there is no closed form solution for learning the joint mapping, hence a solution needs to be approximated with gradient-based methods.",
"The main approaches are: Multilingual adversarial training with pseudo-randomized refinement (Chen and Cardie, 2018, MAT+MPSR ): a generalization of the adversarial approach of Zhang et al. (2017); Conneau et al. (2018) to multiple languages, also combined with an iterative refinement procedure.",
"4 Unsupervised Multilingual Hyperalignment (Alaux et al., 2019, UMH ): an approach 2 We refer the reader to Mikolov et al. (2013) for details.",
"3 We refer the reader to Artetxe et al. (2018a) for details.",
"4 MAT+MPSR has the beneficial property of being as computationally e cient as learning O ( N ) mappings (instead of O ( N 2 )).",
"We refer the reader to Chen and Cardie (2018) for exact details.",
"which maps all languages to a single hub space, 5 but also enforces good alignments between all language pairs within this space.",
"Even though the architecture and modeling approach of all MWE methods are di erent, they share the same conceptual traits: one of the language spaces remains invariant and all other languages are e ectively mapped to it.",
"In all cases, English is by default selected to be the hub.",
"The only exception is the study of triplets alignments in (Alaux et al., 2019), where Spanish is used as the SpanishFrenchPortuguese triplet hub.",
"Lexicon Induction One of the most common downstream evaluation tasks for the learned cross-lingual word mappings is Lexicon Induction (LI), the task of retrieving the most appropriate word-level translation for a query word from the mapped embedding spaces.",
"Specialized evaluation (and training) dictionaries have been created for multiple language pairs.",
"Of these, the MUSE dictionaries (Conneau et al., 2018) are most often used, providing word translations between English (E n ) and 48 other highto mid-resource languages, as well as on all 30 pairs among 6 very similar Romance and Germanic languages (English, French, German, Spanish, Italian, Portuguese).",
"Given the mapped embedding spaces, the translations are retrieved using a distance metric, with Cross-Lingual Similarity Scaling (Conneau et al., 2018, CSLS) as the most commonly used in the literature.",
"Intuitively, CSLS decreases the scores of pairs that lie in dense areas, increasing the scores of rarer words (which are harder to align).",
"The retrieved pairs are compared to the gold standard and evaluated using precision at k (P@ k , evaluating how often the correct translation is within the k retrieved nearest neighbors of the query).",
"Throughout this work we report P@1, which is equivalent to accuracy; we provide P@5 and P@10 results in the Appendix.",
"The typically used evaluation dictionaries cover a narrow breadth of the possible language pairs, with the majority of them focusing in pairs with English (as with the MUSE or Dinu et al. (2015) dictionaries) or among high-resource European languages.",
"Glava et al. (2019), for instance, highlighted Anglocentricity as an issue, creating and evaluating on 28 dictionaries between 8 languages (Croatian, English, Finnish, French, German, Italian, Russian, Turkish) based on Google Translate.",
"In addition, Czarnowska et al. (2019) focused on the morphology dimension, creating morphologically complete dictionaries for 2 sets of 5 genetically related languages (Romance: French, Spanish, Italian, Portuguese, Catalan; and Slavic: Polish, Czech, Slovak, Russian, Ukrainian).",
"In contrast to these two (very valuable!) works, our method for creating dictionaries 5 Note that Alaux et al. (2019) use the term pivot to refer to what we refer to as the hub language.",
"for low-resource languages (3.1) leverages resources that are available for about 300 languages.",
"In addition, we propose a simple triangulation process (3.2), that makes it possible to create dictionaries for arbitrary language pairs, given that dictionaries into a pivot language (usually English) are available for both languages.",
"Our approach for constructing dictionaries is straightforward, inspired by phrase table extraction techniques from phrase-based MT (Koehn, 2009).",
"This is an automatic process, and introduces some degree of noise.",
"Rather than controlling this through manual inspection, which would be impossible for all language pairs, we rely on fairly simple heuristics for controlling the dic-tionaries' quality.",
"The first step is collecting publicly available parallel data between English and the low-resource language of interest.",
"We use data from the TED (Qi et al., 2018), OpenSubtitles (Lison and Tiedemann, 2016), WikiMatrix (Schwenk et al., 2019), bible (Malaviya et al., 2017), and JW300 (Agic and Vulic, 2019) datasets.",
"6 This results in 354k, 53k, and 623k English-to-X parallel sentences for Azerbaijani (A z ), Belarusian (B e ), and Galician (G l ) respectively.",
"7 We align the parallel sentences using fast_align (Dyer et al., 2013), and extract symmetrized alignments using the gdfa heuristic (Koehn et al., 2005).",
"In order to ensure that we do not extract highly domain-specific word pairs, we only use the TED, OpenSubtitles, and WikiMatrix parts for word-pair extraction.",
"Also, in order to control for quality, we only extract word pairs if they appear in the dataset more than 5 times, and if the symmetrized alignment probability is higher than 30% in both directions.",
"With this process, we end up with about 6k, 7k, and 38k word pairs for A z E n , B e E n , and G l E n respectively.",
"Following standard conventions, we sort the word pairs according to source-side frequency, and use the intermediate-frequency ones for evaluation, typically using the [50006500) rank boundaries.",
"The same process can be followed for any language pair with a su cient volume of parallel data (needed for training a reasonably accurate word alignment model).",
"8 6 Not all languages are available in all these datasets.",
"7 The anglocentricity in this step is by necessity it is hard to find a large volume of parallel data in a language pair excluding English.",
"8 In fact, we can produce similar dictionaries for a large number of languages, as the combination of the recently cre-Greek Italian Bridged GreekItalian Lexicon word tag word tag Match Greek Italian M;NOM;SG pacifico M;SG M;SG pacifico, pacifici, pacifica F;NOM;SG pacifici M;PL F;SG pacifica, pacifico, pacifici Neut;NOM;SG pacifica F;SG SG pacifica, pacifico, pacifici Neut;NOM;PL PL pacifici, pacifica, pacifico Table 1: Triangulation and filtering example on GreekItalian.",
"Our second method for creating new dictionaries is inspired by phrase table triangulation ideas from the pre-neural MT community (Wang et al., 2006; Levinboim and Chiang, 2015).",
"The concept can be easily explained with an example, visualized in Figure 1.",
"Consider the Portuguese (P t ) word trabalho which, according to the MUSE P t E n dictionary, has the words job and work as possible E n translations.",
"In turn, these two E n words can be translated to 4 and 5 Czech (C s ) words respectively.",
"By utilizing the transitive property (which translation should exhibit) we can identify the set of 5 possible C s translations for the P t word trabalho .",
"Following this simple triangulation approach, we create 4,704 new dictionaries over pairs between the 50 languages of the MUSE dictionaries.",
"9 For consistency, we keep the same train and test splits as with MUSE, so that the source-side types are equal across all dictionaries with the same source language.",
"Triangulating through English (which is unavoidable, due to the relative paucity of non-English-centric dictionaries) is suboptimal English is morphologically poor and lacks corresponding markings for gender, case, or other features that are explicitly marked in many languages.",
"As a result, several inflected forms in morphologically-rich languages map to the same English form.",
"Similarly, gendered nouns or adjectives in gendered languages map to English forms that lack gender information.",
"For example, the MUSE Greek English dictionary lists the word peaceful as the translation for all , , , , which are the male, female, and neutral (singular and plural) inflections of the same adjective.",
"Equivalently, the EnglishItalian dictionary translates peaceful into either pacifico , pacifici , or pacifica (male singular, male plural, and female singular, respectively; see Table 1).",
"When translating from or into English lacking context , all of those are reasonable translations.",
"When translating between Greek and Italian, though, one should at least take number into account (gram-ated JW300 and WikiMatrix datasets provide an average of more than 100k parallel sentences in 300 languages. Before publication, we plan to create these dictionaries and make them publicly available, along with the corresponding code. 9 Available at https://github.com/antonisa/ embeddings . matical gender is a more complicated matter: it is not uncommon for word translations to be of di erent grammatical gender across languages).",
"Hence, we devise a filtering method for removing blatant mistakes when triangulating morphologically rich languages.",
"We rely on automatic morphological tagging which we can obtain for most of the MUSE languages, using the StanfordNLP toolkit (Qi et al., 2020).",
"10 The morphological tagging uses the Universal Dependencies feature set (Nivre et al., 2016) making the tagging comparable across almost all languages.",
"Our filtering technique iterates through the bridged dictionaries: for a given source word, if we find a target word with the exact same morphological analysis, we filter out all other translations with the same lemma but di erent tags.",
"In the case of feature mismatch (for instance, Greek uses 2 numbers, 4 cases and 3 genders while Italian has 2 num-bers, 2 genders, and no cases) or if we only find a partial tag match over a feature subset, we filter out translations with disagreeing tags.",
"We ignore the grammatical gender and verb form features, as they are not directly comparable cross-lingually.",
"Coming back to our Greek Italian example, this means that for the form we would only keep pacifico as a candidate translation (we show more examples in Table 1).",
"Our filtering technique removes about 60.4% of the entries in 2964 of the 4900 dictionaries.",
"11 Unsurprisingly, we find that bridged dictionaries between morphologically rich languages require a lot more filtering.",
"For instance more than 80% of the entries of the Urdu-Greek dictionary get filtered out.",
"On average, the languages with more filtered entries are Urdu (62.4%), Turkish (61.1%), and German (58.6%).",
"On the other hand, much fewer entries are removed from dictionaries with languages like Dutch (36.2%) or English (38.1%).",
"Naturally, this filtering approach is restricted to languages for which a morphological analyzer is available.",
"Mitigating this limitation is beyond the scope of this work, although it is unfortunately a common issue.",
"For example, Kementchedjhieva et al. (2019) manually corrected five dictionaries (between English and German, Danish, Bulgarian, Arabic, Hindi) but one needs to rely 10 The toolkit has since been renamed to Stanza.",
"See https: //stanfordnlp.github.io/stanfordnlp/ .",
"11 Due to the lack of morphological analysis tools, we were unable to filter dictionaries in the following 11 languages: aze, bel, ben, bos, lit, mkd, msa, sqi, tam, tha, tel.",
"on automated annotations in order to scale to all languages.",
"Our method that uses automatically obtained morphological information combined with the guidelines proposed by Kementchedjhieva et al. (2019) (e.g. removing proper nouns from the evaluation set) scales easily to multiple languages, allowing us to create more than 4 thousand dictionaries.",
"The aim of our LI experiments is two-fold.",
"First, the di erences in LI performance show the importance of the hub language choice with respect to each evaluation pair.",
"Second, as part of our call for moving beyond Anglo-centric evaluation, we also present LI results on several new language pairs using our triangulated dictionaries.",
"We train and evaluate all models starting with pre-trained Wikipedia FastText embeddings for all languages (Grave et al., 2018).",
"We focus on the minimally supervised scenario which only uses similar character strings between any languages for supervision in order to mirror the hard, realistic scenario of not having annotated training dictionaries between the languages.",
"We learn MWE with the MAT+MPSR method using the publicly available code, 12 aligning several language subsets varying the hub language.",
"We decided against comparing to the incremental hub ( IHS ) method of Heyman et al. (2019), because the order in which the languages are added is an additional hyperparameter that would explode the experimental space.",
"13 We also do not compare to UMH , as we consider it conceptually similar to MAT+MPSR and no code is publicly available.",
"For BWE 12 https://github.com/ccsasuke/umwe 13 We refer the reader to Table 2 from Heyman et al. (2019) which compares to MAT+MPSR , and to Table 7 of their appendix which shows the dramatic influence of language order.",
"experiments, we use MUSEs 14 (MUSE, semisupervised) and VecMap 15 systems, and we additionally compare them to MAT+MPSR for completeness.",
"We compare the statistical significance of the performance di erence of two systems using paired bootstrap resampling (Koehn, 2004).",
"Generally, a di erence of 0.40.5 percentage points evaluated over our lexica is significant with p < 0 .",
"05.",
"Experiment 1 We first focus on 10 languages of varying morphological complexity and data availability (which a ects the quality of the pre-trained word embed-dings): Azerbaijani (A z ), Belarusian (B e ), Czech (C s ), English (E n ), Galician (G l ), Portuguese (P t ), Russian (R u ), Slovak (S k ), Spanish (E s ), and Turkish (T r ).",
"The choice of these languages additionally ensures that for our three low-resource languages (A z , B e , G l ) we include at least one related higher-resource language (T r , R u , P t / E s respectively), allowing for comparative analysis.",
"Table 2 summarizes the best post-hoc performing systems for this experiment.",
"Experiment 2 In the second setting, we use a set of 7 more distant languages: English, French (F r ), Hindi (H i ), Korean (K o ), Russian, Swedish (S v ), and Ukrainian (U k ).",
"This language subset has large variance in terms of typology and alphabet.",
"The best performing systems are presented in Table 3.",
"MWE: English is rarely the best hub language In multilingual settings, we conclude that the standard practice of choosing English as the hub language is sub-optimal.",
"Out of the 90 evaluation pairs from our 10-language experiment (Table",
"2) the best hub language is English in only 17 instances (less than 20% of the 14 https://github.com/facebookresearch/MUSE 15 https://github.com/artetxem/vecmap Source Target E n F r H i K o R u S v U k best E n E n 76.3 R u 23.9 U k 10.4 F r 42.0 U k 59.0 H i 28.3 R u 40.0 38.5 F r 74.0 U k 19.0 R u 7.5 S v 40.8 R u 51.8 E n 28.8 E n 37.0 36.4 H i 31.4 F r 26.9 R u 2.1 E n 14.6 U k 17.3 E n 10.5 F r 17.1 16.2 K o 17.7 S v 13.6 S v 2.4 F r 7.9 E n 7.2 R u 3.6 F r 8.8 7.9 R u 53.4 K o 51.7 K o 15.3 U k 5.2 E n 41.3 U k 56.3 K o 37.2 36.2 S v 52.7 U k 48.2 K o 17.7 R u 5.1 U k 33.2 F r 24.1 R u 30.2 29.2 U k 41.4 R u 44.0 H i 14.4 S v 2.6 E n 59.7 H i 36.8 K o 33.2 32.4 best 45.1 43.5 15.5 5.5 33.0 35.6 25.3 29.1 E n 42.7 42.5 14.5 5.1 32.4 34.9 24.5 28.1 Table 3: Lexicon Induction performance (P@1) over MWEs from 7 typologically distant languages (42 pairs).",
"time).",
"In fact, the average performance (over all evaluation pairs) when using E n as the hub (denoted as E n ) is 1.3 percentage points worse than the optimal ( best ).",
"In our distant-languages experiment (Table 3) English is the best choice only for 7 of the 42 evaluation pairs (again, less than 20% of the time).",
"As before, using E n as the hub leads to an average drop of one percentage point in performance aggregated over all pairs, compared to the averages of the optimal selection.",
"The rest of this section attempts to provide an explanation for these di erences.",
"Expected gain for a hub language choice As vividly outlined by the superscript annotations in Tables 2 and 3, there is not a single hub language that stands out as the best one.",
"Interestingly, all languages, across both experiments, are the best hub language for some evaluation language pair.",
"For example, in our 10-languages experiment, E s is the best choice for about 20% of the evaluation pairs, T r and E n are the best for about 17% each, while G l and B e are the best for only 5 and 3 language pairs respectively.",
"Clearly, not all languages are equally suited to be the hub language for many language pairs.",
"Hence, it would be interesting to quantify how much better one could do by selecting the best hub language compared to a random choice.",
"In order to achieve this, we define the expected gain G l of using language l as follows.",
"Assume that we are interested in mapping N languages into the shared space and p ml is the accuracy 16 over a specified evaluation pair m when using language l as the hub.",
"The random choice between N languages will have an expected accuracy equal to the average accuracy when using all languages as hub: E [ p m ] = (cid:80) l p ml N .",
"The gain for that evaluation dataset m when using language l as hub, then, is g ml = p ml E [ p m ].",
"Now, for a collection of M evaluation pairs we simply average their gains, in order to obtain the expected gain for using 16 This could be substituted with any evaluation metric.",
"The results of this computation for both sets of experiments are presented in Figure 2.",
"The bars marked overall' match our above definition, as they present the expected gain computed over all evaluation language pairs.",
"For good measure, we also present the average gain per language aggregated over the evaluation pairs where that language was indeed the best hub language ( when best' bars).",
"Perhaps unsurprisingly, A z seems to be the worst hub language choice among the 10 languages of the first experiment, with an expected loss (negative gain) of -0.4.",
"This can be attributed to how distant A z is from all other languages, as well as to the fact that the A z pre-trained embeddings are of lower quality compared to all other languages (as the A z Wikipedia dataset is significantly smaller than the others).",
"Similarly, H i and S v show expected loss for our second experiment.",
"Note that English is not a bad hub choice per se it exhibits a positive expected gain in both sets of experiments.",
"However, there are languages with larger expected gains, like E s and G l in the 10-languages experiment that have a twice-as-large expected gain, while R u has a 4 times larger expected gain in the distant-languages experiment.",
"Of course, the language subset composition of these experiments could possibly impact those numbers.",
"For example, there are three very related languages (E s , G l , P t ) in the 10 languages set, which might boost the expected gain for that subset; however, the trends stand even if we compute the expected gain over a subset of the evaluation pairs, removing all pairs that include G l or P t .",
"For example, after removing all G l results, E s has a slightly lower expected gain of 0 .",
"32, but is still the language with the largest expected gain.",
"Identifying the best hub language for a given evaluation set The next step is attempting to identify potential characteristics that will allow us make educated decisions with regards to choosing the hub language, given a specific evaluation set.",
"For example, should one choose a language typologically similar to the evaluation source, target, or both?",
"Or should they use the source or the target of the evaluation set as the hub?",
"Our first finding is that the best performing hub language will very likely be neither the source nor the target of the evaluation set.",
"In our 10-languages experiments, a language di erent than the source and the target yields the best accuracy for over 93% of the evaluation sets, with the di erence being statistically significant in more than half such cases.",
"Similarly, in the distant-languages experiment, there is only a single instance where the best performing hub language is either the source or the target evaluation language (for F r R u ), and for the other 97% of cases the best option is a third language.",
"This surprising pattern contradicts the mathematical intuition discussed in Section 2 according to which a model learning a single mapping (keeping another word embedding space fixed) is as expressive as a model that learns two mappings for each of the languages.",
"Instead, we find that in almost all cases, learning mappings for both language spaces of interest (hence rotating both spaces) leads to better BLI performance compared to when one of the spaces is fixed.",
"Our second finding is that the LI performance correlates with measures of distance between languages and language spaces.",
"The typological distance ( d gen ) between two languages can be approximated through their genealogical distance over hypothesized language family trees, which we obtain from the URIEL typological database (Littell et al., 2017).",
"Also, Patra et al. (2019) recently motivated the use of Gromov-Hausdro (GH) distance as an a priori estimation of how well two language embedding spaces can be aligned under an isometric transformation (an assumption most methods rely on).",
"The authors also note that vector space GH distance correlates with typological language distance.",
"We find that there is a positive correlation between LI performance and the genealogical distances between the sourcehub and targethub languages.",
"The average (over all evaluation pairs) Pearson's correlation coe cient between P@1 and d gen is 0 .",
"49 for the distant languages experiment and 0 .",
"38 for the 10-languages one.",
"A similar positive correlation of performance and the 0 .",
"sum of the GH distances between the sourcehub and targethub spaces.",
"On our distant languages experiment, the correlation coe cient between P@1 and GH is 0.45, while it is slightly lower (0.34) for our 10-languages experiment.",
"Figure 3 shows two high correlation examples, namely G l E n and E n H i .",
"BWE: The hub matters for distant languages MUSEs implements a provably direction-independent closed form solution of the Procrustes problem, and we confirm empirically that the hub choice does not a ect the outcome (we provide complete results on MUSEs in Table 7 in the Appendix).",
"Similarly, because VecMap uses symmetric re-weighting and produces bidirectional dictionaries at its final step, the results are not dependent on the training direction.",
"However, obtaining good performance with such methods requires the orthogonality assumption to hold, which for distant languages is rarely the case (Patra et al., 2019).",
"In fact, we find that the gradient-based MAT+MPSR method in a bilingual setting over typologically distant languages exhibits better performance than MUSEs or VecMap .",
"Across Table 2, in only a handful of examples (shaded cells) do VecMap or MUSEs systems outperform MAT+MPSR for BWE (with the majority being among E n , E s , G l , and P t , all related high-resource languages).",
"In the 7 distant languages setting, however, the results are di erent: VecMap outperforms MUSEs and the multilingual MAT+MPSR in the vast majority of the language pairs.",
"The di erence is more stark when the languages of the pair use completely di erent alphabets, where the same-character strings heuristic for bootstrapping the initial dictionary mapping fails.",
"Instead, the monolingual similarity approach employed by VecMap is defi-nitely more appropriate for settings such as those posed by languages like Korean or Hindi.",
"This highlights the importance of actually evaluating and reporting results on such language pairs.",
"On the one hand, we find that when aligning distant Results on A z C s Average Bilingual A z C s 25.8 with hub: 22.7 29.1 Trilingual A z , C s , + hub: B e E n E s G l 28.2 21.6 28.5 31.8 23.0 P t R u S k T r 29.6 27.4 30.4 32.9 Trilingual A z , hub:C s , + extra: E n E s P t R u T r 30.8 30.1 30.1 33.2 27.1 33.7 Multilingual (10 languages) A z B e C s E n E s 33.9 33.7 34.0 32.3 34.5 35.1 G l P t R u S k T r 34.0 34.8 34.5 32.9 33.7 Results on R u U k Average Bilingual R u U k 57.5 with hub: 58.0 57.0 Trilingual B e , R u , U k with hub: B e R u U k 58.8 59.2 58.9 58.4 Trilingual R u , U k , + hub: A z C s E n E s F r H i T r 57.8 57.4 58.5 58.4 58.3 58.0 57.0 57.2 Multilingual B e , R u , U k , + hub: C s E n E s G l K o P t S v 58.1 58.0 58.1 58.5 58.8 57.0 58.3 58.2 Multilingual R u , U k , E n , F r , H i , K o , S v , with hub: E n F r H i K o R u S v U k 55.6 55.3 56.1 55.8 56.3 55.3 55.3 54.9 Table 4: Comparison of bilingual, trilingual, and multilingual systems for distant (left) and related (right) languages.",
"Multilinguality boosts performance significantly on distant languages.",
"languages with MAT+MPSR , the di erence between hub choices can be significant in A z E n , for instance, using E n as the hub leads to more than 7 percentage points di erence compared to using A z .",
"We show some examples in Table 5.",
"On the other hand, when aligning typologically similar languages, the di erence is less pronounced.",
"For example, we obtain practically similar performance for G l P t , A z T r , or U k R u when using either the source or the target language as the hub.",
"Note, though, that non-negligible di erences could still occur, as in the case of P t G l .",
"In most cases, it is the case that the higher-resourced language is a better hub than the lower-resourced one, especially when the number of resources di er significantly (as in the case of A z and B e against any other language).",
"Since BWE settings are not our main focus, we leave an extensive analysis of this observation for future work.",
"Bi-, tri-, and multilingual systems This part of our analysis compares bilingual, trilingual, and multilingual systems, with a focus on the under-represented languages.",
"Through multiple experiments (complete evaluations are listed in the Appendix) we reach two main conclusions.",
"On one hand, when evaluating on typologically distant languages, one should use as many languages as possible.",
"In Table 4 we present one such example with results on A z C s under various settings.",
"On the other hand, when multiple related languages Transfer from E n Transfer from P t Hub E s P t G l Hub E s G l E n 38.7 21.8 19.4 E n 48.4 32.9 E s 26.5 16.1 28.5 E s 41.4 25.5 P t 28.1 25.7 15.6 P t 44.3 36.5 G l 35.4 22.8 23.1 G l 48.1 23.8 B e 35.6 30.5 13.2 R u 28.6 30.6 18.2 : best train-test hub S k 24.2 30.2 14.6 for LI.",
"are available, one can achieve higher performance with multilingual systems containing all related languages and one more hub language, rather than learning diverse multilingual mappings using more languages.",
"We confirm the latter observation with experiments on the Slavic (B e , R u , U k ) and Iberian (E s , G l , P t ) clusters, and present an example (R u U k ) in Table 4.",
"Di erences in BLI performance do not necessarily translate to di erences in other downstream tasks that use the aligned embeddings, so Glava et al. (2019) advocate for actual evaluation on such tasks.",
"We extend our analysis to an example downstream task of zero-shot POS tagging using the aligned embeddings for select language pairs.",
"We show that indeed the choice of the hub language can have dramatic impact.",
"Using Universal Dependencies data (Nivre et al., 2016) we train simple bi-LSTM POS taggers on E n and P t using the respective embeddings produced from each MAT+MPSR run, and evaluate the zero-shot performance on G l and E s .",
"17 Although all taggers achieve consistent accuracies > 95% on English and Portuguese regardless of the original E n or P t embeddings, the zero-shot performance on the test languages, as shown in Table 6, varies widely.",
"For instance, using the embeddings produced from using P t as a hub, we obtain the highest zero-shot accuracy on G l (36.5%), while using the ones from the G l hub lead to significantly worse performance (23.8%).",
"It should be noted that the best hub for POS-tagging does not always coincide with the best hub for LI, e.g. the best LI hub for P t G l is E s , which leads to 11 percentage points worse G l POS tagging performance than the best system.",
"In fact, for the language pairs that we studied we observe no correlation between the two tasks performance as we vary the hub (with an average Spearman's rank correlation = 0 . 08).",
"With this work we challenge the standard practice in learning cross-lingual word embeddings.",
"We empirically show that the choice of the hub language is an important parameter that a ects lexicon induction performance in both bilingual (between distant languages) and multilingual settings.",
"More importantly, we hope that by providing new dictionaries and baseline results on several language pairs, we will stir the community towards evaluating all methods in challenging scenarios that include under-represented language pairs.",
"Towards this end, our analysis provides insights and general directions for stronger baselines for non-Anglocentric crosslingual word embeddings.",
"The problem of identifying the best hub language, despite our analysis based on the use of typological distance, remains largely unsolved.",
"In the future, we will investigate a hub language ranking / selection model a la Lin et al. (2019).",
"The authors are grateful to the anonymous reviewers for their exceptionally constructive and insightful comments, and to Gabriela Weigel for her invaluable help with editing and proofreading the paper.",
"This material is based upon work generously supported by the National Science Foundation under grant 1761548."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"result",
"method",
"objective",
"method",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"method",
"method",
"method",
"method",
"other",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"objective",
"method",
"method",
"abstain",
"other",
"other"
] |
[
"2 State Key Laboratory of Computer Sciences, Institute of Software, Chinese Academy of Sciences 3 University of Chinese Academy of Sciences, Beijing, China",
"Abstract In this paper, we focus on the problem of keyword and document matching by considering different relevance levels.",
"In our recommen003 dation system, different people follow differ004 ent hot keywords with interest.",
"We need to 005 attach documents to each keyword and then 006 distribute the documents to people who fol007 low these keywords.",
"The ideal documents 008 should have the same topic with the keyword, 009 which we call topic-aware relevance.",
"In other 010 words, topic-aware relevance documents are 011 better than partially-relevance ones in this ap012 plication.",
"However, previous tasks never de013 fine topic-aware relevance clearly.",
"To tackle 014 this problem, we define a three-level relevance 015 in keyword-document matching task: topic016 aware relevance, partially-relevance and irrel017 evance.",
"To capture the relevance between 018 the short keyword and the document at above019 mentioned three levels, we should not only 020 combine the latent topic of the document with 021 its deep neural representation, but also model 022 complex interactions between the keyword 023 and the document.",
"To this end, we propose 024 a Two-stage Interaction and Topic-Aware text 025 matching model (TITA).",
"In terms of topic026 aware, we introduce neural topic model to 027 analyze the topic of the document and then 028 use it to further encode the document.",
"In 029 terms of two-stage interaction, we propose 030 two successive stages to model complex in031 teractions between the keyword and the docu032 ment.",
"Extensive experiments reveal that TITA 033 outperforms other well-designed baselines and 034 shows excellent performance in our recom035 mendation system.",
"The keyword-document matching is mostly like the query-document matching task.",
"The query039 document matching task, aiming to calculate rele040 vance score between a query and a document, has 041 been extensively studied over the past few years.",
"It 042 is widely applicable in many real scenarios: (1) in 043 the information retrieval systems (Guo et al., 2016), 044 query-document matching is an important feature 045 in the ranking models.",
"(2) as for the task of ques-046 tion answering (Yang et al., 2016), query-document 047 matching method can be used to find document can-048 didates or to help predict the answer span.",
"(3) it 049 is also widely applied to recommendation systems 050 (Jiang et al., 2019).",
"In many scenarios, we need to distinguish dif-052 ferent keyword-document (query-document) rele-053 vance levels.",
"For instance, in our recommendation 054 system, we need to attach documents to some hot 055 keywords and then distribute the documents to the 056 people who follow the keywords.",
"In this circum-057 stance, the document and the keyword should better 058 have the same topic, which we call topic-aware rel-059 evance.",
"As shown in Table 1, for the hot keyword 060 cherry blossoms, the document (labeled",
"2) should 061 be the ideal document which should be attached be-062 cause it has the same topic with the keyword while 063 the document (labeled",
"1) should be a secondary 064 choice, because only several words or phrases in 065 this document match the keyword but the topics of 066 the document mismatch the keyword.",
"To tackle this problem, we define a three-068 level relevance: topic-aware relevance, partially-069 relevance and irrelevance.",
"The topic-aware rele-070 vance means the keyword and the document have 071 the same topic while the partially-relevance means 072 only part of the document matches with the key-073 word.",
"Our task is more challenging than previous 074 query-document matching tasks.",
"To capture the rel-075 evance between the keyword and the document at 076 above-mentioned three levels, we should not only 077 combine the latent topic of the document with its 078 deep representation, but also model complex inter-079 actions between the keyword and the document.",
"Previous neural query-document matching mod-081 els (similar as keyword-document matching) can be divided into two categories according to their model architectures (Guo et al., 2016).",
"One is the 084 Keyword : cherry blossoms Original Keyword : Label : 0 Irrelevance Case Translated Document : There was a flower shop which has opened for a few months.",
"representation-based models, in which representa085 tions for a query and a document are built indepen086 dently.",
"In other words, there are no word-level or 087 phrase-level interactions between the query and the 088 document.",
"For instance, the well-known DSSM 089 (Huang et al., 2013) has been verified effective in 090 query-document matching tasks.",
"However, these 091 representation-based series cannot model complex 092 interactive signals between a query and a document 093 effectively.",
"The other one we call interaction-based 094 models, in which word or phrase-level information 095 fusion occurs.",
"It has been verified more effective 096 to directly learn interactions than individual repre097 sentations.",
"Examples include ARC II (Hu et al., 098 2014), MatchPyramid (Pang et al., 2016).",
"Recently, 099 interaction-based methods are widely used in many 100 NLP tasks, like BIDAF (Seo et al., 2016) and R101 NET (Wang et al., 2017).",
"More recently, BERT (Devlin et al., 2018) has made great influence in the field of NLP.",
"It has 104 achieved state-of-the art results in many NLP ap105 plications.",
"The pre-trained language models can be 106 applied directly to this keyword-document match-107 ing task.",
"However, these above-mentioned types of keyword-document (query-document) matching models can be improved to be applied to our rec-111 ommendation system in the following aspects: (1) They do not analyse the topic of the document.",
"It is 113 expected that topic model can be used to solve this 114 problem.",
"(2) Previous interaction-based models 115 can still be improved to capture complex matching 116 signals between a query and a document.",
"To this 117 end, we propose the TITA model.",
"By topic-aware, 118 we introduce neural topic model (Miao et al., 2017) 119 to analyze the latent topic representation of the 120 document and then use this latent topic to further 121 encode the document.",
"By two-stage interaction, we 122 propose a two-stage interaction to model complex 123 interactions between a query and a document.",
"Our research contributions can be summarized as follows.",
"propose the TITA model to improve them.",
"129 Our model has two advantages: (1) it encodes 130 the latent topic embedding into the deep neu131 ral representation of the document, which can 132 aid the prediction of the topic-aware relevance.",
"133 (2) it can model more complex interactions be134 tween a keyword and a document through a 135 two-stage keyword-document interaction.",
"136 We perform extensive experiments on our 137 keyword-document matching dataset.",
"The re138 sults reveal that the proposed TITA model 139 outperforms the well-designed baselines.",
"140 From a real recommendation system, we 141 define a three-level relevance in keyword142 document matching task and construct a new 143 dataset.",
"144 Our model is applied in our recommendation 145 system and improves the click-through rate by 146 4.35%.",
"Depending on the model architectures, text matching models can be divided into two categories: representation-based and interaction-based.",
"The 151 former ones first transform every piece of text to a 152 representation with neural networks, such as Deep 153 Semantic Similarity Model(DSSM) (Huang et al., 154 2013), Convolutional Deep Semantic Similarity 155 Model(CDSSM) (Shen et al., 2014), LSTM-RNN 156 (Palangi et al., 2016), Bi-LSTM, etc.",
"Conversely, 157 the latter models focus on modeling the interac158 tion between a query and a document, such as 159 Arc-II(Hu et al., 2014), MatchPyramid (Pang et al., 160 2016), BIDAF(Seo et al., 2016) and RNET(Wang 161 et al., 2017).",
"Representation-based methods generate distributed representations from input texts through neural networks.",
"There are a number of works em165 ploying these methods, which differ mainly in the 166 procedure to construct the representations and the 167 way of calculating a matching score.",
"Huang et al. 168 (2013) propose DSSM, which is the first one to 169 apply a neural network.",
"In DSSM, each piece of 170 the query or the document is represented through 171 a multilayer perceptron and then a matching score 172 is calculated by the cosine similarity.",
"Compared 173 to traditional text matching models, DSSM shows 174 significant improvements.",
"Compared with representation-based methods, the interaction-based methods aim to capture di-",
"rect matching features: the degree and the struc-178 ture of matching.",
"The interaction-based model, 179 which means query-document interaction occurs 180 before matching, can somewhat solve the above-181 mentioned problem in the representation-based 182 models.",
"It has been verified more effective to di-183 rectly learn interactions than individual represen-184 tations.",
"Hu et al. (2014) propose ARC-II, which 185 first represents the query and the document by the 186 knowledge of each other, and adjusts the sliding 187 windows in the first convolution layer to focus on 188 adjacent word vectors.",
"Inspired by the success of 189 convolutional neural network in image recognition, 190 Pang et al. (2016) propose MatchPyramid to model 191 text matching as the problem of image recogni-192 tion.",
"Leveraging the attention mechanism, Seo 193 et al. (2016) and Wang et al. (2017) introduce at-194 tention mechanism to improve the matching degree 195 of the query and the document.",
"Recently BERT (Devlin et al., 2018) has caused a stir in the field of NLP.",
"It has achieved state-198 of-the-art results in many NLP applications.",
"The 199 pre-trained language model series can be applied 200 directly to this keyword-document matching task.",
"Topic models aim to discover the topics as well as the topic representations of documents in the document collection.",
"It learns latent topics from 204 documents in an unsupervised manner.",
"Topics are 205 captured as latent variables that have a word prob-206 ability distribution.",
"Topic models have a long tra-207 dition in this scenario area as well, such as biblio-208 metrics, translations and recommendations.",
"Hall et al. (2008) describe the flow of topics between papers.",
"Zhao and Xing (2006) enable 211 word alignment process to leverage topical contents 212 of document-pairs.",
"Jiang et al. (2015) use topic 213 model to enrich users' information for effective 214 inference.",
"In this section, we describe details of the TITA model.",
"As depicted in Figure 1, our TITA model 218 has three major components: (1) a two-stage 219 keyword-document interaction, see Part A; (2) a 220 neural topic model, see Part B; (3) a joint train-221 ing mechanism, see Part C. First, we introduce 222 the task definition.",
"Then, we elaborate the two-223 stage keyword-document interaction and neural 224 topic model in the TITA model respectively.",
"Fi-225 nally, a joint training mechanism is introduced to 226 incorporate latent topics to the deep representation 227 Figure 1: The architecture of the TITA model, which consists of three major components: (1) a two-stage keyword-document interaction, which combines the multi-head attention in BERT and a successive cross representation layer to link the keyword and the document; (2) a neural topic model, which calculate a latent topic of the document to further enrich the document representation; (3) a joint training mechanism to train the model in a joint process.",
"of the document and train the model in a joint pro228 cess.",
"Notably, we conduct experiments using both 229 Bi-LSTM and BERT as text encoders.",
"Here, we 230 only describe the proposed methods with BERT as 231 the encoder for simplicity.",
"In our keyword-document matching task, we explicitly model the relevance between a keyword and a document as a relevance level prediction task.",
"236 The input of the task is a keyword Q and a docu237 ment D .",
"The output r Q,D { 0 , 1 , 2 } indicates the 238 keyword-document relevance levels.",
"The keyword-document matching model is desired to capture the rich interactions between the keyword and the document in the matching process.",
"244 As show in Table 1, the keyword cherry blossoms 245 and the topic-aware relevance document have many 246 correlating signals, e.g., the phrase cherry blos247 soms in the keyword and the phrase flowering 248 period in the document.",
"The two-stage keyword-document interaction in the TITA model is to fuse the information of the document and the keyword.",
"In the first-stage in252 teraction, we employ BERT (Devlin et al., 2018) 253 as the encoder to simultaneously model the se254 quential information of the keyword and the doc255 ument along with their interactive relationship 256 by the multi-head self-attention mechanism.",
"In 257 the second-stage interaction, we perform a cross-258 attention between the representations of the key-259 word and the document to further capture their 260 interactive relationship.",
"First-stage Interaction As shown in Figure 1, in the first-stage interaction, we concatenate the key-263 word and the document by a separator [SEP] as input and then feed them into BERT.",
"The input con-265 sists of the keyword characters c Q = { c Qm } Mm =1 and 266 the document characters c D = { c Dn } Nn =1 , where M , 267 N indicate the length of the keyword characters and 268 the document characters respectively.",
"The states in 269 the last hidden layer of BERT can be regarded as 270 the encoding of the document, i.e., e D .",
"e D = BERT ([ c Q ; [SEP] ; c D ]) (1) where e D = { e Dn } Nn =1 RN d .",
"In each hidden 273 layer of BERT, the multi-head self-attention mech-274 anism is performed as the following equations: 275 Attention ( Q, K, V ) = softmax ( Q KT d k ) V (2) 276 MultiHead ( Q, K, V ) = Concat ( hd 1 , ..., hd h ) WO (3) 277 hd i = Attention ( QW Qi , KW Ki , V W Vi ) (4) 278 where Q , K and V are the output hidden states 279 of the former layer.",
"W Qi , W Ki and W Vi are the 280 parameters corresponding to each head.",
"WO is 281 the output projection parameter.",
"Second-stage Interaction Note that in the firststage interaction, the query and the document characters are concatenated as input.",
"The model learns 286 keyword-keyword, keyword-document, document287 document interactions simultaneously through 288 self-attention mechanism in transformer blocks 289 of BERT.",
"In our keyword-document matching 290 task, keyword-document interaction is more im291 portant than document-document and keyword292 keyword interactions.",
"Therefore, we introduce the 293 second-stage interaction layer to conduct keyword294 document contextualization independently.",
"Firstly, 295 we obtain the representation of the keyword e Q by 296 the BERT encoder.",
"where e Q = { e Q m } Mm =1 .",
"Then, we compute a simi299 larity matrix using the keyword embedding and the 300 document embedding.",
"S = ( s mn ) RM N s mn = (cid:10) e Qm , e Dn (cid:11) v T R",
"where (cid:104) e Qm , e Dn (cid:105) represents a element-wise multiplication, v R d is a trainable weight vector.",
"In this 305 similarity matrix, the value s mn indicates the link 306 between the m -th character embedding in the key307 word and the representation of the n -th character in 308 the document.",
"Then, we apply this similarity ma309 trix to further encode the keyword by calculating 310 attention over the document: 311 u Q = (cid:8) u Qm (cid:9) M m =1 (8) 312 u Q m = N (cid:88) n =1 a mn e Dn R d (9) 313 a m = softmax ( s m ) RN (10) 314 where s m = { s mn } Nn =1 and a m means which 315 characters in the document should be attended re316 garding the m -th character of the keyword.",
"We 317 then add the original keyword representation e Q 318 with u Q to get the keyword embedding: 319 u Q = u Q + e Q (11) 320 Similarly, we use this similarity matrix to get the 321 document representation u D RN d .",
"As show in Table 1, the topic-aware relevance case and the partially-relevance case both have some words relevant to cherry blossoms.",
"But the topic 326 of the topic-aware relevance document is more re-327 lated with the keyword cherry bollosoms.",
"By 328 contrast, the topic of the partial-relevance doc-329 ument is more likely to be a document about 330 a restaurant, which is not related to the key-331 word cherry blossom.",
"Following this direction, 332 analysing the topic of the document is a way to pro-333 mote keyword-document matching models.",
"Specif-334 ically, we introduce neural topic model to produce 335 the latent topic and then use it to update the up-336 stream representation of the document.",
"As shown in Figure 1, the input of the neural topic model is a word sequence of the document w D .",
"The bag-of-words (BOW) representation of 340 the document is x D R | V w | , where | V w | is the 341 size of the word vocabulary.",
"Assume that the latent 342 variable represents the topic distribution in the 343 document w D .",
"The probabilistic topic models, like 344 LDA(Blei et al., 2003), apply the Dirichlet distribu-345 tion as the prior of the latent variable Dir ( ) , 346 where is the parameter of the Dirichlet distribu-347 tion.",
"By contrast, in the neural topic model, Gaus-348 sian Softmax Construction (Miao et al., 2017) is 349 applied using a neural network to parameterise the 350 topic distribution GGSM ( 0 , 20 ) : 351 x N ( 0 , 20 ) (12) 352 = softmax ( WT 1 x ) (13) 353 where W 1 is a trainable parameter.",
"0 and 0 are 354 the parameters of the prior Gaussian distribution N .",
"355 Assuming there are K topics, if z n { 1 , ..., K } 356 is the topic assignment for the observed word w D n , 357 then: 358 z n Multi ( ) (14) 359 z n R | V w | is a topic distribution over the 360 words in the vocabulary given z n .",
"The topic distri-361 bution can be calculated by the similarity between 362 the topic and the words in the vocabulary: 363 z n = softmax ( v T t z n ) (15) 364 where t R d K is the topic vector which is a 365 parameter of the neural topic model, v R d | V w | 366 is the word vector.",
"K is the total topic number.",
"367 Then, the generative probability of each word w n 368 can be calculated by: 369 p ( w n | z n ) = Multi ( z n ) (16) 370 The neural topic model is implemented by an 371 Auto-Encoding Variational Bayes (AEVB) algo372 rithm (Kingma and Welling, 2013).",
"The encoder 373 is used to approximate the true posterior of the 374 latent variable p ( | x ) .",
"Specifically, the encoder 375 takes the BOW (Bag-of-Words) representation of 376 the document as the input and generates the poste377 rior Gaussian Softmax Construction parameters 378 and 2 through neural networks.",
"In practice, the la379 tent variable is sampled by the reparameterization 380 trick.",
"= f 1 ( x D ) , log = f 2 ( x D ) GGSM ( , 2 )",
"where f ( ) is a multi-layer perceptron.",
"The de384 coder is responsible for reconstructing the docu385 ment by maximizing the log likelihood of the input 386 document.",
"The latent variable z n can be integrated 387 out as follows.",
"Finally, the variational lower bound of the neural topic model is obtained by combining the reconstruction error term and the KL divergence term.",
"393 The parameters of neural topic model can be trained 394 by maximizing this function.",
"where p (cid:48) ( | D ) means the variational posterior distribution of document D , approximating the true posterior p ( | D ) .",
"It's expected that introducing topic model can benefit the model in the prediction of the abovementioned three levels.",
"In this subsection, we de403 sign a joint training mechanism to incorporate the 404 latent topic representation to further encode the 405 document and train the model in a joint process.",
"406 As described above, u D is the document repre-407 sentation after the two-stage keyword-document 408 interaction.",
"RK | V w | is the topic distribu-409 tion over the vocabulary, where ij means that the 410 weight between the i -th topic and the j -th word.",
"411 We are inspired from an end2end memory net-412 work(Sukhbaatar et al., 2015), which is used to 413 memorize multiple sentences in question answer-414 ing task.",
"Similarly, in TITA, we intend to embed 415 the topic-word weight into the deep representation 416 of the document.",
"417 As depicted in Figure 1 part C, the input of mem-418 ory network is and the deep document repre-419 sentation after the two-stage keyword-document 420 interaction u D .",
"is memorized in the memory of 421 the network, where k means the representation of 422 the k -th topic over the vocabulary of size | V w | .",
"The TITA model has two memory hops as shown in the Figure 1.",
"In the following, we describe the 425 model in a single memory hop operation for sim-426 plicity.",
"One hop has two major components: the 427 input memory and the output memory.",
"In the in-428 put memory representation, a matching score is 429 calculated taking and u D as input: 430 p k = softmax ( k V u D ) (22) 431 where V R | V w | d is a trainable weight vector.",
"In 432 the output memory representation part, we compute 433 the slot output vector using the output memory and 434 the matching score: 435 o D = K (cid:88) k =1 ( p k c k ) (23) 436 437 o D = W o ( o D + u D ) (24) 438 where c RK d is a trainable output memory.",
"439 We compute two relevance vectors r 1 and r 2 .",
"One 440 takes u D and u Q as input, while the other one using 441 u Q and o D .",
"We merge the two relevance vectors 442 and then apply softmax function to get the final 443 relevance level: 444 r 1 = WR 1 [ u D ; u Q ] + b R 1 (25) 445 r 2 = WR 2 [ o D ; u Q ] + b R 2 (26) 446 r Q,D = softmax ( WR [ r 1 ; r 2 ] + b R ) (27) 447 where [; ] is vector concatenation operation and 448 W , b are all trainable variables.",
"The TITA model integrates three different parts as shown in Figure 1: a two-stage keyworddocument interaction, a neural topic model and a joint training mechanism.",
"In the training pro453 cess, the neural topic model and the joint model are 454 trained alternatively to a convergent status.",
"We first 455 train the neural topic model for epochs to get a 456 topic distribution over vocabulary, i.e., .",
"Then in 457 the joint training process, the model takes the out458 put of the two-stage keyword-document interaction 459 and the output of the neural topic model to conduct 460 training the model parameters for classification.",
"In this section, we conduct experiments on our keyword-document matching dataset from our recommendation system and the results demonstrate the superiority of the TITA model compared to the baselines.",
"We apply accuracy as the evaluation metric.",
"In 468 this paper, we care mostly about rigidly distinguish469 ing the three keyword-document matching levels.",
"470 We believe that documents of different matching 471 levels have different usages.",
"For instance, in our 472 online recommendation system, the goal of our 473 model is to recall the topic-aware relevance docu474 ments and there is no need to rank documents of 475 each keyword.",
"Our keyword-document matching dataset is in Chinese, derived from our recommendation system.",
"479 The domains mainly lie in food (e.g., beef and 480 western food), sports (e.g., football and jogging), 481 entertainment (e.g., photography and comedy) and 482 so on.",
"For all the 8901 keywords, we get 10 doc483 uments for each keyword by users' behavior in 484 our recommendation system, e.g., click-through.",
"485 As for how to choose 10 documents for each key486 word in the baseline online recommendation sys487 tem, for a certain keyword, hundreds of documents 488 are recalled for different users, in which the topic489 aware relevance documents tend to have high click490 through rate while irrelevance ones tend to have 491 low click-through rate.",
"For each keyword, we se492 lect 6 documents which have high click-through 493 rate as well as 4 documents with low click-through 494 rate.",
"According to our analysis, this setting tends 495 to generate similar ratios of three-level relevance 496 documents for all the keywords.",
"As a result, each 497 keyword has 10 corresponding documents.",
"Each 498 keyword-document pair is manually annotated at 499 different relevance levels.",
"As shown in Table 1, 500 relevance level-2 means the document and the key-501 word have the same topic, while relevance level-0 502 means the keyword and the document are irrele-503 vant.",
"Relevance level-1 is an intermediate rele-504 vance level, which means only a small portion of 505 the document describes some useful information of 506 the keyword.",
"To make the ratios of level-2, level-1, 507 level-0 cases nearly the same, we randomly delete 508 some documents.",
"As a result, we have 8,901 key-509 words and 66,019 corresponding documents.",
"Fi-510 nally, the dataset is randomly split into 50% for 511 training, 25% for validation and 25% for testing.",
"In the experiment, we set the cutoff length of the document sequence as 512 characters and the cutoff length of the keyword as 16 characters in Chinese.",
"516 The size of the character vocabulary V c is 21128.",
"517 The size of the word vocabulary for neural topic 518 model V w is 5000, which contains top frequent 519 words after deleting stop words.",
"We use pre-trained 520 embeddings by BERT to initialize the character 521 embeddings.",
"We directly use BERT base model 522 released by Google with the hidden size of 768.",
"In 523 the neural topic model, we set the number of topics 524 # K = 50.",
"We use all the documents in the training 525 set to train the neural topic model for 50 epochs.",
"526 The topic embedding size d is set to 384 and we set 527 the word embedding to the same size.",
"The padding 528 is masked to avoid affecting the gradient.",
"We use 529 the optimization algorithm Adam (Kingma and Ba, 530 2014) with learning rate 5e-5 and batch size as 32.",
"531 As for the parameters of Adam, 1 and 2 are set 532 to 0.9 and 0.999 respectively.",
"As described in the Introduction Section, the keyword-document matching models can be di-536 vided into two categories: representation-based and interaction-based matching model.",
"As shown in 538 Table 2, many strong baselines are included in the 539 performance comparison.",
"Table 2 shows that the TITA model outperforms all the models evaluated by accuracy in this keyword-543 document matching task.",
"From this table, we have 544 the other observations: (1) The TITA model is more 545 competent in this task.",
"It outperforms ARC-II by 546 7.06% and outperforms BERT by 5.38%, which 547 Figure 2: The architecture of online deployment of the TITA model, which consists of two major components: an offline data processor module and an online data usage module.",
"strongly proves that topic model and two-stage in548 teraction can benefit this task.",
"(2) Most interaction549 based models behave better than representation550 based ones.",
"(3) Pre-trained word embeddings can 551 also aid this task.",
"To further examine the effectiveness of the neural topic model and the two-stage keyworddocument interaction, we make a detailed ablation analysis as shown in Table 3.",
"Bi-LSTM : The TITA model is based on BiLSTM, which encodes a query and a document independently before matching.",
"+ Neural Topic Model : Bi-LSTM plus neu-560 ral topic model outperforms the Bi-LSTM baseline by a large scale (i.e., 2.02%), which indicates that the keyword-document match-563 ing task can benefit from the latent topic rep-564 resentation of the document.",
"+ First-stage Keyword-document Interac-566 tion : After adding the first-stage keyword-567 document interaction, the model behaves bet-568 ter.",
"It proves that concatenating the query and 569 document to conduct interaction is effective.",
"+ Second-stage Keyword-document Inter-571 action : We add the second-stage interaction to make further improvement.",
"We infer that 573 the cross attention is more capable in captur-574 ing interactions between a keyword and a doc-575 ument.",
"",
"Replace Bi-LSTM with BERT : We apply BERT to initialize the word representation, whose parameters are to be finetuned.",
"We can 579 observe that the model performs even better 580 than the former one, which reveals that the 581 pre-trained word representations are useful in 582 the keyword-document matching task.",
"Because the model is heavy and the total numbers of keywords are limited (8901 in total), we gener-586 ate data in offline, as shown in Figure 2.",
"In offline 587 data processor, we first use BM25 to retrieve and 588 rank billions of document candidates and keep the 589 top10000 candidates for TITA model to further 590 conduct query-document relation prediction.",
"After 591 that we can get a ranked list of topic matching doc-592 uments and partially relevance documents for all 593 keywords, which will be stored in a KV database.",
"594 In the online data usage, we recall documents of all 595 the keywords, which the user follows, for further 596 re-ranking in our recommendation system.",
"As for the online gains, we attached more than one million topic-matching documents for the 8901 keywords.",
"These documents are all distributed 600 in our recommendation system with the number 601 of views about 1 .",
"9 e 6 /day .",
"We improve the click602 through rate by 4.35% (from 6.52% to 10.87%), 603 which is a great improvement.",
"We define a new keyword-document matching task with three relevance levels from a real recommendation system, to address the problem that different scenarios require documents of different relevance levels.",
"Further, we propose a TITA model to dis610 tinguish different relevance levels, which can cap611 ture latent topics of a document and hold complex 612 keyword-document interactions at the same time.",
"613 Extensive experiments reveal the superiority of our 614 model compared to other strong baselines.",
"Ab615 lation test shows that the model can improve the 616 keyword-document matching in the same way as 617 we think.",
"Moreover, our model shows excellent per618 formance in our recommendation system, in which 619 it improves the click-through rate by 4.35%."
] | [
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"method",
"objective",
"objective",
"abstain",
"method",
"abstain",
"method",
"objective",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"result",
"result"
] |
[
"Adversarial attacks are a major challenge faced by current machine learning research.",
"These purposely crafted inputs fool even the most advanced models, precluding their deployment in safety-critical applications.",
"Extensive research in computer vision has been carried to develop reliable defense strategies.",
"However, the same issue remains less explored in natural language processing.",
"Our work presents a model-agnostic detector of adversarial text examples.",
"The approach identifies patterns in the logits of the target classifier when perturbing the input text.",
"The proposed detector improves the current state-of-the-art performance in recognizing adversarial inputs and exhibits strong generalization capabilities across different NLP models, datasets, and word-level attacks.",
"Despite recent advancements in Natural Language Processing (NLP), adversarial text attacks continue to be highly effective at fooling models into making incorrect predictions (Ren et al., 2019; Wang et al., 2019; Garg and Ramakrishnan, 2020).",
"In particular, syntactically and grammatically consistent attacks are a major challenge for current research as they do not alter the semantical information and are not detectable via spell checkers (Wang et al., 2019).",
"While some defense techniques addressing this issue can be found in the literature (Mozes et al., 2021; Zhou et al., 2019; Wang et al., 2019), results are still limited in performance and text attacks keep evolving.",
"This naturally raises concerns around the safe and ethical deployment of NLP systems in real-world processes.",
"Previous research showed that analyzing the model's logits leads to promising results in discriminating manipulated inputs (Wang et al., 2021; Aigrain and Detyniecki, 2019; Hendrycks and Gimpel, 2016).",
"However, logits-based adversarial detectors have been only studied on computer vision applications.",
"Our work transfers this type of methodology to the NLP domain and its contribution can be summarized as follows: (1) We introduce a logits-based metric called Word-level Differential Reaction (WDR) capturing words with a suspiciously high impact on the classifier.",
"The metric is model-agnostic and also independent from the number of output classes.",
"(2) Based on WDR scores, we train an adversarial detector that is able to distinguish original from adversarial input texts preserving syntactical correctness.",
"The approach substantially outperforms the current state of the art in NLP.",
"(3) We show our detector to have full transferability capabilities and to generalize across multiple datasets, attacks, and target models without needing to retrain.",
"Our test configurations include transformers and both contextual and genetic attacks.",
"(4) By applying a post-hoc explainability method, we further validate our initial hypothesisi.e. the detector identifies patterns in the WDR scores.",
"Furthermore, only a few of such scores carry strong signals for adversarial detection.",
"Given an input sample x and a target model f , an adversarial example x (cid:48) = x + x is generated by adding a perturbation x to x such that arg max f ( x ) = y (cid:54) = y (cid:48) = arg max f ( x (cid:48) ) .",
"Although this is not required by definition, in practice the perturbation x is often imperceptible to humans and x (cid:48) is misclassified with high confidence.",
"In the NLP field, x consists in adding, removing, or replacing a set of words or characters in the original text.",
"Unlike image attacksvastly studied in the literature (Zhang et al., 2020) and operating in high-dimensional continuous input spacestext perturbations need to be applied on a discrete input space.",
"Therefore, gradient methods used for images such as FGSM (Goodfellow et al., 2014) or BIM (Kurakin et al., 2017) are not useful since they require a continuous space to perturb x .",
"Based on the text perturbation introduced, text attacks can be distinguished into two broad categories.",
"Visual similarity: These NLP attacks generate adversarial samples x (cid:48) that look similar to their corresponding original x .",
"These perturbations usually create typos by introducing perturbations at the character level.",
"DeepWordBug (Gao et al., 2018), HotFlip (Ebrahimi et al., 2018) , and VIPER (Eger et al., 2019) are well-known techniques belonging to this category.",
"Semantic similarity: Attacks within this category create adversarial samples by designing sentences that are semantically coherent to the original input and also preserve syntactical correctness.",
"Typical word-level perturbations are deletion, insertion, and replacement by synonyms (Ren et al., 2019) or paraphrases (Iyyer et al., 2018).",
"Two main types of adversarial search have been proposed.",
"Greedy algorithms try each potential replacement until there is a change in the prediction (Li et al., 2020; Ren et al., 2019; Jin et al., 2020).",
"On the other hand, genetic algorithms such as Alzantot et al. (2018) and Wang et al. (2019) attempt to find the best replacements inspired by natural selection principles.",
"Defenses based on spell and syntax checkers are successful against character-level text attacks (Pruthi et al., 2019; Wang et al., 2019; Alshemali",
"and Kalita, 2019).",
"In contrast, these solutions are not effective against word-level attacks preserving language correctness (Wang et al., 2019).",
"We identify methods against word-level attacks belonging to two broad categories: Robustness enhancement: The targeted model is equipped with further processing steps to not be fooled by adversarial samples without identifying explicitly which samples are adversarial.",
"For instance, Adversarial Training (AT) (Goodfellow et al., 2014) consists in training the target model also on manipulated inputs.",
"The Synonym Encoding Method (SEM) (Wang et al., 2019) introduces an encoder step before the target model's input layer and trains it to eliminate potential perturbations.",
"Instead, Dirichlet Neighborhood Ensemble (DNE) (Zhou et al., 2020) and Adversarial Sparse Convex Combination (ASCC) (Dong et al., 2021) augment the training data by leveraging the convex hull spanned by a word and its synonyms.",
"Adversarial detection: Attacks are explicitly recognized to alert the model and its developers.",
"Adversarial detectors were first explored on image inputs via identifying patterns in their corresponding Shapley values (Fidel et al., 2020), activation of specific neurons (Tao et al., 2018), and saliency maps (Ye et al., 2020).",
"For text data, popular examples are Frequency-Guided Word Substitution (FGWS) (Mozes et al., 2021) and learning to DIScriminate Perturbation (DISP) (Zhou et al., 2019).",
"The former exploits frequency properties of replaced words, while the latter uses a discriminator to find suspicious tokens and uses a contextual embedding estimator to restore the original word.",
"Inspecting output logits has already led to promising results in discriminating between original and adversarial images (Hendrycks and Gimpel, 2016; Pang et al., 2018; Kannan et al., 2018; Roth et al., 2019).",
"For instance, Wang et al. (2021) trains a recurrent neural network that captures the difference in the logits distribution of manipulated samples.",
"Aigrain and Detyniecki (2019), instead, achieves good detection performance by feeding a simple three-layer neural network directly with the logit activations.",
"Our work adopts a similar methodology but focuses instead on the NLP domain and thus text attacks.",
"In this case (1) logits-based metrics to identify adversarial samples should be tailored to 7807 Target Model Adversarial Detector is original is adversarial : Run without and measure reaction Figure 1: Overview of the proposed method.",
"the new type of input and (2) detectors should be tested on currently used NLP models such as transformers (Devlin et al., 2019).",
"The defense approach proposed in this work belongs to the category of adversarial detection .",
"It defends the target model from attacks generated via word-level perturbations belonging to the semantic similarity category.",
"The intuition behind the method is that the model's reaction to original-and adversarial samples is going to differ even if the inputs are similar.",
"Hence, it relies on feature attribution explanations coupled with a machine learning model to learn such difference and thus identify artificially crafted inputs.",
"Figure 1 shows the overall pipeline of the approach.",
"Given a text classifier f trained on the task at hand, the pipeline's goal is to detect whether the currently fed input x is adversarial.",
"In 3.1, we explain in greater detail how we measure the model f 's reaction to a given input x .",
"This quantity later indicated with WDR ( x, f ) is then passed to the adversarial detector, whose training procedure is described in 3.2.",
"Finally, in 3.3, we provide detailed information about the setup of our experiments such as target models, datasets, and attacks.",
"Adversarial attacks based on semantic similarity replace the smallest number of words possible to change the target model's prediction (Alzantot et al., 2018).",
"Thus, we expect the replacements transforming x into x (cid:48) to play a big role for the output.",
"If not, we would not have f ( x (cid:48) ) substantially different from f ( x ) .",
"To assess the reaction of the target model f to a given input x , we measure the impact of a word via the Word-level Differential Reaction (WDR) metric.",
"Specifically, the effect of replacing a word x i on the prediction y = arg max y p ( y | x ) is quantified by WDR ( x i , f ) = f ( x \\ x i ) y max y (cid:54) = y f ( x \\ x i ) y where f ( x \\ x i ) y indicates the output logit for class y for the input sample x without the word x i .",
"Specifically, x i is replaced by an unknown word token .",
"If x is adversarial, we could expect to find perturbed words to have a negative WDR ( x i , f ) as without them the input text should recover its original prediction.",
"Table 1 shows an example pair of original and adversarial text together with their corresponding WDR ( x i , f ) scores.",
"The original class is recovered after removing a perturbed word in the adversarial sentence.",
"This switch results in a negative WDR.",
"However, even if the most important word is removed from the original sentence ( 'worst' ), the predicted class does not change and thus WDR ( x i , f ) > 0 .",
"Our adversarial detector takes as input WDR ( x, f ) , i.e. the sorted list of WDR scores WDR ( x i , f ) for all words x i in the input sentence.",
"As sentences vary in length, we pad the list with zeros to ensure a consistent input length for the detector.",
"The adversarial detector is a machine-learning classifier that takes the model's reaction WDR ( x, f ) as input and outputs whether the input x is adversarial or not.",
"To train the model, we adopt the following multi-step procedure: 7808 Original sentence: Neg.",
"This leads to a balanced dataset containing both normal and perturbed samples.",
"The labels used are original and adversarial respectively.",
"(S2)",
"WDR computation: For each element of the mixed dataset, we compute the WDR ( x, f ) scores as defined in Section 3.1.",
"Once more, this step creates a balanced dataset containing the WDR scores for both normal and adversarial samples.",
"(S3)",
"Detector training: The output of the second step (S2) is split into training and test data.",
"Then, the training data is fed to the detector for training along with the labels defined in step (S1) .",
"Please note that no assumption on f is made.",
"At the same time, the input of the adversarial detector i.e. the WDR scoresdoes not depend on the number of output classes of the task at hand.",
"Hence, the adversarial detector is model-agnostic w.r.t. the classification task and the classifier targeted by the attacks.",
"In our case, we do not pick any particular architecture for the adversarial detector.",
"Instead, we experiment with a variety of models to test their suitability for the task.",
"In the same spirit, we test our setting on different target classifiers, types of attacks, and datasets.",
"and Lee, 2005), Yelp Polarity (YELP) (Zhang et al., 2015), and AG News (Zhang et al., 2015).",
"The first three are binary sentiment analysis tasks in which reviews are classified in either positive or negative sentiment.",
"The last one, instead, is a classification task where news articles should be identified as one of four possible topics: World , Sports , Business , and Sci/Tech .",
"As main target model for the various tasks we use DistilBERT (Sanh et al., 2020) fine-tuned on IMDb.",
"We choose DistilBerta transformer language model (Vaswani et al., 2017)as transformer architectures are widely used in NLP applications, established as state of the art in several tasks, and generally quite resilient to adversarial attacks (Morris et al., 2020).",
"Furthermore, we employ a Convolutional Neural Network (CNN) (Zhang et al., 2015), a Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997), and a full BERT model (Devlin et al., 2019) to test transferability to different target architectures.",
"All models are provided by the TextAttack library (Morris et al., 2020) and are already trained 1 on the datasets used in the experiments.",
"We generate adversarial text attacks via four well-established word-substitution-based techniques: Probability Weighted Word Saliency (PWWS) (Ren et al., 2019), Improved Genetic Algorithm (IGA) (Jia et al., 2019), TextFooler (Jin et al., 2020), and BERT-based Adversarial Examples (BAE) (Garg and Ramakrishnan, 2020).",
"The first is a greedy algorithm that uses word saliency 1 textattack.readthedocs.io/en/latest/3recipes/models.html, released under MIT License 7809 and prediction probability to determine the word replacement order (Ren et al., 2019).",
"IGA, instead, crafts attacks via mutating sentences and promoting the new ones that are more likely to cause a change in the output.",
"TextFooler ranks words by importance and then replaces the ones with the highest ranks.",
"Finally, BAE, leverages a BERT language model to replace tokens based on their context (Garg and Ramakrishnan, 2020).",
"All attacks are generated using the TextAttack library (Morris et al., 2020).",
"We investigate several combinations of datasets, target models, and attacks to test our detector in a variety of configurations.",
"Because of its robustness and well-balanced behavior, we pick the average F1-score as our main metric for detection.",
"However, as in adversarial detection false negatives can have major consequences, we also report the recall on adversarial sentences.",
"Later on, in 4.3, we also compare performance with other metrics such as precision and original recall and observe how they are influenced by the chosen decision threshold.",
"In this section, we report the experimental results of our work.",
"In 4.1, we study various detector architectures to choose the best performing one for the remaining experiments.",
"In 4.2, we measure our pipeline's performance in several configurations (target model, dataset, attack) and we compare it to the current state-of-the-art adversarial detectors.",
"While doing so, we also assess transferability via observing the variation in performance when changing the dataset, the target model, and the attack source without retraining our detector.",
"Finally, in 4.3, we look at how different decision boundaries affect performance metrics.",
"The proposed method does not impose any constraint on which detector architecture should be used.",
"For this reason, no particular model has been specified in this work so far.",
"We study six different detector architectures in one common setting.",
"We do so in order to pick one to be utilized in the rest of the experiments.",
"Specifically, we compare XGBoost (Chen and Guestrin, 2016), AdaBoost (Schapire, 1999), LightGBM (Ke et al., 2017), SVM (Hearst et al., 1998), Random Forest (Breiman, 2001), and a Perceptron NN (Singh and Banerjee, 2019).",
"All models are compared on adversarial attacks generated with PWWS from IMDb samples and targeting a DistilBERT model fine-tuned on IMDb.",
"A balanced set of 3 , 000 instances 1 , 500 normal and 1 , 500 adversarial was used for training the detectors while the test set contains a total of 1360 samples following the same proportions.",
"As shown in Table 2, all architectures achieve competitive performance and none of them clearly appears superior to the others.",
"We pick XGBoost (Chen and Guestrin, 2016) as it exhibits the best F1-score.",
"The main hyperparameters utilized are 29 gradient boosted trees with a maximum depth of 3 and 0 .",
"34 as learning rate.",
"We utilize this detector architecture for all experiments in the following sections.",
"Tables 3a and 3b report the detection performance of our method in a variety of configurations.",
"In each table, the first row represents the settingi.e. combination of target model, dataset, and attack typein which the detector was trained.",
"The remaining rows, instead, are w.r.t. settings in which we tested the already trained detector without performing any kind of fine-tuning or retraining.",
"We utilize a balanced training set of size 3 , 000 and 2 , 400 samples respectively for the detectors trained on IMDb adversarial attacks (Table 3a) and on AG News attacks (Table 3b).",
"All results are obtained using balanced test sets containing 500 samples.",
"The only exceptions are the configurations (DistilBERT, RTMR, IGA) and (DistilBERT, AG News, IGA) which used test sets of size 480 and 446 respectively due to data availability.",
"able and was already proven to be better than DISP (Zhou et al., 2019) by its authors.",
"Hence, we utlize FGWS as baseline for comparison in all configurations.",
"Analogously to our method, FGWS is trained on the configuration in the first row of each table and then applied to all others.",
"More in detail, we fine-tune its frequency substitution threshold parameter (Mozes et al., 2021) until achieving a 7811 best fit value of = 0 .",
"9 in both training settings.",
"From what can be seen in both tables, the proposed method consistently shows very competitive results in terms of F1-score and outperforms the baseline in 22 configurations out of 28 (worse in 5 ) and is on average better by 8 .",
"96 percentage points.",
"At the same time, our methods exhibits a very high adversarial recall, showing a strong capability at identifying attacks and thus producing a small amount of false negatives.",
"Generalization to different target models: Starting from the training configurations, we vary the target model while maintaining the other components fixed (rows 2-4 of each table).",
"Here, the detector achieves state-of-the-art results in all test settings, occasionally dropping below the 90% F1-score on a few simpler models like LSTM and CNN while not exhibiting any decay on more complex models like BERT.",
"Generalization to different datasets: Analogous to the previous point, we systematically substitute the dataset component for evaluation (rows 5-6 of each table).",
"We notice a substantial decay in F1-score when testing with RTMR (74.1 75.8%) since samples are short and, therefore, may contain few words which are very relevant for the prediction, just like adversarial replacements.",
"Nevertheless, removing adversarial words still result in a change of prediction to the original class thereby preserving high adversarial",
"recall.\" Generalization to different attacks: Results highlight a good reaction to all other text attacks (rows 7-9 of each table) and even experiences a considerable boost in performance against TextFooler.",
"In contrast, the baseline FGWS significantly suffers against more complex attacks such as BAE, which generates context-aware perturbation.",
"Besides testing generalization properties via systematically varying one configuration component at the time, we also test on a few settings presenting changes in multiple ones (rows 10-14 of each table).",
"Also in these settings, the proposed method maintains a very competitive performance, with noticeable drops only on the RTMR dataset.",
"Depending on the application in which the detector is used to monitor the model and detect malicious input manipulations, different performance metrics can be taken into account to determine whether it",
"is safe to deploy the model.",
"For instance, in a very safety-critical application where successful attacks lead to harmful consequences, adversarial recall becomes considerably more relevant as a metric than the F1-score.",
"We examine how relevant metrics change in response to different choices for the discrimination threshold.",
"Please note that a lower value corresponds to more caution, i.e. we are more likely to output that a certain input is adversarial.",
"Figure 2 and Table 4 show performance results w.r.t. different threshold choices.",
"We notice that decreasing its value from 0.5 to 0.15 can increase the adversarial recall to over 98% at a small cost in terms of precision and F1-score ( < 2 percentage points).",
"Applications where missing attacks i.e. false negativeshave disastrous consequences could take advantage of this property and consider lowering the decision boundary.",
"This is particularly true if attacks are expected with a low frequency and an increase in false positive incurs only minor 7812 costs.",
"Section 4 discussed quantitative results and emphasized the competitive performance that the proposed approach achieves.",
"Here, instead, we focus on the qualitative aspects of our research findings.",
"For instance, we try to understand why our pipeline works while also discussing challenges, limitations, ethical concerns, and future work.",
"The proposed pipeline consists of a machine learning classifiere.g. XGBoostfed with the model's WDR scores.",
"The intuition behind the approach is that words replaced by adversarial attacks play a big role in altering the target model's decision.",
"Despite the competitive detection performance, the detector is itself a learning algorithm and we cannot determine with certainty what patterns it can identify.",
"To validate our original hypothesis, we apply a popular explainability techniqueSHAP (Lund-berg and Lee, 2017)to our detector.",
"This allows us to summarize the effect of each feature at the dataset level.",
"We use the official implementation 2 to estimate the importance of each WDR and use a beeswarm plot to visualize the results.",
"influencing the adversarial detector the most.",
"Since in our pipeline WDR scores are sorted based on their magnitude, this means that the largest WDR of each prediction are the most relevant for the detector.",
"This is consistent with our hypothesis that replaced words substantially change output logits and thus measuring their variation is effective for detecting input manipulations.",
"As expected, negative values for the WDR correspond to a higher likelihood of the input being adversarial.",
"We also notice that features after the first three do not appear in the naturally expected order.",
"We believe this is the case as for most sentences it is sufficient to replace two-three words to generate an adversarial sample.",
"Hence, in most cases, only a few WDR scores carry important signals for detection.",
"While WDR scores contain rich patterns to identify manipulated samples, they are also relatively expensive to compute.",
"Indeed, we need to run the model once for each featurei.e. each wordin the input text.",
"While this did not represent a limitation for our use-cases and experiments, we acknowledge that it could result in drawbacks when input texts are particularly long.",
"Our method is specifically designed against word-level attacks and it does not cover character-level ones.",
"However, the intuition seems to some extent applicable also to sentences with typos and similar artifacts as the words containing them will play a big role for the prediction.",
"This, like in the word-level case, needs to happen in order for the perturbations to result in a successful adversarial text attack and change the target model's prediction 5.3 Ethical Perspective and Future Work Detectingor in general defending against adversarial attacks is a fundamental pillar to deploy machine learning models ethically and safely.",
"However, while defense strategies increase model robustness, they can also inspire and stimulate new and improved attack techniques.",
"An example of this phenomenon is BAE (Garg and Ramakrish-nan, 2020), which leverages architectures more resilient to attacks such as BERT to craft highly-effective contextual attacks.",
"Analogously, defense approaches like ours could lead to new attacks that do not rely on a few words to substantially affect output logits.",
"Based on our current findings, we identify a few profitable directions for future research.",
"(1) First of all, the usage of logits-based metrics such as the WDR appears to be very promising for detecting adversarial inputs.",
"We believe that a broader exploration and comparison of other metrics previously used in computer vision could lead to further improvements.",
"(2) We encourage future researchers to draw inspiration from this work and also test their defenses in settings that involve mismatched attacks, datasets, and target models.",
"At the same time, we set as a priority for our future work to also evaluate the efficacy of adversarial detection methods on adaptive attacks (Tramer et al., 2020; Athalye et al., 2018).",
"(3) This work proves the efficacy of WDR in a variety of settings, which include a few different datasets and tasks.",
"However, it would be beneficial for current research to understand how these techniques would apply to high-stakes NLP applications such as hate speech detection (Mosca et al., 2021; Wich et al., 2021).",
"Adversarial text attacks are a major obstacle to the safe deployment of NLP models in high-stakes applications.",
"However, although manipulated and original samples appear indistinguishable, interpreting the model's reaction can uncover helpful signals for adversarial detection.",
"Our work utilizes logits of original and adversarial samples to train a simple machine learning detector.",
"WDR scores are an intuitive measure of word relevance and are effective for detecting text components having a suspiciously high impact on the output.",
"The detector does not make any assumption on the classifier targeted by the attacks and can be thus considered model-agnostic.",
"The proposed approach achieves very promising results, considerably outperforming the previous state-of-the-art in word-level adversarial detection.",
"Experimental results also show the detector to possess remarkable generalization capabilities across different target models, datasets, and text attacks without needing to retrain.",
"These include transformer architectures such as BERT and well-established attacks such as PWWS, genetic algorithms, and context-aware perturbations.",
"We believe our work sets a strong baseline on which future research can build to develop better defense strategies and thus promoting the safe deployment of NLP models in practice.",
"We release our code to the public to facilitate further research and development 3 ."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"result",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain"
] |
[
"Structural heterogeneity between knowledge graphs is an outstanding challenge for entity alignment.",
"This paper presents Neighborhood Matching Network (NMN), a novel entity alignment framework for tackling the structural heterogeneity challenge.",
"NMN estimates the similarities between entities to capture both the topological structure and the neighborhood difference.",
"It provides two innovative components for better learning representations for entity alignment.",
"It first uses a novel graph sampling method to distill a discriminative neighborhood for each entity.",
"It then adopts a cross-graph neighborhood matching module to jointly encode the neighborhood difference for a given entity pair.",
"Such strategies allow NMN to effectively construct matching-oriented entity representations while ignoring noisy neighbors that have a negative impact on the alignment task.",
"Extensive experiments performed on three entity alignment datasets show that NMN can well estimate the neighborhood similarity in more tough cases and significantly outperforms 12 previous state-of-the-art methods.",
"By aligning entities from different knowledge graphs (KGs) to the same real-world identity, entity alignment is a powerful technique for knowledge integration.",
"Unfortunately, entity alignment is nontrivial because real-life KGs are often incomplete and different KGs typically have heterogeneous schemas.",
"Consequently, equivalent entities from two KGs could have distinct surface forms or dissimilar neighborhood structures.",
"In recent years, embedding-based methods have become the dominated approach for entity alignment (Zhu et al., 2017; Pei et al., 2019a; Cao et al., 2019; Xu et al., 2019; Li et al., 2019a; Sun et al., Corresponding author.",
"2020).",
"Such approaches have the advantage of not relying on manually constructed features or rules (Mahdisoltani et al., 2015).",
"Using a set of seed alignments, an embedding-based method models the KG structures to automatically learn how to map the equivalent entities among different KGs into a unified vector space where entity alignment can be performed by measuring the distance between the embeddings of two entities.",
"The vast majority of prior works in this direction build upon an important assumption entities and their counterparts from other KGs have similar neighborhood structures, and therefore, similar embeddings will be generated for equivalent entities.",
"Unfortunately, the assumption does not always hold for real-life scenarios due to the incompleteness and heterogeneities of KGs.",
"As an example, consider Figure 1",
"(a), which shows two equivalent entities from the Chinese and English versions of Wikipedia.",
"Here, both central entities refer to the same real-world identity, Brooklyn , a borough of New York City.",
"However, the two entities have different sizes of neighborhoods and distinct topological structures.",
"The problem of dissimilar neighborhoods between equivalent entities is ubiquitous.",
"Sun et al. (2020) reports that the majority of equivalent entity pairs have different neighbors in the benchmark datasets DBP15K, and the proportions of such entity pairs are over 86% (up to 90%) in different language versions of DBP15K.",
"Particularly, we find that the alignment accuracy of existing embedding-based methods decreases significantly as the gap of equivalent entities' neighborhood sizes increases.",
"For instance, RDGCN (Wu et al., 2019a), a state-of-the-art, delivers an accuracy of 59% on the Hits@1 score on entity pairs whose number of neighbors differs by no more than 10 on DBP15K ZH EN .",
"However, its performance drops to 42% when the difference for the number of neighbors increases to 20 and to 35% when the difference increases to be above 30.",
"The disparity of the neighborhood size and topological structures pose a significant challenge for entity alignment methods.",
"Even if we were able to set aside the difference in the neighborhood size, we still have another issue.",
"Since most of the common neighbors would be popular entities, they will be neighbors of many other entities.",
"As a result, it is still challenging to align such entities.",
"To elaborate on this point, let us now consider Figure 1",
"(b).",
"Here, the two central entities (both indicate the city Liverpool ) have similar sizes of neighborhoods and three common neighbors.",
"However, the three common neighbors (indi-cate United Kingdom , England and Labour Party (UK) , respectively) are not discriminative enough.",
"This is because there are many city entities for England which also have the three entities in their neighborhoods e.g., the entity Birmingham .",
"For such entity pairs, in addition to common neighbors, other informative neighbors like those closely contextually related to the central entities must be considered.",
"Because existing embedding-based methods are unable to choose the right neighbors, we need a better approach.",
"We present Neighborhood Matching Network (NMN), a novel sampling-based entity alignment framework.",
"NMN aims to capture the most informative neighbors and accurately estimate the similarities of neighborhoods between entities in different KGs.",
"NMN achieves these by leveraging the recent development in Graph Neural Networks (GNNs).",
"It first utilizes the Graph Convolutional Networks (GCNs) (Kipf and Welling, 2017) to model the topological connection information, and then selectively samples each entity's neighborhood, aiming at retaining the most informative neighbors towards entity alignment.",
"One of the key challenges here is how to accurately estimate the similarity of any two entities' sampled neighborhood.",
"NMN addresses this challenge by designing a discriminative neighbor matching module to jointly compute the neighbor differences between the sampled subgraph pairs through a cross-graph attention mechanism.",
"Note that we mainly focus on the neighbor relevance in the neighborhood sampling and matching modules, while the neighbor connections are modeled by GCNs.",
"We show that, by integrating the neighbor connection information and the neighbor relevance information, NMN can effectively align entities from real-world KGs with neighborhood heterogeneity.",
"We evaluate NMN by applying it to benchmark datasets DBP15K (Sun et al., 2017) and DWY100K (Sun et al., 2018), and a sparse variant of DBP15K.",
"Experimental results show that NMN achieves the best and more robust performance over state-of-the-arts.",
"This paper makes the following technical contributions.",
"It is the first to: employ a new graph sampling strategy for identifying the most informative neighbors towards entity alignment (Sec. 3.3).",
"exploit a cross-graph attention-based matching mechanism to jointly compare discriminative subgraphs of two entities for robust entity alignment (Sec. 3.4).",
"Embedding-based entity alignment.",
"In recent years, embedding-based methods have emerged as viable means for entity alignment.",
"Early works in the area utilize TransE (Bordes et al., 2013) to embed KG structures, including MTransE (Chen et al., 2017), JAPE (Sun et al., 2017), IPTransE (Zhu et al., 2017), BootEA (Sun et al., 2018), NAEA (Zhu et al., 2019) and OTEA (Pei et al., 2019b).",
"Some more recent studies use GNNs to model the structures of KGs, including GCN-Align (Wang et al., 2018), GMNN (Xu et al., 2019), RDGCN (Wu et al., 2019a), AVR-GCN (Ye et al., 2019), and HGCN-JE (Wu et al., 2019b).",
"Besides the structural information, some recent methods like KDCoE (Chen et al., 2018), AttrE (Trisedya et al., 2019), MultiKE (Zhang et al., 2019) and HMAN (Yang et al., 2019) also utilize additional information like Wikipedia entity descriptions and attributes to improve entity representations.",
"However, all the aforementioned methods ignore the neighborhood heterogeneity of KGs.",
"MuGNN (Cao et al., 2019) and AliNet (Sun et al., 2020) are two most recent efforts for addressing this issue.",
"While promising, both models still have drawbacks.",
"MuGNN requires both pre-aligned entities and relations as training data, which can have expensive overhead for training data labeling.",
"AliNet considers all one-hop neighbors of an entity to be equally important when aggregating information.",
"However, not all one-hop neighbors contribute positively to characterizing the target entity.",
"Thus, considering all of them without careful selection can introduce noise and degrade the performance.",
"NMN avoids these pitfalls.",
"With only a small set of pre-aligned entities as training data, NMN chooses the most informative neighbors for entity alignment.",
"Graph neural networks.",
"GNNs have recently been employed for various NLP tasks like semantic role labeling (Marcheggiani and Titov, 2017) and machine translation (Bastings et al., 2017).",
"GNNs learn node representations by recursively aggregating the representations of neighboring nodes.",
"There are a range of GNN variants, including the Graph Convolutional Network (GCN) (Kipf and Welling, 2017), the Relational Graph Convolutional Network (Schlichtkrull et al., 2018), the Graph Attention Network (Velickovic et al., 2018).",
"Giving the powerful capability for modeling graph structures, we also leverage GNNs to encode the structural information of KGs (Sec. 3.2).",
"Graph matching.",
"The similarity of two graphs can be measured by exact matching (graph isomorphism) (Yan et al., 2004) or through structural information like the graph editing distance (Ray-mond et al., 2002).",
"Most recently, the Graph Matching Network (GMN) (Li et al., 2019b) computes a similarity score between two graphs by jointly reasoning on the graph pair through cross-graph attention-based matching.",
"Inspired by GMN, we design a cross-graph neighborhood matching module (Sec. 3.4) to capture the neighbor differences between two entities' neighborhoods.",
"Graph sampling.",
"This technique samples a subset of vertices or edges from the original graph.",
"Some of the popular sampling approaches include vertex-, edgeand traversal-based sampling (Hu and Lau, 2013).",
"In our entity alignment framework, we propose a vertex sampling method to select informative neighbors and to construct a neighborhood subgraph for each entity.",
"Formally, we represent a KG as G = ( E, R, T ) , where E, R, T denote the sets of entities, relations and triples respectively.",
"Without loss of generality, we consider the task of entity alignment between two KGs, G 1 and G 2 , based on a set of pre-aligned equivalent entities.",
"The goal is to find pairs of equivalent entities between G 1 and G 2 .",
"As highlighted in Sec. 1, the neighborhood heterogeneity and noisy common neighbors of real-world KGs make it difficult to capture useful information for entity alignment.",
"To tackle these challenges, NMN first leverages GCNs to model the neighborhood topology information.",
"Next, it employs neighborhood sampling to select the more informative neighbors.",
"Then, it utilizes a cross-graph matching module to capture neighbor differences.",
"As depicted in Figure 2, NMN takes as input two KGs, G 1 and G 2 , and produces embeddings for each candidate pair of entities, e 1 and e 2 , so that entity alignment can be performed by measuring the distance, d ( e 1 , e 2 ) , of the learned embeddings.",
"It follows a four-stage processing pipeline: (1) KG structure embedding, (2) neighborhood sampling, (3) neighborhood matching, and (4) neighborhood aggregation for generating embeddings.",
"To learn the KG structure embeddings, NMN utilizes multi-layered GCNs to aggregate higher degree neighboring structural information for entities.",
"NMNs uses pre-trained word embeddings to initialize the GCN.",
"This strategy is shown to be effective in encoding the semantic information of entity names in prior work (Xu et al., 2019; Wu et al., 2019a).",
"Formally, let G 1 = ( E 1 , R 1 , T 1 ) and G 2 = ( E 2 , R 2 , T 2 ) be two KGs to be aligned, we put G 1 and G 2 together as one big input graph to NMN.",
"Each GCN layer takes a set of node features as input and updates the node representations as: h ( l ) i = ReLU( (cid:88) j N i { i } 1 (cid:15) i W ( l ) h ( l 1) j ) (1) G 1 G 2 GCNs d ( e 1 , e 2 ) KG Structure Embedding Neighborhood Sampling e 1 e 1 e 1 e 2 e 2 e 2 e 1 e 2 Neighborhood Matching Neighborhood Aggregation Neighborhood Aggregation Figure 2: Overall architecture and processing pipeline of Neighborhood Matching Network (NMN).",
"where { h ( l ) 1 , h ( l ) 2 , ..., h ( l ) n | h ( l ) i R d ( l ) } is the output node (entity) features of l -th GCN layer, (cid:15) i is the normalization constant, N i is the set of neighbor indices of entity i , and W ( l ) R d ( l ) d ( l 1) is a layer-specific trainable weight matrix.",
"To control the accumulated noise, we also introduce highway networks (Srivastava et al., 2015) to GCN layers, which can effectively control the noise propagation across GCN layers (Rahimi et al., 2018; Wu et al., 2019b).",
"The one-hop neighbors of an entity are key to determine whether the entity should be aligned with other entities.",
"However, as we have discussed in Sec. 1, not all one-hop neighbors contribute positively for entity alignment.",
"To choose the right neighbors, we apply a down-sampling process to select the most informative entities towards the central target entity from its one-hop neighbors.",
"Recall that we use pre-trained word embeddings of entity names to initialize the input node features of GCNs.",
"As a result, the entity embeddings learned by GCNs contain rich contextual information for both the neighboring structures and the entity semantics.",
"NMN exploits such information to sample informative neighbors, i.e., neighbors that are more contextually related to the central entity are more likely to be sampled.",
"Our key insight is that the more often a neighbor and the central (or target) entity appear in the same context, the more representative and informative the neighbor is towards the central entity.",
"Since the contexts of two equivalent entities in real-world corpora are usually similar, the stronger a neighbor is contextually related to the target entity, the more alignment clues the neighbor is likely to offer.",
"Experimental results in Sec. 5.3 confirm this observation.",
"Formally, given an entity e i , the probability to sample its one-hop neighbor e i j is determined by: p ( h i j | h i ) = softmax( h i W s h Tij ) = exp ( h i W s h Tij ) (cid:80) k N i exp ( h i W s h Tik ) (2) where N i is the one-hop neighbor index of central entity e i , h i and h i j are learned embeddings for entities e i and e i j respectively, and W s is a shared weight matrix.",
"By selectively sampling one-hop neighbors, NMN essentially constructs a discriminative subgraph of neighborhood for each entity, which can enable more accurate alignment through neighborhood matching.",
"The neighborhood subgraph, produced by the sampling process, determines which neighbors of the target entity should be considered in the later stages.",
"In other words, later stages of the NMN processing pipeline will only operate on neighbors within the subgraph.",
"In the neighborhood matching stage, we wish to find out, for each candidate entity in the counterpart KG, which neighbors of that entity are closely related to a neighboring node within the subgraph of the target entity.",
"Such information is essential for deciding whether two entities (from two KGs) should be aligned.",
"As discussed in Sec. 3.3, equivalent entities tend to have similar contexts in real-world corpora; therefore, their neighborhoods sampled by NMN should be more likely to be similar.",
"NMN exploits this observation to estimate the similarities of the sampled neighborhoods.",
"Candidate selection.",
"Intuitively, for an entity e i in E 1 , we need to compare its sampled neighborhood subgraph with the subgraph of each candidate entity in E 2 to select an optimal alignment entity.",
"Exhaustively trying all possible entities of E 2 would be prohibitively expensive for large real-world KGs.",
"To reduce the matching overhead, NMN takes a low-cost approximate approach.",
"To that end, NMN first samples an alignment candidate set C i = { c i 1 , c i 2 , ..., c i t | c i k E 2 } for e i in E 1 , and then calculates the subgraph similarities between e i and these candidates.",
"This is based on an observation that the entities in E 2 which are closer to e i in the embedding space are more likely to be aligned with e i .",
"Thus, for an entity e j in E 2 , the probability that it is sampled as a candidate for e i can be calculated as: p ( h j | h i ) = exp ( (cid:107) h i h j (cid:107) L 1 ) (cid:80) k E 2 exp ( (cid:107) h i h k (cid:107) L 1 ) (3) Cross-graph neighborhood matching.",
"Inspired by recent works in graph matching (Li et al., 2019b), our neighbor matching module takes a pair of subgraphs as input, and computes a cross-graph matching vector for each neighbor, which measures how well this neighbor can be matched to any neighbor node in the counterpart.",
"Formally, let ( e i , c i k ) be an entity pair to be measured, where e i E 1 and c i k E 2 is one of the candidates of e i , p and q are two neighbors of e i and c i k , respectively.",
"The cross-graph matching vector for neighbor p can be computed as: a pq = exp( h p h q ) (cid:80) q (cid:48) N sik exp( h p h q (cid:48) ) (4) m p = (cid:88) q N sik a pq ( h p h q ) (5) where a pq are the attention weights, m p is the matching vector for p , and it measures the difference between h p and its closest neighbor in the other subgraph, N si k is the sampled neighbor set of c i k , h p and h q are the GCN-output embeddings for p and q respectively.",
"Then, we concatenate neighbor p 's GCN-output embeddings with weighted matching vector m p : h p = [ h p (cid:107) m p ] (6) For each target neighbor in a neighborhood subgraph, the attention mechanism in the matching module can accurately detect which of the neighbors in the subgraph of another KG is most likely to match the target neighbor.",
"Intuitively, the matching vector m p captures the difference between the two closest neighbors.",
"When the representations of the two neighbors are similar, the matching vector tends to be a zero vector so that their representations stay similar.",
"When the neighbor representations differ, the matching vector will be amplified through propagation.",
"We find this matching strategy works well for our problem settings.",
"In the neighborhood aggregation stage, we combine the neighborhood connection information (learned at the KG structure embedding stage) as well as the output of the matching stage (Sec. 3.4) to generate the final embeddings used for alignment.",
"Specifically, for entity e i , we first aggregate its sampled neighbor representations { h p } .",
"Inspired by the aggregation method in (Li et al., 2016), we compute a neighborhood representation for e i as: g i = ( (cid:88) p N si ( h p W gate ) h p ) WN (7) Then, we concatenate the central entity e i 's GCN-output representation h i with its neighborhood representation to construct the matching oriented representation for e i : h matchi = [ g i (cid:107) h i ] (8) 3.6 Entity Alignment and Training Pre-training.",
"As discussed in Sec. 3.3, our neighborhood sampling is based on the GCN-output entity embeddings.",
"Therefore, we first pretrain the GCN-based KG embedding model to produce quality entity representations.",
"Specifically, we measure the distance between two entities to determine whether they should be aligned: d ( e 1 , e 2 ) = (cid:107) h e 1 h e 2 (cid:107) L 1 (9) The objective of the pre-trained model is: L = (cid:88) ( i,j ) L (cid:88) ( i (cid:48) ,j (cid:48) ) L (cid:48) max { 0 , d ( i, j ) d ( i (cid:48) , j (cid:48) ) + } (10) where > 0 is a margin hyper-parameter; L is our alignment seeds and L (cid:48) is the set of negative aligned entity pairs generated by nearest neighbor sampling (Kotnis and Nastase, 2017).",
"Overall training objective.",
"The pre-training phase terminates once the entity alignment performance has converged to be stable.",
"We find that after this stage, the entity representations given by the GCN are sufficient for supporting the neighborhood sampling and matching modules.",
"Hence, Figure 3: Distribution of difference in the size of neighborhoods of aligned entity pairs on DBP15K ZH EN .",
"we replace the loss function of NMN after the pretraining phase as: L = (cid:88) ( r,t ) L (cid:88) ( r (cid:48) ,t (cid:48) ) C max { 0 , d ( r, t ) d ( r (cid:48) , t (cid:48) ) + } (11) d ( r, t ) = (cid:107) h matchr h matcht (cid:107) L 1 (12) where the negative alignments set C = { ( r (cid:48) , t (cid:48) ) | ( r (cid:48) = r t (cid:48) C r ) ( t (cid:48) = t r (cid:48) C t ) } is made up of the alignment candidate sets of r and t , C r and C t are generated in the candidate selection stage described in Sec. 3.4.",
"Note that our sampling process is nondifferentiable, which corrupts the training of weight matrix W s in Eq.",
"2.",
"To avoid this issue, when training W s , instead of direct sampling, we aggregate all the neighbor information by intuitive weighted summation: g wi = ( (cid:88) p N i ip ( h p W gate ) h p ) WN (13) where ip is the aggregation weight for neighbor p , and is the sampling probability p ( h p | h i ) for p given by Eq.",
"2.",
"Since the aim of training W s is to let the learned neighborhood representations of aligned entities to be as similar as possible, the objective is: L w = (cid:88) ( r,t ) L (cid:107) g wr g wt (cid:107) L 1 (14) In general, our model is trained end-to-end after pre-training.",
"During training, we use Eq.",
"11 as the main objective function, and, every 50 epochs, we tune W s using Eq.",
"14 as the objective function.",
"Datasets.",
"Follow the common practice of recent works (Sun et al., 2018; Cao et al., 2019; Sun et al., 2020), we evaluate our model on DBP15K (Sun et al., 2017) and DWY100K (Sun et al., 2018) datasets, and use the same split with previous works, 30% for training and 70% for testing.",
"To Datasets Ent.",
"evaluate the performance of NMN in a more challenging setting, we also build a sparse dataset S-DBP15K based on DBP15K.",
"Specifically, we randomly remove a certain proportion of triples in the non-English KG to increase the difference in neighborhood size for entities in different KGs.",
"Table 1 gives the detailed statistics of DBP15K and S-DBP15K, and the information of DWY100K is exhibited in Table 2.",
"Figure 3 shows the distribution of difference in the size of one-hop neighborhoods of aligned entity pairs.",
"Our source code and datasets are freely available online.",
"1 Comparison models.",
"We compare NMN against 12 recently proposed embedding-based alignment methods: MTransE (Chen et al., 2017), JAPE (Sun et al., 2017), IPTransE (Zhu et al., 2017), GCN-Align (Wang et al., 2018), BootEA (Sun et al., 2018), SEA (Pei et al., 2019a), RSN (Guo et al., 2019), MuGNN (Cao et al., 2019), KECG (Li et al., 2019a), AliNet (Sun et al., 2020), GMNN (Xu et al., 2019) and RDGCN (Wu et al., 2019a).",
"The last two models also utilize entity names for alignment.",
"Model variants.",
"To evaluate different components of our model, we provide two implementation variants of NMN: (1) NMN (w/o nbr-m), where we replace the neighborhood matching part by taking the average of sampled neighbor representations as the neighborhood representation; and (2) NMN (w/o nbr-s), where we remove the sampling process and perform neighborhood matching on all one-hop neighbors.",
"neigh-1 https://github.com/StephanieWyt/NMN",
"bors for each entity in the neighborhood sampling stage (Sec. 3.3).",
"For S-DBP15K, we set to 1.",
"We sample 3 neighbors for each entity in S-DBP15K ZH EN and S-DBP15K JA EN , and 10 neighbors in S-DBP15K FR EN .",
"NMN uses a 2-layer GCN.",
"The dimension of hidden representations in GCN layers described in Sec. 3.2 is 300, and the dimension of neighborhood representation g i described in Sec. 3.5 is 50.",
"The size of the candidate set in Sec. 3.4 is 20 for each entity.",
"The learning rate is set to 0.001.",
"To initialize entity names, for the DBP15K datasets, we first use Google Translate to translate all non-English entity names into English, and use pre-trained English word vectors glove.840B.300d 2 to construct the initial node features of KGs.",
"For the DWY100K datasets, we directly use the pre-trained word vectors to initialize the nodes.",
"Metrics.",
"Following convention, we use Hits@1 and Hits@10 as our evaluation metrics.",
"A Hits@k score is computed by measuring the proportion of correctly aligned entities ranked in the top k list.",
"A higher Hits@k score indicates better performance.",
"Table 3 reports the entity alignment performance of all approaches on DBP15K and DWY100K datasets.",
"It shows that the full implementation of NMN significantly outperforms all alternative approaches.",
"Structured-based methods.",
"The top part of the table shows the performance of the state-of-the-art structure-based models which solely utilize structural information.",
"Among them, BootEA delivers the best performance where it benefits from more training instances through a bootstrapping process.",
"By considering the structural heterogeneity, MuGNN and AliNet outperform most of other structure-based counterparts, showing the importance of tackling structural heterogeneity.",
"Entity name initialization.",
"The middle part of Table 3 gives the results of embedding-based models that use entity name information along with structural information.",
"Using entity names to initialize node features, the GNN-based models, GMNN and RDGCN, show a clear improvement over structure-based models, suggesting that entity 2 http://nlp.stanford.edu/projects/glove/ names provide useful clues for entity alignment.",
"In particular, GMNN achieves the highest Hits@10 on the DWY100K datasets, which are the only monolingual datasets (in English) in our experiments.",
"We also note that, GMNN pre-screens a small candidate set for each entity based on the entity name similarity, and only traverses this candidate set during testing and calculating the Hits@k scores.",
"NMN vs. its variants.",
"The bottom part of Table 3 shows the performance of NMN and its variants.",
"Our full NMN implementation substantially outperforms all baselines across nearly all metrics and datasets by accurately modeling entity neighborhoods through neighborhood sampling and matching and using entity name information.",
"Specifically, NMN achieves the best Hits@1 score on DBP15K ZH EN , with a gain of 2.5% compared with RDGCN, and 5.4% over GMNN.",
"Although RDGCN employs a dual relation graph to model the complex relation information, it does not address the issue of neighborhood heterogeneity.",
"While GMNN collects all one-hop neighbors to construct a topic entity graph for each entity, its strategy might introduce noises since not all one-hop neighbors are favorable for entity alignment.",
"When comparing NMN and NMN (w/o nbr-m), we can observe around a 2.5% drop in Hits@1 and a 0.6% drop in Hits@10 on average, after removing the neighborhood matching module.",
"Specifically, the Hits@1 scores between NMN and NMN (w/o nbr-m) differ by 3.9% on DBP15K FR EN .",
"These results confirm the effectiveness of our neighborhood matching module in identifying matching neighbors and estimating the neighborhood similarity.",
"Removing the neighbor sampling module from NMN, i.e., NMN (w/o nbr-s), leads to an average performance drop of 0.3% on Hits@1 and 1% on Hits@10 on all the datasets.",
"This result shows the important role of our sampling module in filtering irrelevant neighbors.",
"When removing either the neighborhood matching module (NMN (w/o nbr-m)) or sampling module (NMN (w/o nbr-s)) from our main model, we see a substantially larger drop in both Hits@1 and Hits@10 on DBP15K than on DWY100K.",
"One reason is that the heterogeneity problem in DBP15K is more severe than that in DWY100K.",
"The average proportion of aligned entity pairs that have a different number of neighbors is 89% in DBP15K compared to 84% in DWY100K.",
"These results show Models DBPZH-ENDBPJA-ENDBPFR-ENDBP-WD DBP-YG Hits@1 Hits@10 Hits@1 Hits@10 Hits@1 Hits@10 Hits@1 Hits@10 Hits@1 Hits@10 MTransE (Chen et al., 2017) 30.8 61.4 27.9 57.5 24.4 55.6 28.1 52.0 25.2 49.3 JAPE (Sun et al., 2017) 41.2 74.5 36.3 68.5 32.4 66.7 31.8 58.9 23.6 48.4 IPTransE (Zhu et al., 2017) 40.6 73.5 36.7 69.3 33.3 68.5 34.9 63.8 29.7 55.8 GCN-Align (Wang et al., 2018) 41.3 74.4 39.9 74.5 37.3 74.5 50.6 77.2 59.7 83.8 SEA (Pei et al., 2019a) 42.4 79.6 38.5 78.3 40.0 79.7 51.8 80.2 51.6 73.6 RSN (Guo et al., 2019) 50.8 74.5 50.7 73.7 51.6 76.8 60.7 79.3 68.9 87.8 KECG (Li et al., 2019a) 47.8 83.5 49.0 84.4 48.6 85.1 63.2 90.0 72.8 91.5 MuGNN (Cao et al., 2019) 49.4 84.4 50.1 85.7 49.5 87.0 61.6 89.7 74.1 93.7 AliNet (Sun et al., 2020) 53.9 82.6 54.9 83.1 55.2 85.2 69.0 90.8 78.6 94.3 BootEA (Sun et al., 2018) 62.9 84.8 62.2 85.4 65.3 87.4 74.8 89.8 76.1 89.4 GMNN (Xu et al., 2019) 67.9 78.5 74.0 87.2 89.4 95.2 93.0 99.6 94.4 99.8 RDGCN (Wu et al., 2019a) 70.8 84.6 76.7 89.5 88.6 95.7 97.9 99.1 94.7 97.3 NMN 73.3 86.9 78.5 91.2 90.2 96.7 98.1 99.2 96.0 98.2 w/o nbr-m 71.1 86.7 75.4 90.4 86.3 95.8 96.0 98.4 95.0 97.8 w/o nbr-s 73.0 85.6 77.9 88.8 89.9 95.7 98.0 99.0 95.9 98.1 Table 3: Performance on DBP15K and DWY100K.",
"that our sampling and matching modules are particularly important, when the neighborhood sizes of equivalent entities greatly differ and especially there may be few common neighbors in their neighborhoods.",
"On the more sparse and challenging datasets S-DBP15K, we compare our NMN model with the strongest structure-based model, BootEA, and GNN-based models, GMNN and RDGCN, which also utilize the entity name initialization.",
"Baseline models.",
"In Table 4, we can observe that all models suffer a performance drop, where BootEA endures the most significant drop.",
"With the support of entity names, GMNN and RDGCN achieve better performances over BootEA.",
"These results show when the alignment clues are sparse, structural information alone is not sufficient to support precise comparisons, and the entity name semantics are particularly useful for accurate alignment in such case.",
"NMN.",
"Our NMN outperforms all three baselines on all sparse datasets, demonstrating the effectiveness and robustness of NMN.",
"As discussed in Sec. 1, the performances of existing embedding-based methods decrease significantly as the gap of equivalent entities' neighborhood sizes increases.",
"Specifically, on DBP15K ZH EN , our NMN outperforms RDGCN, the best-performing baseline, by a large margin, achieving 65%, 53% and 48% on Hits@1 on the entity pairs whose number of neighbors differs by more than 10, 20 and 30, respectively.",
"Sampling and matching strategies.",
"When we compare NMN and NMN (w/o nbr-m) on the S-DBP15K, we can see a larger average drop in Hits@1 than on the DBP15K (8.2% vs. 3.1%).",
"The result indicates that our neighborhood matching module plays a more important role on the more sparse dataset.",
"When the alignment clues are less obvious, our matching module can continuously amplify the neighborhood difference of an entity pair during the propagation process.",
"In this way, the gap between the equivalent entity pair and the negative pairs becomes larger, leading to correct alignment.",
"Compared with NMN, removing sampling module does hurt NMN in both Hits@1 and Hits@10 on S-DBP15K ZH EN .",
"But, it is surprising that NMN (w/o nbr-s) delivers slightly better results than NMN on S-DBP15K JA EN and S-DBP15K FR EN .",
"Since the average number of neighbors of entities in S-DBP15K is much less than that in the DBP15K datasets.",
"When the number of neighbors is small, the role of sampling will be unstable.",
"In addition, our sampling method is relatively simple.",
"When the alignment clues are very sparse, our strategy may not be robust enough.",
"We will explore more adaptive sampling method and scope in the future.",
"strategies, we compare our NMN with a variant that uses random sampling strategy on S-DBP15K datasets.",
"Figure 4 illustrates the Hits@1 of NMN using our designed graph sampling method (Sec. 3.3) and a random-sampling-based variant when sampling different number of neighbors.",
"Our NMN consistently delivers better results compared to the variant, showing that our sampling strategy can effectively select more informative neighbors.",
"Impact of neighborhood sampling size.",
"From Figure 4, for S-DBP15K ZH EN , both models reach a performance plateau with a sampling size of 3, and using a bigger sampling size would lead to performance degradation.",
"For S-DBP15K JA EN and S-DBP15K FR EN , we observe that our NMN performs similarly when sampling different number of neighbors.",
"From Table 1, we can see that S-DBP15K ZH EN is more sparse than S-DBP15K JA EN and S-DBP15K FR EN .",
"All models deliver much lower performance on S-DBP15K ZH EN .",
"Therefore, the neighbor quality of this dataset might be poor, and a larger sampling size will introduce more noise.",
"On the other hand, the neighbors in JA-EN and FR-EN datasets might be more informative.",
"Thus, NMN is not sensitive to the sampling size on these two datasets.",
"How does the neighborhood matching module work?",
"In an attempt to understand how our neighborhood matching strategy helps alignment, we visualize the attention weights in the neighborhood matching module.",
"Considering an equivalent entity pair in DBP15K ZH EN , both of which indicate an American film studio Paramount Pictures .",
"From Figure 5, we can see that the five neighbors sampled by our sampling module for each central entity are very informative ones for aligning the two central entities, such as the famous movies released by Paramount Pictures , the parent company and subsidiary of Paramount Pictures .",
"This demonstrates the effectiveness of our sampling strategy again.",
"Among the sampled neighbors, there are also two pairs of common neighbors (indicate Saving Private Ryan and Viacom ).",
"We observe that for each pair of equivalent neighbors, one neighbor can be particularly attended by its counterpart (the corresponding square has a darker color).",
"This example clearly demonstrates that our neighborhood matching module can accurately estimate the neighborhood similarity by accurately detecting the similar neighbors.",
"We have presented NMN, a novel embedded-based framework for entity alignment.",
"NMN tackles the ubiquitous neighborhood heterogeneity in KGs.",
"We achieve this by using a new sampling-based approach to choose the most informative neighbors for each entity.",
"As a departure from prior works, NMN simultaneously estimates the similarity of two entities, by considering both topological structure and neighborhood similarity.",
"We perform extensive experiments on real-world datasets and compare NMN against 12 recent embedded-based methods.",
"Experimental results show that NMN achieves the best and more robust performance, consistently outperforming competitive methods across datasets and evaluation metrics.",
"This work is supported in part by the National Hi-Tech R&D Program of China (No. 2018YFB1005100), the NSFC under grant agreements 61672057, 61672058 and 61872294, and a UK Royal Society International Collaboration Grant.",
"For any correspondence, please contact Yansong Feng."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"method",
"abstain",
"objective",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"other",
"other"
] |
[
"We present a new summarization task, generating summaries of novel chapters using sum-mary/chapter pairs from online study guides.",
"This is a harder task than the news summarization task, given the chapter length as well as the extreme paraphrasing and generalization found in the summaries.",
"We focus on extractive summarization, which requires the creation of a gold-standard set of extractive summaries.",
"We present a new metric for aligning reference summary sentences with chapter sentences to create gold extracts and also experiment with different alignment methods.",
"Our experiments demonstrate significant improvement over prior alignment approaches for our task as shown through automatic metrics and a crowd-sourced pyramid analysis.",
"When picking up a novel one is reading, it would be helpful to be reminded of what happened last.",
"To address this need, we develop an approach to generate extractive summaries of novel chapters.",
"This is much harder than the news summarization tasks on which most of the summarization field (e.g., (Cheng and Lapata, 2016; Grusky et al., 2018; Paulus et al., 2017)) focuses; chapters are on average seven times longer than news articles.",
"There is no one-to-one correspondence between summary and chapter sentences, and the summaries in our dataset use extensive paraphrasing, while news summaries copy most of their information from the words used in the article.",
"We focus on the task of content selection, taking an initial, extractive summarization approach given the task difficulty.",
"1 As the reference sumEqual contribution.",
"maries are abstractive, training our model requires creating a gold-standard set of extractive summaries.",
"We present a new approach for aligning chapter sentences with the abstractive summary sentences, incorporating weighting to ROUGE (Lin, 2004) and METEOR (Lavie and Denkowski, 2009) metrics to enable the alignment of salient words between them.",
"We also experiment with BERT (Devlin et al., 2018) alignment.",
"We use a stable matching algorithm to select the best alignments, and show that enforcing one-to-one alignments between reference summary sentences and chapter sentences is the best alignment method of those used in earlier work.",
"We obtain a dataset of summaries from five study guide websites paired with chapter text from Project Gutenberg.",
"Our dataset consists of 4,383 unique chapters, each of which is paired with two to five human-written summaries.",
"We experiment with generating summaries using our new alignment method within three models that have been developed for single document news summarization (Chen and Bansal, 2018; Kedzie et al., 2018; Nallapati et al., 2017).",
"Our evaluation using automated metrics as well as a crowd-sourced pyramid evaluation shows that using the new alignment method produces significantly better results than prior work.",
"We also experiment with extraction at different levels of granularity, hypothesizing that extracting constituents will work better than extracting sentences, since summary sentences often combine information from several different chapter sentences.",
"Here, our results are mixed and we offer an explanation for why this might be the case.",
"Our contributions include a new, challenging summarization task, experimentation that reveals potential problems with previous methods for creating extracts, and an improved method for creating gold standard extracts.",
"Relatively little work has been done in summarization of novels, but early work (Mihalcea and Cey-lan, 2007) provided a dataset of novel/summary pairs drawn from CliffsNotes and GradeSaver and developed an unsupervised system based on Meade (Radev et al., 2001) and TextRank (Mihal-cea and Tarau, 2004) that showed promise.",
"More recently, Zhang et al. (2019) developed an approach for summarizing characters within a novel.",
"We hypothesize that our proposed task is more feasible than summarizing the full novel.",
"Previous work has summarized documents using Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) to extract elementary discourse units (EDUs) for compression and more content-packed summaries (Daume III and Marcu, 2002; Li et al., 2016; Arumae et al., 2019).",
"Some abstractive neural methods propose attention to focus on phrases within a sentence to extract (Gehrmann et al., 2018).",
"Fully abstractive methods are not yet appropriate for our task due to extensive paraphrasing and generalization.",
"While previous work on semantic textual similarity is relevant to the problem of finding alignments between chapter and summary text, the data available (Cer et al., 2017; Dolan and Brockett, 2005) is not suitable for our domain, and the alignments we generated from this data were of a poorer quality than the other methods in our paper.",
"We collect summary-chapter pairs from five online study guides: BarronsBookNotes (BB), BookWolf (BW), CliffsNotes (CN), GradeSaver (GS) and NovelGuide (NG).",
"2 We select summaries from these sources for which the complete novel text can be found on Project Gutenberg.",
"Our initial dataset, for summaries with two or more sources, includes 9,560 chapter/summary pairs for 4,383 chapters drawn from 79 unique books.",
"As our analysis shows a very long tail, two rounds of filtering were applied.",
"First, we remove reference texts with > 700 sentences, as these are too large to fit into mini-batches ( 10% of data).",
"Second, we remove summaries with a compres-2 We do not have the rights to redistribute the data.",
"To allow others to replicate the dataset, we provide a list of novel chapters we used at https://github.com/ manestay/novel-chapter-dataset Summary Src Mean (stdev) Median Total # CN 442 (369) 347 1,053 BB 517 (388) 429 1,000 GS 312 (311) 230 1,983 BW 276 (232) 214 182 NG 334 (302) 244 2,070 All Sources 373 (339) 279 6,288 Chapter Text 5,165 (3,737) 4,122 6,288 Table 1: Train Split Statistics : World count statistics with total number for summaries and chapter text.",
"This results in 8,088 chapter/summary pairs, and we randomly assign each book to train, development and test splits (6,288/938/862 pairs re-spectively).",
"After filtering, chapters are on average seven times longer than news articles from CNN/Dailymail (5,165 vs 761 words), and chapter summaries are eight times longer than news summaries (372 vs 46 words).",
"Train split statistics are given in Table",
"1. These statistics reveal the large variation in length.",
"Furthermore, we calculate word overlap , the proportion of vocabulary that overlaps between the summary and chapter.",
"For novels, this is 33.7%; for CNN/DailyMail news, this is 68.7%.",
"This indicates the large amount of paraphrasing in the chapter summaries in relation to the original chapter.",
"In Figure 1, we show the first three sentences of a reference summary for Chapter 11, The Awakening which is paraphrased from several, nonconsecutive chapter sentences shown near the bottom of the figure.",
"We also show a portion of the summaries from two other sources which convey the same content and illustrate the extreme level of paraphrasing as well as differences in detail.",
"We show the full chapter and three full reference summaries in Appendix A.2.",
"To train models for content selection, we need saliency labels for each chapter segment that serve as proxy extract labels, since there are no gold extracts.",
"In news summarization, these are typically produced by aligning reference summaries to the best matching sentences from the news article.",
"Here, we align the reference summary sentences with sentences from the chapter.",
"We address two questions for aligning chapter GS: In this chapter Mr. and Mrs. Pontellier participate in a battle of wills.",
"When Mr. Pontellier gets back from the beach, he asks his wife to come inside.",
"She tells him not to wait for her, at which point he becomes irritable and more forcefully tells her to come inside.",
"NG: Mr. Pontellier is surprised to find Edna still outside when he returns from escorting Madame Lebrun home.",
"... although he asks her to come in to the house with him, she refuses, and remains outside, exercising her own will.",
"BW: Leonce urges Edna to go to bed, but she is still exhilarated and decides to stay outside in the hammock...",
"Chapter sentences: He had walked up with Madame Lebrun and left her at the house.",
"Do you know it is past one o'clock?",
"Come on, and he mounted the steps and went into their room.",
"Don't wait for me, she answered.",
"You will take cold out there, he said, irritably.",
"What folly is this? Why don't you come in?",
"and summary sentences to generate gold standard extracts: 1) Which similarity metric works best for alignment (Section 4.1)?",
"and 2) Which alignment method works best (Section 4.2)?",
"ROUGE is commonly used as a similarity metric to align the input document and the gold standard summary to produce gold extracts (Chen and Bansal, 2018; Nallapati et al., 2017; Kedzie et al., 2018).",
"One drawback to using ROUGE as a similarity metric is that it weights all words equally.",
"We want to, instead, assign a higher weight for the salient words of a particular sentence.",
"To achieve this, we incorporate a smooth inverse frequency weighting scheme (Arora et al., 2017) to compute word weights.",
"The weight of a given word is computed as follows: W ( w i ) = + p ( w i ) (1) where p ( w i ) is estimated from the chapter text and is a smoothing parameter (here = 1 e 3 ).",
"N-gram and Longest Common Subsequence (LCS) weights are derived by summing the weights of each of the individual words in the N-gram/LCS.",
"We take the average of ROUGE-1, 2, L using this weighting scheme as the metric for generating extracts, R-wtd , incorporating a stemmer to match morphological variants (Porter, 1980).",
"Similarity Metrics Results: We compare R-wtd against ROUGE-L (Chen and Bansal, 2018) (R-L), and ROUGE-1, with stop-word removal and stemming (Kedzie et al., 2018) (R-1), for sentence alignment.",
"To incorporate paraphrasing, we average METEOR (Banerjee and Lavie, 2005) scores with ROUGE-1,2,L for both un-weighted (RM) and weighted scores (RM-wtd).",
"Given the recent success of large, pre-trained language models for downstream NLP tasks, we also experiment with BERT (Devlin et al., 2019) to compute alignment, using cosine similarity between averaged chapter segment and summary segment vectors.",
"We compare the generated gold extracts using RL F1 against reference summaries, to determine a shortlist for human evaluation (to save costs).",
"For the human evaluation, we ask crowd workers to measure content overlap between the generated alignments, and the reference summary, on a subset of the validation data.",
"For each summary reference, they are shown a generated alignment and asked to indicate whether it conveys each of up to 12 summary reference sentences.",
"An example task is shown in Appendix Figure 7.",
"We then compute precision and recall based on the number of summary sentences conveyed in the extract.",
"Table 2 shows that humans prefer alignments generated using R-wtd by a significant margin.",
"3 Sample alignments generated by R-wtd in comparison to the baseline are shown in Figure",
"2. Method RM R-wtd RM-wtd R-1 R-L BERTR-L F1 41.2 40.6 39.3 37.1 35.1 35.4 H-F1 33.7 44.8 38.8 Table 2: ROUGE-L F1, and crowd-sourced F1 scores (H-F1) for content overlap.",
"Some previous work in news summarization has focused on iteratively picking the best article sentence with respect to the summary, in order to get the gold extracts (Nallapati et al., 2017; Kedzie et al., 2018), using ROUGE between the set of selected sentences and the target summary.",
"In contrast, others have focused on picking the best article sentence with respect to each sentence in the summary (Chen and Bansal, 2018).",
"We investigate which approach yields better alignments.",
"We refer 3 We suspect incorporating METEOR by averaging didn't work because the scale is different from ROUGE scores.",
"to the former method as summary-level alignment and the latter method as sentence-level alignment.",
"For sentence-level alignment, we note that the problem of finding optimal alignments is similar to a stable matching problem.",
"We wish to find a set of alignments such that there exists no chapter segment a and summary segment x where both a and x would prefer to be aligned with each other over their current alignment match.",
"We compute alignments based on the Gale-Shapley algorithm (1962) for stable matching and compare it with the greedy approach from prior work (Chen and Bansal, 2018).",
"For summary-level alignment (Nallapati et al., 2017; Kedzie et al., 2018), we compare two variants: selecting sentences until we reach the reference word count (WL summary), and selecting sentences until the ROUGE score no longer increases (WS summary).",
"Crowd-sourced evaluation results (Table 3) show that sentence-level stable matching is significantly better.",
"We use this in the remainder of this work.",
"These differences in alignments affect earlier claims about the performance of summarization systems, as they were not measured, yet have a significant impact.",
"4 Method P R F1 Greedy Sent 48.4 48.7 48.5 Stable Sent 52.8 52.6 52.7 WL summary 34.5 36.6 36.7 WS summary 42.7 36.6 38.0 Table 3: Crowd sourced evaluation on content overlap for summary vs. sentence level on validation set.",
"Ref summary: He says he will, as soon as he has finished his last cigar.",
"R-L greedy: You will take cold out there, he said, irritably.",
"R-L stable: He drew up the rocker, hoisted his slippered feet on the rail, and proceeded to smoke a cigar.",
"R-wtd stable: Just as soon as I have finished my cigar.",
"In order to assess how alignments impact summarization, we train three extractive systems hierarchical CNN-LSTM extractor (Chen and Bansal, 2018) (CB), seq2seq with attention (Kedzie et al., 2018) (K), and RNN (Nallapati et al., 2017) (N).",
"The target word length of generated summaries is based on the average summary length of similarly long chapters from the training set.",
"5 We also experiment with aligning and extracting at the constituent level, 6 given our observation during data analysis that summary sentences are often drawn from two different chapter sentences.",
"We create syntactic constituents by taking sub-trees from constituent parse trees for each sentence (Manning et al., 2014) rooted with S -tags.",
"To ensure that constituents are long enough to be meaningful, we take the longest S -tag when one S tag is embedded within others (see Appendix A.5).",
"Summary quality is evaluated on F1 scores for R{ 1,2,L } , and METEOR.",
"Each chapter has 2-5 reference summaries and we evaluate the generated summary against all the reference summaries.",
"Part of a generated summary of extracted constituents for Chapter 11, The Awakening , is shown in Figure",
"3. The full generated summaries for this chapter (both extracted constituents and extracted sentences) are shown in Appendix A.2.",
"Generated Summary: | I thought I should find you in bed , || said her husband , | when he discovered her | lying there .",
"| He had walked up with Madame Lebrun and left her at the house .",
"|| She heard him moving about the room ; | every sound indicating impatience and irritation .",
"| Figure 3: System generated summary, extracted constituents in teal, and separated by | .",
"We compare our method for generating extractive targets (ROUGE weighted, with stable matching at the sentence level) against the baseline method for generating extractive targets for each of the systems.",
"Table 4 shows three rows for each summarization system: using the original target summary labels, and using either constituent or sentence segments.",
"We see our proposed alignment method performs significantly better for all mod-5 We do so by binning chapters into 10 quantiles by length.",
"6 Prior work has used EDUs, but automated parsers such as (Ji and Eisenstein, 2014) perform poorly in this domain.",
"els.",
"ROUGE-L in particular increases 10% to 18% relatively over the baselines.",
"Moreover, it would seem at first glance that the K and N baseline models perform better than the CB baseline, however this difference has nothing to do with the architecture choice.",
"When we use our extractive targets, all three models perform similarly, suggesting that the differences are mainly due to small, but important, differences in their methods for generating extractive targets.",
"Human Evaluation: Given questions about the reliability of ROUGE (Novikova et al., 2017; Cha-ganty et al., 2018), we perform human evaluation to assess which system is best at content selection.",
"We use a lightweight, sampling based approach for pyramid analysis that relies on crowd-sourcing, proposed by Shapira et al. (2019), and correlates well with the original pyramid method (Nenkova et al., 2007).",
"We ask the crowd workers to indicate which of the sampled reference summary content units are conveyed in the generated summary.",
"7 We evaluated our best system + alignment on extraction of sentences and of constituents (CB R-wtd), along with a baseline system (CB K-align), 8 using the crowd-sourced pyramid evaluation method.",
"To produce readable summaries for extracted constituents, each extracted constituent is included along with the context of the containing sentence (black text in Figure 3).",
"We find that CB Sent R-wtd has significantly higher content overlap with reference summaries in Table 5.",
"We present a new challenging task for summarization of novel chapters.",
"We show that sentence-7 See the screen shot in Appendix A.4 8",
"level, stable-matched alignment is better than the summary-level alignment used in previous work and our proposed R-wtd method for creating gold extracts is shown to be better than other similarity metrics.",
"The resulting system is the first step towards addressing this task.",
"While both human evaluation and automated metrics concur that summaries produced with our new alignment approach outperform previous approaches, they contradict on the question of whether extraction is better at the constituent or the sentence level.",
"We hypothesize that because we use ROUGE to score summaries of extracted constituents without context, the selected content is packed into the word budget; there is no potentially irrelevant context to count against the system.",
"In contrast, we do include sentence context in the pyramid evaluation in order to make the summaries readable for humans and thus, fewer constituents make it into the generated summary for the human evaluation.",
"This could account for the increased score on automated metrics.",
"It is also possible that smaller constituents can be matched to phrases within the summary with metrics such as ROUGE, when they actually should not have counted.",
"In future work, we plan to experiment more with this, examining how we can combine constituents to make fluent sentences without including potentially irrelevant context.",
"We would also like to further experiment with abstractive summarization to re-examine whether large, pre-trained language models (Liu and Lap-ata, 2019) can be improved for our domain.",
"We suspect these models are problematic for our documents because they are, on average, an order of magnitude larger than what was used for pretraining the language model (512 tokens).",
"Another issue is that the pre-trained language models are very large and take up a substantial amount of GPU memory, which limits how long the input document can be.",
"While truncation of a document may not hurt performance in the news domain due to the heavy lede bias, in our domain, truncation can hurt the performance of the summarizer."
] | [
"objective",
"abstain",
"method",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"result",
"result",
"abstain",
"abstain",
"objective",
"objective",
"result",
"objective",
"other",
"other",
"objective",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"result",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"result",
"objective",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"method",
"result",
"method",
"abstain",
"abstain"
] |
[
"People vary in their ability to make accurate predictions about the future.",
"Prior studies have shown that some individuals can predict the outcome of future events with consistently better accuracy.",
"This leads to a natural question: what makes some forecasters better than others?",
"In this paper we explore connections between the language people use to describe their predictions and their forecasting skill.",
"Datasets from two different forecasting domains are explored: (1) geopolitical forecasts from Good Judgment Open, an online prediction forum and (2) a corpus of company earnings forecasts made by financial analysts.",
"We present a number of linguistic metrics which are computed over text associated with people's predictions about the future including: uncertainty, readability, and emotion.",
"By studying linguistic factors associated with predictions, we are able to shed some light on the approach taken by skilled forecasters.",
"Furthermore, we demonstrate that it is possible to accurately predict forecasting skill using a model that is based solely on language.",
"This could potentially be useful for identifying accurate predictions or potentially skilled forecasters earlier.",
"1 1 Introduction People often make predictions about the future, for example meteorologists tell us what the weather might look like tomorrow, financial analysts predict which companies will report favorable earnings and intelligence analysts evaluate the likelihood of future geopolitical events.",
"An interesting question is why some individuals are significantly better forecasters (Mellers et al., 2015b)?",
"Previous work has analyzed to what degree various factors (intelligence, thinking style, knowledge 1 We provide our code and dataset descriptions at: https://github.com/viczong/measuring_forecasting_skill_from_text . of a specific topic, etc.) contribute to a person's skill.",
"These studies have used surveys or psychological tests to measure dispositional, situational and behavioral variables (Mellers et al., 2015a).",
"Another source of information has been largely overlooked, however: the language forecasters use to justify their predictions.",
"Recent research has demonstrated that it is possible to accurately forecast the outcome of future events by aggregating social media users' predictions and analyzing their veridicality (Swamy et al., 2017), but to our knowledge, no prior work has investigated whether it might be possible to measure a forecaster's ability by analyzing their language.",
"In this paper, we present the first systematic study of the connection between language and forecasting ability.",
"To do so, we analyze texts written by top forecasters (ranked by accuracy against ground truth) in two domains: geopolitical forecasts from an online prediction forum, and company earnings forecasts made by financial analysts.",
"To shed light on the differences in approach employed by skilled and unskilled forecasters, we investigate a variety of linguistic metrics.",
"These metrics are computed using natural language processing methods to analyze sentiment (Pang et al., 2002; Wilson et al., 2005), uncertainty (de Marneffe et al., 2012; Saur and Pustejovsky, 2012), readability, etc.",
"In addition we make use of word lists taken from the Linguistic Inquiry and Word Count (LIWC) software (Tausczik and Pennebaker, 2010), which is widely used in psychological research.",
"By analyzing forecasters' texts, we are able to provide evidence to support or refute hypotheses about factors that may influence forecasting skill.",
"For example, we show forecasters whose justifications contain a higher proportion of uncertain statements tend to make more accurate predictions.",
"This supports the hypothesis that more open-minded thinkers, who have a higher tolerance for ambiguity tend to make better predictions (Tetlock, 2005).",
"Beyond analyzing linguistic factors associated with forecasting ability, we further demonstrate that it is possible to identify skilled forecasters and accurate predictions based only on relevant text.",
"Estimating the quality of a prediction using the forecaster's language could potentially be very ben-eficial.",
"For example, this does not require access to historical predictions to evaluate past performance, so it could help to identify potentially skilled individuals sooner.",
"Also, forecasters do not always provide an explicit estimate of their confidence, so a confidence measure derived directly from text could be very useful.",
"In this section, we are interested in uncovering linguistic cues in people's writing that are predictive of forecasting skill.",
"We start by analyzing texts written by forecasters to justify their predictions in a geopolitical forecasting forum.",
"Linguistic differences between forecasters are explored by aggregating metrics across each forecaster's predictions.",
"In 3, we analyze the accuracy of individual predictions using a dataset of financial analysts' forecasts towards companies' (continuous) earnings per share.",
"By controlling for differences between analysts and companies, we are able to analyze intra-analyst differences between accurate and inaccurate forecasts.",
"To explore the connections between language and forecasting skill, we make use of data from Good Judgment Open, 2 an online prediction forum.",
"Users of this website share predictions in response to a number of pre-specified questions about future events with uncertain outcomes, such as: Will North Korea fire another intercontinental ballistic missile before August 2019?",
"Users' predictions consist of an estimated chance the event will occur (for example, 5%) in addition to an optional text justification that explains why the forecast was made.",
"A sample is presented in Figure 1.",
"Preprocessing.",
"Not all predictions contain associated text justifications; in this work, we only consider predictions with justifications containing more than 10 tokens.",
"We ran langid.py (Lui 2 https://www.gjopen.com/ Question: Will Kim Jong Un visit Seoul before 1 October 2019? Estimated Chance: 5% Forecast Justification: No North Korean leader has stepped foot in Seoul since the partition of the Koreas at the end of the Korean War. ... Figure 1: A sample prediction made by a user in response to a question posted by the Economist . and Baldwin, 2012) to remove forecasts with non-English text, and further restrict our data to contain only users that made at least 5 predictions with text.",
"In our pilot studies, we also notice some forecasters directly quote text from outside resources (like Wikipedia, New York Times, etc.) as part of their justifications.",
"To avoid including justifications that are mostly copied from external sources, we remove forecasts that consist of more than 50% text enclosed in quotation marks from the data.",
"Dataset statistics.",
"We collected all questions with binary answers that closed before April 9, 2019, leading to a total of 441 questions.",
"23,530 forecasters made 426,909 predictions.",
"During preprocessing steps, 3,873 forecasts are identified as heavily quoted and thus removed.",
"After removing non-English and heavily quoted forecasts, forecasts with no text justifications or justifications less than 10 tokens, in addition users with fewer than 5 predictions with text, 55,099 forecasts made by 2,284 forecasters are selected for the final dataset.",
"The distribution of predictions made by each forecaster is heavily skewed.",
"8.0% of forecasters make over 50 forecasts.",
"3 On average, each forecaster makes 10.3 forecasts, excluding those who made over 50 predictions.",
"In Table 1, we also provide breakdown statistics for top and bottom forecasters.",
"In order to build a model that can accurately classify good forecasters based on features of their language, we first need a metric to measure people's forecasting skill.",
"For this purpose we use Brier score (Brier, 1950), a commonly used measure for evaluating probabilistic forecasts.",
"4 For questions 3 In our dataset, forecasters could even make over 1,000 forecasts with justifications.",
"4 Other possible scoring rules exist, for example ranking forecasters by log-likelihood.",
"For a log-likelihood scoring rule, however, we need to adjust estimates of 1.00 and 0.00, which are not uncommon in the data, to avoid zero probability events.",
"There are many ways this adjustment could be done and it is difficult to justify one choice over another.",
"Here f i is the forecaster's estimated probability, o i is a binary variable indicating the final outcome of the event, and N is the total number of forecasts.",
"Brier scores can be interpreted as the mean squared error between the forecast probability and true answer; lower scores indicate better forecasts.",
"Ranking forecasters.",
"Directly comparing raw Brier scores is problematic, because users are free to choose questions they prefer, and could achieve a lower Brier score simply by selecting easier questions.",
"To address this issue, we standardized Brier scores by subtracting the mean Brier scores and dividing by the standard deviation within questions (Mellers et al., 2015a).",
"We construct a set of balanced datasets for training and evaluating classifiers by choosing the top K and bottom K forecasters respectively.",
"In our experiments, we vary K from 100 to 1,000; when K =1,000, the task can be interpreted roughly as classifying all 2k users into the top or bottom half of forecasters.",
"5 2.3 Linguistic Analysis In 2.2, we discussed how to measure ground-truth forecasting skill by comparing a user's predictions against ground-truth outcomes.",
"In the following subsections, we examine a selected series of linguistic phenomenon and their connections with forecasting ability.",
"Statistical tests are conducted using the paired bootstrap (Efron and Tibshirani, 1994).",
"As we are performing multiple hypothesis testing, we also report results for Bonferroni-corrected significance level 0.05/30.",
"As discussed in 2.1, the distribution of forecasts per user is highly skewed.",
"To control for this, we compute averages for each forecaster and use aggregate statistics to compare differences between the two groups at the user-level.",
"Analyses are performed over 6,639 justifications from the top 500 forecasters and 6,040 from bottom 500.",
"Length.",
"We first check the average length of justifications from different groups and report our results 5 Readers may wonder if there do exist differences between top and bottom forecasters.",
"We provide justifications for our ranking approach in Appendix A.1.",
"in Table 1.",
"We observe that skilled forecasters normally write significantly longer justifications with more tokens per sentence.",
"This suggests that good forecasters tend to provide more rationale to support their predictions.",
"Readability.",
"We compute two widely used metrics for readability: (1) Flesch reading ease (Flesch, 1948) and (2) Dale-Chall formula (Dale and Chall, 1948).",
"Table 2 summarizes our results on average readability scores.",
"We find good forecasters have lower readability compared to bad forecasters.",
"It is interesting to compare this result with the findings reported by Ganjigunte Ashok et al. (2013), who found a negative correlation between the success of novels and their readability, and also the work of Sawyer et al. (2008) who found award winning articles in academic marketing journals had higher readability.",
"Our finding that more accurate forecasters write justifications that have lower readability suggests that skilled forecasters tend to use more complex language.",
"Emotion.",
"We also analyze the sentiment reflected in forecasters' written text.",
"Rather than analyzing sentiment orientation (positive, negative, or neutral), here we focus on measuring sentiment strength .",
"We hypothesize that skilled forecasters organize their supporting claims in a more rational way using less emotional language.",
"Many existing sentiment analysis tools (e.g., Socher et al. (2013)) are built on corpora such as the Stanford Sentiment Treebank, which are composed of movie reviews or similar texts.",
"However, justifications in our dataset focus on expressing opinions towards future uncertain events, rather than simply expressing preferences toward a movie or restaurant, leading to a significant domain mismatch.",
"In pilot studies, we noticed many sentences that are marked as negative by the Stanford sentiment analyzer on our data do not in fact express a negative emotion.",
"We thus use Semantic Orientation CALculator (SO-Metric p Bonferroni Textual Factors Readability Flesch reading ease Dale-Chall Emotion Absolute sentiment strength Parts of Speech Cardinal Noun Preposition Pronoun 1st personal pronoun Verb Cognitive Factors Uncertainty % uncertain statements Tentative (LIWC) Thinking style % forecasts with quoted text Temporal orientation Focus on past Focus on present & future Table 2: Comparison of various metrics computed over text written by the top 500 and bottom 500 forecasters.",
"CAL), a lexicon-based model proposed by Taboada et al. (2011) which has been demonstrated to have good performance across a variety of domains.",
"The model generates a score for each justification by adding together semantic scores of words present in the justification, with a 0 score indicating a neutral sentiment.",
"We then take the absolute values of scores from the model and calculate averages for each group.",
"Results in Table 2 show that the top 500 forecasters have a significantly lower average sentiment strength compared to bottom 500 forecasters, indicating statements from skilled forecasters tend to express neutral sentiment.",
"Parts of Speech.",
"As shown in Table 2, we observe that top forecasters use a higher percentage of cardinal numbers and nouns, while higher numbers of verbs are associated with lower forecasting ability.",
"6 We also note the bottom 500 use a higher percentage of pronouns when justifying their predictions.",
"To investigate this difference, we further separate first person pronouns 7 from second or third person pronouns.",
"As presented in Table 2, first person pronouns are used more often by the top forecasters.",
"We now evaluate a number of factors that were found to be related to decision making processes based on prior psychological studies (e.g., Mellers et al. (2015a)), that can be tested using computational tools.",
"A number of these metrics are calculated by using the Linguistic Inquiry and Word Count (LIWC) lexicon (Tausczik and Pennebaker, 2010), a widely used tool for psychological and social science research.",
"Uncertainty.",
"To test the hypothesis that good forecasters have a greater tolerance for uncertainty and ambiguity, we employ several metrics to evaluate the degree of uncertainty reflected in their written language.",
"We use the model proposed by Adel and Schutze (2017) to estimate the proportion of uncertain statements made by each forecaster in our dataset.",
"It is an attention based convolutional neural network model, that achieves state-of-the-art results on a Wikipedia benchmark dataset from the 2010 CoNLL shared task (Farkas et al., 2010); we use the trained parameters provided by Adel and Schutze (2017).",
"After the model assigns an uncertainty label for each sentence, we calculate the percentage of sentences marked as uncertain.",
"Results of this analysis are reported in Table 2; we observe that the top 500 forecasters make a significantly greater number of uncertain statements compared to the bottom 500, supporting the hypothesis mentioned above.",
"Thinking style.",
"In 2.1, we discussed the issue that many forecasts contain quoted text.",
"Although we removed posts consisting of mostly quoted text as a preprocessing step, we are interested in how people use outside resources during their decision making process.",
"We thus calculate the portion of forecasts with quotes for the two groups.",
"We notice skilled forecasters cite outside resources more frequently.",
"This may indicate that skilled forecasters tend to account for more information taken from external sources when making predictions.",
"Temporal orientation.",
"We make use of the LIWC lexicon (Tausczik and Pennebaker, 2010) to analyze the temporal orientation of forecasters' justifications.",
"We notice good forecasters tend to focus more on past events (reflected by tokens like ago and talked ); bad forecasters pay more attention to what is currently happening or potential future events (using tokens like now , will , and soon ).",
"We conjecture this is because past events can provide more reliable evidence for what is likely to happen in the future.",
"In 2.3, we showed there are significant linguistic differences between justifications written by skilled and unskilled forecasters.",
"This leads to a natural question: is it possible to automatically identify skilled forecasters based on the written text associated with their predictions?",
"We examine this question in general terms first, then present experiments using a realistic setup for early prediction of forecasting skill in 2.5.",
"Models and features.",
"We start with a log-linear model using bag-of-ngram features extracted from the combined answers for each forecaster.",
"We experimented with different combinations of n-gram features from sizes 1 to 4. N-grams of size 1 and 2 have best classification accuracy.",
"We map n-grams that occur only once to a (cid:104) UNK (cid:105) token, and replace all digits with 0.",
"Inspired by our findings in 2.3, we also incorporate textual and cognition factors as features in our log-linear model.",
"We also experiment with convolutional neural networks (Kim, 2014) and BERT (Devlin et al., 2019).",
"The 1D convolutional neural network consists of a convolution layer, a max-pooling layer, and a fully connected layer.",
"We minimize cross entropy loss using Adam (Kingma and Ba, 2015); the learning rate is 0.01 with a batch size of 32.",
"We fine-tune BERT on our dataset, using a batch size of 5 and a learning rate of 5e-6.",
"All hyperparame-ters were selected using a held-out dev set.",
"Model performance.",
"Results are presented in Table 3. As we increase the number of forecasters K , the task becomes more difficult as more forecasters are ranked in the middle.",
"However, we observe a stable accuracy around 70%.",
"All models consistently outperform a random baseline (50% accuracy), suggesting that the language users use to describe their predictions does indeed contain information that is predictive of forecasting ability.",
"The n-grams with largest weights in the logistic regression model are presented in Table 4. We find that n-grams that seem to indicate uncertainty, including: it seems unlikely , seem to have and it is likely are among the largest positive weights.",
"With the model developed in 2.4, we are now ready to answer the following question: using only their first written justification, can we foresee a forecaster's future performance?",
"Setup.",
"Our goal is to rank forecasters by their performance.",
"We first equally split all 2,284 forecasters into two groups (top half versus bottom half) based on their standardized Brier scores.",
"We then partition them into 60% train, 20% validation, and 20% test splits within each group.",
"We combine all justifications for each forecaster in the training set.",
"For forecasters in the validation and test sets, we only use their single earliest forecast.",
"We use forecasters' final rank sorted by averaged standardized Brier score over all forecasts as ground truth.",
"We then compare our text-based model to the following two baselines: (1) a random baseline (50%) and (2) the standardized Brier score of the users' single earliest forecast.",
"Results.",
"We calculate the proportion of good forecasters identified in the top N , ranked by our text-based model, and report results in Table 5. We observe that our models achieve comparable or even better performance relative to the first prediction's adjusted Brier score.",
"Calculating Brier scores requires knowing ground-truth, while our model can evaluate the performance of a forecaster without waiting to know the outcome of a predicted event.",
"In 2, we showed that linguistic differences exist between good and bad forecasters, and furthermore, these differences can be used to predict which forecasters will perform better.",
"We now turn to the question of whether it is possible to identify which individual forecasts, made by the same person, are more likely to be correct.",
"The Good Judgment Open data is not suitable to answer this question, because forecasts are discrete, and thus do not provide a way to rank individual predictions by accuracy beyond whether they are correct or not.",
"Therefore, in this section, we consider numerical forecasts in the financial domain, which can be ranked by their accuracy as measured against ground truth.",
"In this paper, we analyze forecasts of companies' earnings per share (EPS).",
"Earnings per share is defined as the portion of a company's profit allocated to each share of common stock.",
"It is an important indicator of a company's ability to make profits.",
"For our purposes, EPS also supports a cleaner experimental design as compared to stock prices, which constantly change in real time.",
"Data.",
"We analyze reports from the Center for Financial Research and Analysis (CFRA).",
"8 These reports provide frequent updates for analysts' estimates and are also organized in a structured way, enabling us to accurately extract numerical forecasts and corresponding text justifications.",
"We collected CFRA's analyst reports from the Thomson ONE database 9 from 2014 to 2018.",
"All notes making forecasts are extracted under the An-alyst Research Notes and other Company News section.",
"The dataset contains a total of 32,807 notes from analysts, covering 1,320 companies.",
"We use a pattern-based approach (in Appendix B.1) for extracting numerical forecasts.",
"After removing notes without EPS estimates, 16,044 notes on 1,135 companies remain (this is after removing analysts who make fewer than 100 forecasts as discussed later in this section).",
"We next evaluate whether the text can reflect how accurate these predictions are.",
"Forecast error.",
"We measure the correctness of forecasts by absolute relative error (Barefield and Comiskey, 1975; Dreman and Berry, 1995).",
"The error is defined by the absolute difference between the analyst's estimate e and corresponding actual EPS o , scaled by the actual EPS: Forecast Error = | e o | | o | Low forecast errors indicate accurate forecasts.",
"Ranking individual forecasts.",
"As our goal is to study the intra-analyst differences between accurate and inaccurate forecasts, we standardize forecast errors within each analyst by subtracting the analyst's mean forecast error and then dividing by the standard deviation.",
"To guarantee we have a good estimate for the mean, we only include analysts who make at least 100 forecasts (19 analysts are selected).",
"We notice most forecast errors are smaller than 1, while a few forecasts are associated with very large forecasting errors.",
"11 Including these outliers would greatly affect our estimation for analysts' mean error.",
"Thus, we only use the first 90% of the sorted forecast errors in this calculation.",
"8 https://www.cfraresearch.com/ 9 https://www.thomsonone.com/ 10 Other methods for measuring the forecasting error have been proposed, for example to scale the relative error by stock price.",
"We do not take this approach as stock prices are dynamically changing.",
"11 For example, one analyst estimated an EPS for Fiscal Year 2015 of Olin Corporation (OLN) as $1.63, while the actual EPS was $-0.01, a standardized forecast error of 164.",
"Our goal is to test whether linguistic differences exist between accurate and inaccurate forecasts, independently of who made the prediction, or how difficult a specific company's earnings might be to predict.",
"To control for these factors, we standardize forecasting errors within analysts (as described in 3.1), and create training/dev/test splits across companies and dates.",
"Setting.",
"We collect the top K and bottom K predictions and split train, dev and test sets by time range and company.",
"All company names are randomly split into 80% train and 20% evaluation sets.",
"We use predictions for companies in the train group that were made in 2014-2016 as our training data.",
"The dev set and test set consist of predictions for companies in evaluation group made during the years 2017 and 2018, respectively.",
"All hyperpa-rameters are the same as those used in 2.4.",
"When evaluating the classifier's performance, we balance the data for positive and negative categories.",
"Results.",
"Table 6 shows the performance of our classifier on the test set.",
"We observe our classifiers consistently achieve around 60% accuracy when varying the number of top and bottom forecasts, K .",
"We present our linguistic analysis in Table 7.",
"The same set of linguistic features in 2.3 is applied to top 4,000 accurate and bottom 4,000 inaccurate analysts notes, excluding readability metric and quotation measure in thinking style metric.",
"Analysts' notes are written in a professional manner, which makes readability metric not applicable.",
"The notes do not contain many quoted text so we exclude quotation measure from the analysis.",
"We also replace the emotion metric with a sentiment lexicon specifically tailored for financial domain and provide our discussions.",
"The Bonferroni-corrected significance level is 0.05/15.",
"We defer discussions to 4 for comparing across different domains.",
"On average, each forecast contains 132.2 tokens with 5.5 sentences.",
"Financial sentiment.",
"We make use of a lexicon developed by Loughran and Mcdonald (2011), which is specifically designed for financial domain.",
"The ratio of positive and negative sentiment terms to total number of tokens is compared.",
"Our results show that inaccurate forecasts use significantly more negative sentiment terms.",
"In 2 and 3, we analyze the language people use when they make forecasts in geopolitical and financial domains.",
"Specifically, these two sections reveal how language is associated with accuracy both within and across forecasters.",
"In this section, we compare our findings from these domains.",
"Our studies reveal several shared characteristics of accurate forecasts from a linguistic perspective over geopolitical and financial domains (in Table 2 and Table 7).",
"For example, we notice that skilled forecasters and accurate forecasts more frequently refer to past events.",
"We also notice accurate predictions consistently use more nouns while unskilled forecasters use more verbs.",
"We also note one main difference between two domains is uncertainty metric: in Good Judgment Open dataset, we observe that more skilled forecasters employ a higher level of uncertainty; while for individual forecasts, less uncertainty seems to be better.",
"It makes us consider the following hypothesis: within each forecaster, people are more likely to be correct when they are more certain about their judgments, while in general skilled forecasters exhibit a higher level of uncertainty.",
"To test this hypothesis, we calculate the Spearman's between the financial analysts' mean forecasting errors and their average portion of uncertain statements.",
"Results show that these two variables are negative correlated with =-0.24, which provides some support for our hypothesis, however the sample size is very small (there are only 19 analysts in the financial dataset).",
"Also, these mean forecasting errors are not standardized by the difficulty of companies analysts are forecasting.",
"Many recent studies have analyzed connections between users' language and human attributes (Hovy et al., 2015; Nguyen et al., 2013; Volkova et al., 2014; Tan et al., 2016; Althoff et al., 2014).",
"Son et al. (2018) developed a tool for discourse analysis in social media and found that older individuals and females tend to use more causal explanations.",
"Another example is work by Schwartz et al. (2015), who developed automatic classifiers for temporal orientation and found important differences relating to age, gender in addition to Big Five personality traits.",
"Eichstaedt et al. (2015) showed that language expressed on Twitter can be predictive of community-level psychological correlates, in addition to rates of heart disease.",
"Demszky et al. (2019) analyzed political polarization in social media and Voigt et al. (2017) examined the connections between police officers' politeness and race by analyzing language.",
"A number of studies (De Choud-hury et al., 2014; Eichstaedt et al., 2018; Benton et al., 2017; Park et al., 2017) have examined the connection between users' language on social media and depression and alcohol use (Kiciman et al., 2018).",
"Other work has analyzed users' language to study the effect of attributes, such as gender, in online communication (Bamman et al., 2014; Wang and Jurgens, 2018; Voigt et al., 2018).",
"In this work we study the relationship between people's language and their forecasting skill.",
"To the best of our knowledge, this is the first work that presents a computational way of exploring this direction.",
"Our work is also closely related to prior research on predicting various phenomenon from users' language.",
"For example Tan et al. (2014) study the effect of wording on message propagation, Gillick and Bamman (2018) examine the connection between language used by politicians in campaign speeches and applause and Perez-Rosas and Mi-halcea (2015) explored linguistic differences between truthful and deceptive statements.",
"Ganjigunte Ashok et al. (2013) show linguistic cues drawn from authors' language are strong indicators of the success of their books and Tsur and Rap-poport (2009) presented an unsupervised model to analyze the helpfulness of book reviews by analyzing their text.",
"There have been several studies using data from Good Judgment Open or Good Judgment Project (Mellers et al., 2015b).",
"One recent study examining the language side of this data is Schwartz et al. (2017).",
"Their main goal is to suggest objective metrics as alternatives for subjective ratings when evaluating the quality of recommendations.",
"To achieve this, justifications written by one group are provided as tips to another group.",
"These justifications are then evaluated on their ability to persuade people to update their predictions, leading to real benefits that can be measured by objective metrics.",
"Prior work has also studied persuasive language on crowdfunding platforms (Yang et al., 2019).",
"In contrast, our work focuses on directly measuring forecasting skill based on text justifications.",
"Finally we note that there is a long history of research on financial analysts' forecasting ability (Crichfield et al., 1978; Chopra, 1998; Loh and Mian, 2006).",
"Most work relies on regression models to test if pre-identified factors are correlated with forecasting skill (e.g., Loh and Mian (2006); Call et al. (2009)).",
"Some work has also explored the use of textual information in financial domain.",
"For example, Kogan et al. (2009) present a study of predicting companies' risk by using financial reports.",
"We also note a recent paper on studying financial analysts' decision making process by using text-based features from earning calls (Keith and Stent, 2019).",
"As far as we aware, our work is the first to evaluate analysts' forecasting skill based on their language.",
"Our experiments demonstrated it is possible to analyze language to estimate people's skill at making predictions about the future.",
"In this section we highlight several limitations of our study and ethical issues that should be considered before applying our predictive models in a real-world application.",
"In our study, we only considered questions with binary answers; future work might explore questions with multiple-choice outcomes.",
"Prior studies have found that people's forecasting skills can be improved through experience and training (Mellers et al., 2014).",
"Our study does not take this into account as we do not have detailed information on the forecasters' prior experience.",
"Finally, we have not investigated the differences in our model's outputs on different demographic groups (e.g., men versus women), so our models may contain unknown biases and should not be used to make decisions that might affect people's careers.",
"In this work, we presented the first study of connections between people's forecasting skill and language used to justify their predictions.",
"We analyzed people's forecasts in two domains: geopolitical forecasts from an online prediction forum and a corpus of company earning forecasts made by financial analysts.",
"We investigated a number of linguistic metrics that are related to people's cognitive processes while making predictions, including: uncertainty, readability and emotion.",
"Our experimental results support several findings from the psychology literature.",
"For example, we observe that skilled forecasters are more open-minded and exhibit a higher level of uncertainty about future events.",
"We further demonstrated that it is possible to identify skilled forecasters and accurate predictions based solely on language.",
"We would like to thank the anonymous reviewers for providing valuable feedback on an earlier draft of this paper.",
"This material is based in part on research sponsored by the NSF (IIS-1845670), ODNI and IARPA via the BETTER program (2019-19051600004) DARPA via the ARO (W911NF-17-C-0095) in addition to an Amazon Research Award.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of NSF, ODNI, ARO, IARPA, DARPA or the U.S. Government."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"objective",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"method",
"method",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"other",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"other",
"method",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"method",
"other",
"objective",
"method",
"other",
"method",
"other",
"method",
"other",
"other",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"method",
"objective",
"result",
"result",
"objective",
"other",
"other",
"other"
] |
[
"The ability to sequence unordered events is evidence of comprehension and reasoning about real world tasks/procedures.",
"It is essential for applications such as task planning and multi-source instruction summarization.",
"It often requires thorough understanding of temporal common sense and multimodal information, since these procedures are often conveyed by a combination of texts and images.",
"While humans are capable of reasoning about and sequencing unordered procedural instructions, the extent to which the current machine learning methods possess such capability is still an open question.",
"In this work, we benchmark models' capability of reasoning over and sequencing unordered multimodal instructions by curating datasets from online instructional manuals and collecting comprehensive human annotations.",
"We find current state-of-the-art models not only perform significantly worse than humans but also seem incapable of efficiently utilizing multimodal information.",
"To improve machines' performance on multimodal event sequencing, we propose sequence-aware pretraining techniques exploiting the sequential alignment properties of both texts and images, resulting in >5% improvements on perfect match ratio.",
"Instructions are essential sources for agents to learn how to complete complex tasks composed of multiple steps (e.g., making a wood sign from scratch).",
"However, instructions do not always come in a proper sequential order, for example, when instructions must be combined across sources (e.g., to accomplish a complex task there might be multiple useful resources for certain task-steps come out from a single Google search).",
"Therefore, sequencing unordered task-steps is crucial for comprehending and inferring task procedures, which requires thorough understanding of event causal and temporal common sense.",
"It is essential for Sand the wood block.",
"applications such as multi-source instruction summarization and robot task planning (Garattoni and Birattari, 2018).",
"Existing work has studied sequencing unordered texts from paper abstracts or short stories (Chen et al., 2016; Cui et al., 2018).",
"However, real-life tasks are often complex, and multimodal information is usually provided to supplement textual descriptions to avoid ambiguity or illustrate details that are hard to narrate, as illustrated in Figure 1. To investigate whether current AI techniques can efficiently leverage multimodal information to sequence unordered task instructions, we curate two datasets from online instructional manuals (Hadley et al.; Yagcioglu et al., 2018).",
"We consider two representative instruction domains: cooking recipes and How-To\" instructions (WikiHow). We establish human performance for the sequencing task on a subset of each data resource. As certain steps to perform a task can potentially be interchangeable, 1 we collect annotations of possible orders 1 For example, without special requirements, preparing certain ingredients of a dish, such as slicing carrots or cucumbers, does not necessarily need to follow a specific order. 4525 alternative to the originally authored ones to create multiple references . Such additional annotation provides not only better measurement of human and model performance by alleviating unintended biases from content creators, but also a useful resource for future research of models that are aware of task-step dependencies and interchangeability. To measure the ability of state-of-the-art AI techniques to sequence instruction steps, we construct models consisting of: (1) an input encoder which encodes image, text, or multimodal inputs, and (2) an order decoder which predicts step order using the encoded representations. They are jointly trained with the order supervisions. Our preliminary studies show that multimodal information is consistently helpful for the sequencing task. However, compared to humans, current models are less efficient in utilizing multimodal information. We hypothesize that it is because the models do not effectively capture the sequential information in the vision modality nor the sequential alignment between multimodal contents. To address this, we propose to equip models with capabilities of performing sequential aware multimodal grounding . Specifically, we propose several self-supervised objectives, including sequence-based masked language modeling, image region modeling, and content swapped prediction, to pretrain the models before finetuning them on the downstream sequencing task. The proposed pretraining techniques are shown to be effective in improving multimodal performance, enjoying a >5% improvement on the perfect match ratio metric. However, it is still significantly behind human performance ( 15 % in perfect match ratio metric). The same trend is observed when alternative orders are considered. Our key contributions are two-fold: (1) We propose a multimodal sequencing task with two cu-rated instructional manuals, and comprehensive human annotations. (2) We investigate model performance on sequencing unordered manuals, and propose sequence-aware pretraining techniques to more effectively use the multimodal information. Our experiments and extensive analysis provide insights on which task categories are most challenging for the state-of-the-art models. They also shed the light that more sophisticated sequential multimodal grounding are required to further improve the performance for the proposed multimodal sequencing task. 2 Problem Definition Given a task procedure S consisting of N steps, where each step S i S can consist of two types of contents: a textual description T i of tokens { T i,k } n T k =1 and/or image(s) I i = { I i,k } n I k =1 . 2 A model is required to take as inputs a random permutation of S , i . e . S p = { S p 1 , ..., S p N } , where p is a permutation ( S p j can take one of the following three modalities: T p j , I p j , and { T p j , I p j } ), and predict the correct order of S p , i . e . argsort ( S p ) . 
3 Datasets and Human Annotation We are interested in understanding the current state-of-the-art models' performance on this multimodal instruction sequencing task. To this end, we curate instruction datasets to support our study. 3.1 Instruction Manual Datasets There are three major features we require for the target datasets: (1) It is multimodal. (2) It consists of task procedures as sequences of steps. (3) Different modalities are used intentionally to complement each other. In light of these, we consider the following two datasets: RecipeQA. We start from a popular as well as intuitive choice of instruction manuals, recipes, which fully fulfill the aforementioned criteria. RecipeQA is a multimodal question answering dataset consisting of recipes scraped from Instructables.com (Yag-cioglu et al., 2018). We utilize the recipes collected in RecipeQA and convert each unique recipe into sequential multimodal steps for our task. WikiHow. To expand the types of instruction manuals for our task beyond recipes, we also consider a popular How To ...\" type of instructions, WikiHow, which is an online knowledge base that consists of human-created articles describing procedures to accomplish a desired task.",
"Each article contains a high level goal of a task, a short summary of the task procedures, and several multimodal steps where each step consists of a description paired with one or a few corresponding images.",
"We scrape the entire WikiHow knowledge resource, containing more than 100k unique articles (mostly) with multimodal contents , as well as the hierarchically structured category for each article.",
"Table 1 presents the essential statistics of the two datasets (more details are in Append. Sec. A).",
"To ensure the validity of our proposed multimodal sequencing task, we establish the human performance via Amazon Mechanical Turk.",
"Since our dataset is constructed from resources that are not directly designed for the sequencing task, the quality of random samples is unverified.",
"Specifically, some articles in WikiHow may not have a notion of proper order among the steps.",
"3 As a result, to construct a high quality test set particularly for WikiHow for establishing human performance, we first identify a set of categories which are more likely to feature proper order, e .",
"g .",
"Home and Garden and Hobbies and Crafts .",
"4 A random proportion is then sampled and the co-authors further downsam-ple the subset to 300 samples with the aforementioned criteria via majority vote.",
"For RecipeQA, we randomly sample 100 recipes from the dataset.",
"And hence, the resulting two subsets serve as our golden-test-set for performance benchmarking.",
"Human Performance.",
"Prompted with a task goal and a randomly scrambled sequence of the task-steps (can be one of the following modalities: mul-3 No temporal or other dependencies among the task-steps, e . g . How to be a good person, where each step depicts a different aspect and tips of being a good person. 4 Although the data used for training is not cleansed and thus can be noisy, we believe models can still learn to sequence from many of the articles designed to have proper order. timodal or text/image-only), workers are asked to examine the contents and decide the proper performing order.",
"Human performance are then computed against the original authored orders as the ground truths, averaged across the whole set.",
"5 Alternative Orders.",
"When performing a task, some steps can be interchangeable.",
"To take the interchangeability into consideration in our benchmark task, we also collect possible alternative orders to the original ones to create multiple references.",
"For each instance in our golden-test-set, given the instruction steps sequenced in their original order, we ask workers to annotate alternative orders if the presented task-steps can be performed following a different order.",
"6 Although in this work we are mainly focusing on sequential instructions and hence the interchangeability is also gauged in a sequential manner, we want to point out that the nature of task-step interchangeability is also highly related to parallel (branching) steps of tasks (Sakaguchi et al., 2021).",
"We argue that the actions that can be performed interchangeably imply no direct dependencies are among these actions and thus can potentially be parallelized, and hence our alternative order formulation can help inferring these parallel actions.",
"More details of the two human annotation tasks can be found in Append.",
"Sec.",
"B. 4 Models To benchmark the proposed task, we construct models comprising: (1) an encoder which encodes multimodal or text/image-only inputs, and (2) an order decoder which utilizes the encoded representations to predict the orders.",
"To help models capture sequentiality in task-steps better as well as adapt to our target task domains, we pretrain the encoders with several self-supervised objectives on the instructions before integrating them with the decoder.",
"Text-Only Encoders.",
"We use RoBERTa (Liu et al., 2019) for text-only inputs.",
"Although the next-sentence prediction in BERT (Devlin et al., 2019) 5 We design an algorithm to compute the inter-annotator agreements (IAAs), see Append.",
"Sec.",
"B.3 for details.",
"The IAAs for ( multimodal , text-only , image-only ) versions in WikiHow is: (0.84, 0.82, 0.69), and (0.92, 0.87, 0.81) in RecipeQA.",
"6 The alternative order annotation IAAs for ( multimodal , text-only , image-only ) versions in WikiHow is: (0.73, 0.71, 0.78), and (0.79, 0.76, 0.79) in RecipeQA.",
"can potentially be exploited for sequencing, we empirically find that RoBERTa performs better.",
"Multimodal Encoders.",
"We consider the following two V&L models mainly due to their easy adaptation to our proposed sequencing task: VisualBERT (Li et al., 2019) grounds object detected image regions ( e .",
"g .",
"by Faster-RCNN (Ren et al., 2016)) to language with a single transformer model (Vaswani et al., 2017).",
"VisualBERT is pretrained with: (1) multimodal masked language modeling (MLM) 7 , and (2) image-text matching prediction (ITM), where the image in an image-caption pair is randomly replaced with another one to create misalignment, and the model is required to predict whether the current pair is aligned.",
"CLIP-ViL (Shen et al., 2021) is also a single-stream V&L model similar to VisualBERT, while the visual encoder is replaced by a patch-based model inspired by the ViT (Dosovitskiy et al., 2021) in CLIP (Radford et al., 2021), where the image features are taken as gridded-image-patches as shown in Figure 2. The pretraining objectives remain the same as VisualBERT.",
"Empirically, both Shen et al. (2021) and this work find such patch-based model tends to yield better downstream performance.",
"Image-Only Encoders.",
"We attempt to provide an image-only baseline on our sequencing task with two visual encoders: (1) ResNet -based (He et al., 2016) Faster-RCNN model (also the visual encoder in VisualBERT) where both the detected regional features and the whole-image-feature are used, and (2) the aforementioned patch-based CLIP model.",
"8 7 RoBERTa is used to initialize VisualBERT and CLIP-ViL.",
"8 Without confusion, throughout the paper we term the ViTand CLIP-inspired visual encoder simply as CLIP.",
"The standard multimodal grounding techniques (Li et al., 2019; Lu et al., 2019; Su et al., 2020; Chen et al., 2020a) do not explicitly concern the sequentiality of text and associated image sequences, and hence may fall short of effectively utilizing the sequential properties in multimodal inputs.",
"To encourage models to have better awareness of the sequential alignments in multimodal instruction steps, we propose to pretrain the encoders with the following self-supervised objectives: (1) masked language modeling ( MLM ), (2) (patch-based) image-swapping predictions ( ISP/PISP ), and (3) sequential masked region modeling ( SMRM ).",
"Figure 2 illustrates an overview of the pretraining paradigm.",
"For the proposed objectives, the inputs to the models are generally ordered instruction step sequences, which can be further sub-sampled to produce length-varying subsequences.",
"Although we do not find this necessarily benefit the downstream performance, it is observed that the sub-sampling helps the model converge faster.",
"While all of our proposed objectives can be applied to sequence with arbitrary length ( 2 ), without loss of generality and for simplicity, the following sections assume the sub-sampled sequence is of length 2 .",
"The standard MLM (Devlin et al., 2019) is employed by the text-only models to adapt a pretrained language model to the target domain (task instructions).",
"Following prior V&L works, we apply MLM to multimodal models.",
"Specifically, we ensure that the textual description of each step T i gets similar amount of tokens being masked-out such that the models can potentially exploit the 4528 image sequences more.",
"This objective concerns, with certain probability, randomly swapping a pair of items in a sequence and asking the model to judge whether the resulting sequence is properly ordered or not ( i . e . binary classification).",
"We mainly perform the swapping in the image modality and hence it can be viewed as a sequence-aware version of ITM objective in most V&L models.",
"As in ITM, the output representation at the [CLS] token is used to make the prediction.",
"Standard.",
"For an ordered sequence S , we can randomly swap two 10 items of S , { S i , S j } , where i < j , to { S j , S i } , with a certain probability .",
"Our preliminary studies find that swapping the textual contents does not necessarily help the downstream performance for either text-only or multimodal models, so we only perform the swapping on the images { I i , I j } in both multimodal and image-only models.",
"For patch-based image inputs (or regional features), the whole patches of an image are swapped with those of another one within the same sequence, as illustrated in Obj 2 in Figure 2. Patch-Based.",
"We can perform the aforementioned swapping prediction with a finer granularity, directly on the image patches.",
"Assuming each image I i is cropped into w patches (or w detected regions), i .",
"e .",
"{ i i,k } wk =1 = { i i, 1 , ..., i i,w } , we randomly select M (ranging from 1 to w ) number of patches each from the two images I i , I j ( i . e . { i i,p } , { i i,q } , p, q M -sized sampled indices) to be swapped with probability .",
"Specifically, for each image patch i i,m I i , a randomly selected image patch i j,n I j is sampled to be swapped with.",
"The sampled M -sized indices do not need to be the same set of integers for each image.",
"The Obj 3 in Figure 2 illustrates the patch-based swapping prediction with w = 4 and M = 2 .",
"Prior works extend the masked learning to the visual modality, where the masked target is either a predefined discrete visual vocabulary (Sun et al., 2019; Bao et al., 2021) or (soft) object class labels (Lu et al., 2019; Su et al., 2020; Chen et al., 2020a).",
"In this work, we construct a feature-based target vocabulary dynamically in each training batch.",
"We first randomly select the same amount 9 As higher chances that the complementary textual information is also masked out from different steps.",
"of X % ( X = 15 ) patches for each image to be masked out (replaced with 0-tensor), and then construct a target vocabulary from the original output representations (before masking) of these patches.",
"Concretely, denote the output representation of an input image-patch i i,m as h ( i ) i,m and the masked positions of I i as D i , we can construct a candidate list from all the output representations of the patches at the masked positions of each image, i .",
"e .",
"C = { h ( i ) i,m }{ h ( i ) j,n } , m, n D i , D j .",
"Denote the masked image patches (the gray-colored image patches in Figure 2) as mask(i) i,m , for each output masked representation h ( mask(i) ) i,m , we concatenate it with all the candidates, i .",
"e .",
"h ( mask(i) ) i,m || h ( i' ) , i' C , which results in | C | concatenated representations for each masked position.",
"A | C | -way multi-class classification can then be performed by maximizing the probability of p ( i i,m | h ( mask(i) ) i,m ; C ) .",
"For robust training, we additionally: (1) shuffle the candidate set C for each masked position to prevent overfitting, and (2) ensure the overlapping of masked positions in each pair of images, D i D j , is < 50%, allowing the models to utilize information of similar regions from other images in the sequence.",
"As the mechanism in some objectives cannot guarantee mutually exclusive impacts ( e . g . performing ISP and PISP simultaneously may create confusing swapped patches), we employ a turn-taking fashion, with uniform probability, one of the objectives ( Obj ) is sampled for each training mini-batch.",
"The overall pretraining objective is defined as below: L = LMLM + L Obj , Obj { ISP , PISP , SMRM } (1) 4.3 Order Decoder BERSON BERSON is a recently proposed state-of-the-art neural sentence ordering framework (Cui et al., 2020), where a pointer network (Vinyals et al., 2016) exploits both the local (relative pairwise order) and global (self-attentions on top of the entire input sequence) information of the inputs to decode the predicted order.",
"BERSON mainly exploits the [CLS] output representations for relational understanding, which aligns well with how our encoders are pretrained (Figure 2).",
"We integrate our encoders (with or without sequence-aware pretraining) into BERSON, replacing its original BERT encoder.",
"The BERSON-module-specific components are freshly initialized and then the entire integrated module is finetuned on our sequencing task.",
"Our experiments seek to answer these questions: (1) How valid is the proposed task for humans to complete?",
"(2) Is multimodality helpful?",
"(3) Can the proposed sequence-aware pretraining utilize multimodality more effectively?",
"(4) How would results differ when alternative orders are considered?",
"Position-Based metrics concern the correctness of the absolute position of each item in a sequence, including: (1) Accuracy (Acc) which computes the ratio of absolute positions in the ground truth order that are correctly predicted; (2) Perfect Match Ratio (PMR) which measures the percentage of predicted orders exactly matching the ground truth orders; and (3) Distance (Dist.) which measures the average distance 11 between the predicted and ground truth positions for each item.",
"Longest Common Subsequence computes the average longest subsequences in common (Gong et al., 2016) between the predicted and ground truth orders ( L q ).",
"We also consider a stricter version, longest common substring, which requires the consecutiveness for the comparisons ( L r ).",
"Kendall's Tau ( ) (Lapata, 2003) is defined as 1 2 (# inversions ) / (# pairs ) , where the inversion denotes that the predicted relative order of a pair of items is inverted compared to the corresponding ground truth relative order, and # pairs = (cid:0) N 2 (cid:1) for N -length sequence.",
"Each metric focuses on different perspectives of the predictions, i .",
"e .",
"position metrics concern the absolute correctness, while common subsequence and metrics measure if general sequential tendency is preserved despite incorrect absolute positions.",
"We use the original data splits for RecipeQA.",
"For WikiHow, to prevent models' exploiting knowledge from similar articles, we split the data so that certain (sub)categories do not overlap in each split.",
"We use only the train splits in each dataset to perform their respective pretraining.",
"More details of the data splits are in Append.",
"Sec.",
"A. Preliminary studies show that joint training with both RecipeQA and WikiHow data does not necessarily improve 11 Except for distance metric, higher scores are better.",
"the downstream performance, thus the models evaluated in the two datasets are trained simply using their respective training sets for faster convergence.",
"We cap the overall sequence length at 5 and each step description with maximally 5 sentences for both models and humans.",
"The maximum input length per step is 60 tokens (overall maximum length = 300 ) for training and GPU memory effi-ciency.",
"= 0 .",
"5 for both ISP and PISP.",
"All images are resized to 224 224 , and 32 32 patch is used for CLIP-based models, resulting in 7 7 = 49 patches per image.",
"Aside from standard positional embedding, we only supplement a modality token type embedding (text := 0, image := 1) to the multimodal models.",
"Pretrained weights for each encoder is obtained either from their corresponding code bases or by running their codes on our setup.",
"12 5.3 Standard Benchmark Results Table 2 summarizes both the human and model performance for each input modality evaluated using the original ground truth orders on the golden-test-set, whereas Table 3 summarizes a more detailed breakdown of the model performance when incrementing combinations of pretraining objectives.",
"As is shown, multimodal information is veri-fied consistently helpful for humans.",
"Compared under same scenario with or without the sequence-aware pretraining, the two multimodal models consistently outperform their text-only counterparts, where the proposed pretraining technique is shown particularly effective for the patch-based multimodal model (CLIP-ViL).",
"However, our top-performing models still exhibit significant gaps below human performance, especially in PMR.",
"Additionally, we observe a different trend in the two datasets where the multimodality benefits more in RecipeQA than WikiHow.",
"The gap between the multimodal human and model performance is larger than the text-only counterparts in WikiHow, while a reversed trend is shown in RecipeQA.",
"We hypothesize that recipes may contain more domain-specific language usages and/or less words for the pretrained language models and hence benefits more from the our in-domain sequence-aware pretraining.",
"Humans, on the other hand, benefit more from the images in WikiHow as its texts are hypothesized to contain more ambiguities.",
"models perform closer to humans, and on which the multimodal information is most efficiently utilized.",
"In Figure 3 we select categories with the top and least performance gaps (with PMR metric, top=3, least=2) between the human and our best performing models.",
"We observe that the categories on which multimodal models outperform the text-only ones the most are also the categories the models perform closest to humans, e .",
"g .",
"Home and Garden .",
"We hypothesize that the images in these categories are well complementary to the texts and that our sequence-aware grounding performs effectively.",
"In contrast, in categories such as Arts and Entertainment and Hobbies and Crafts where humans still enjoy benefits from multimodal information, our models have difficulty utilizing the multimodal information.",
"We hypothesize that better visual understanding may alleviate the potentially suboptimal grounding as images of these categories can contain many non-common objects.",
"For each instance where alternative ground truth orders exist, the performance is computed by the best each predicted order can obtain against all the",
"ground truth orders 13 , denoted by multi-reference performance , and the subset containing these instances is denoted as the multi-reference subset .",
"Statistics.",
"Table 5 lists the essential statistics of the multi-reference subsets, including the counts of the multi-reference instance for each dataset and modality, as well as the per-instance statistics.",
"Multi-Reference Performance.",
"The noticeable main competitors in Table 2 are multimodal and text-only models, and hence for conciseness, in Table 4 we mainly report the multi-reference version 13 Jointly considered from all the evaluation metrics.",
"14 The overall average number of ground truth references becomes 1.19, 1.23, 1.09 for multimodal, text-only, and image-only versions in WikiHow; and 1.10, 1.17, 1.14 in RecipeQA.",
"of their best performing variants with the selected metrics.",
"Several trends still hold: (1) Multimodal models still outperform the text-only counterparts.",
"(2) Human performance is still well above models' even under multi-reference setups.",
"Additionally, both humans and models perform significantly worse in the multi-reference subset when single (original) ground truth is enforced, implying the validity of our alternative order annotations.",
"We originally hypothesize that enforcing the original authored order to be the only ground truth would be unfair to the text-only models, as images can often better represent the detailed scene changes omitted by the texts, while in reality certain steps may not need to strictly follow the authored order.",
"Judging from the number of instances that improve after evaluating with alternative orders, the text-only model indeed benefits more from the multi-reference setup.",
"Examining the general trends in Table 4, one can conclude that the textual contents indeed posses certain levels of ambiguities where images can help to alleviate.",
"However, as the performance gaps between multimodal and text-only models are still significant under the multi-reference settings, advantages of multimodality.",
"Note that humans achieve perfect performance on the multi-reference subset in RecipeQA, though unlikely it may seem, it is mainly due to recipes tend to have rarer possible alternative orders.",
"WikiHow Categories.",
"Table 6 lists the WikiHow categories with the most (top-5) annotated multi-reference ground truths.",
"Note that the categories with more annotated alternative ground truths are also among the worse performance from both humans and models (Figure 3).",
"We provide sample qualitative inspections in Append.",
"Sec.",
"C.1.",
"sequential reasoning which is shown evident for procedural understanding (Tomkins, 1952; Baron-Cohen et al., 1986; Loucks et al., 2017).",
"In NLP, existing works attempt the sequencing task as sorting a series of unordered sentences (Chen et al., 2016; Cui et al., 2018; Logeswaran et al., 2018; Oh et al., 2019; Lee et al., 2020; Calizzano et al., 2021) from paper abstracts or short paragraphs.",
"While certain prior work also attempts to extend it to incorporate multimodality (Agrawal et al., 2016), the dataset used, Visual StoryTelling (Huang et al., 2016), features album images that were not intended to be procedural nor supply unstated details to complement the texts.",
"In computer vision, existing work leverages shuffle frame prediction for learning video representations (Lee et al., 2017; Xu et al., 2019; Wang et al., 2020; Li et al., 2020) as well as cycle consistency constraints for learning temporal dynamics (Epstein et al., 2021).",
"Zellers et al. (2021) also features a pairwise relative frame re-ordering objective to learn temporal common sense from scripted videos, however, as their downstream tasks mainly concern visual reasoning and ordering by frame-text-matching (also on Visual StoryTelling), the re-ordering objective is more focused on the visual modality.",
"Our work takes a different perspective to tackle a comprehensive multimodal sequencing task with a focus on the procedural task-solving knowledge and gauging the helpfulness of complementary information in different modalities.",
"Task/Procedure Understanding.",
"Other works have utilized WikiHow for learning task knowledge.",
"In NLP, textual descriptions of WikiHow have been used for abstractive summarization (Koupaee and Wang, 2018), procedural understanding (Zhou et al., 2019; Tandon et al., 2020), and intent estimation (Zhang et al., 2020a).",
"Prior work (Zhang et al., 2020b) considers WikiHow for learning event temporal ordering, but limited to only pairwise relations.",
"A concurrent work uses WikiHow to infer visual goals (Yang et al., 2021).",
"We hope our curation can help advancing the goal of comprehensive multimodal procedural understanding.",
"Another popular form of comprehending given procedures is through a multiple choice machine comprehension task.",
"Prior work has utilized text book figures (Kembhavi et al., 2017) as a holistic \"reading reference\" for models to select the correct order of certain (textually described) events from given multiple choices .",
"Another work attempts the original visual ordering task of RecipeQA (Liu et al., 2020) (also an multiple choice task).",
"However, we argue that our task tackles a more complex task as the desired orders need to be directly derived and the event-wise complementary multimodal understanding is not an essential component in these existing works.",
"Multimodality.",
"Beside models used in this work, there are several recent advanced multimodal grounding techniques (Tan and Bansal, 2019; Li et al., 2019; Lu et al., 2019; Su et al., 2020; Chen et al., 2020b; Huang et al., 2020; Wen et al., 2021).",
"We utilize VisualBERT and CLIP-ViL for their simplicity to be adapted to our task and easier integration to our proposed pretraining techniques, however, our framework is able to incorporate any of the aforementioned multimodal models.",
"In this work we present studies of language and multimodal models on procedure sequencing, leveraging popular online instructional manuals.",
"Our experiments show that both multimodality and our proposed sequence-aware pretraining are helpful for multimodal sequencing, however, the results also highlight significant gaps below human performance ( 15% on PMR).",
"We provide insights as well as resources, such as the multi-reference annotations of the sequencing task, to spur future relevant research.",
"We also anticipate that the alternative orders defined and annotated in our work can benefit more comprehensive task-procedure understanding.",
"Future work such as predicting task steps which can be parallel or interchangeable, and understanding step dependencies can be explored.",
"Many thanks to Liunian Harold Li for his original CLIP-ViL implementation; to I-Hung Hsu and Zi-Yi Dou for their helpful discussion and feedback; and to the anonymous reviewers for their constructive suggestions.",
"This material is based on research supported by the Machine Common Sense (MCS) program under Cooperative Agreement N66001-19-2-4032 with the US Defense Advanced Research Projects Agency (DARPA) and a CISCO research contract.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing DARPA, CISCO, or the U.S. Government.",
"We hereby acknowledge that all of the co-authors of this work are aware of the provided ACM Code of Ethics and honor the code of conduct.",
"This work is mainly about sequencing a given series of multimodal task procedures, represented by text descriptions along with their images.",
"The followings give the aspects of both our ethical considerations and our potential impacts to the community.",
"Dataset.",
"We collect the human performance on our sequencing task (both the standard human performance and the alternative order annotations) via Amazon Mechanical Turk (MTurk) and ensure that all the personal information of the workers involved (e.g., usernames, emails, urls, demographic information, etc.) is discarded in our dataset.",
"While the sequence orders either from the original author intended ones or those annotated by the workers for the standard performance may possess unintended biases against certain population group of people ( e . g . due to cultural differences or educational differences, some tasks may be performed differently from the original intended orders), we anticipate the additional multi-reference annotation can alleviate such an issue as well as provide a broader view to approach procedural understanding, i .",
"e .",
"certain task-steps can be interchanged.",
"This research has been reviewed by the IRB board and granted the status of an IRB exempt .",
"The detailed annotation process (pay per amount of work, guidelines) is included in the appendix; and overall, we ensure our pay per task is above the the annotator's local minimum wage (approximately $12 USD / Hour).",
"We primarily consider English speaking regions for our annotations as the task requires certain level of English proficiency.",
"Techniques.",
"We benchmark the proposed sequencing task with the state-of-the-art large-scale pretrained language and multimodal models with our novel sequence-aware pretraining techniques.",
"As commonsense and task procedure understanding are of our main focus, we do not anticipate production of harmful outputs, especially towards vulnerable populations, after training models on our proposed task."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"objective",
"method",
"method",
"abstain",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain"
] |
[
"Aspect-based sentiment analysis (ABSA) involves three subtasks, i.e., aspect term extraction, opinion term extraction, and aspect-level sentiment classification.",
"Most existing studies focused on one of these subtasks only.",
"Several recent researches made successful attempts to solve the complete ABSA problem with a unified framework.",
"However, the interactive relations among three subtasks are still under-exploited.",
"We argue that such relations encode collaborative signals between different subtasks.",
"For example, when the opinion term is delicious , the aspect term must be food rather than place .",
"In order to fully exploit these relations, we propose a Relation-Aware Collaborative Learning (RACL) framework which allows the subtasks to work coordinately via the multi-task learning and relation propagation mechanisms in a stacked multi-layer network.",
"Extensive experiments on three real-world datasets demonstrate that RACL significantly outperforms the state-of-the-art methods for the complete ABSA task.",
"Aspect-based sentiment analysis (ABSA) is a fine-grained task which aims to summarize the opinions of users towards specific aspects in a sentence.",
"ABSA normally involves three subtasks, namely aspect term extraction (AE), opinion term extraction (OE), and aspect-level sentiment classification (SC).",
"For example, given a review The place is small and cramped but the food is delicious. , AE aims to extract a set of aspect terms { place , food } .",
"OE aims to extract a set of opinion terms { small , cramped , delicious } .",
"Meanwhile, it is expected for SC to assign a sentiment polarity negative and positive to the aspect place and food , respectively.",
"Most existing works treat ABSA as a two-step task containing AE and SC.",
"They develop one separate method for each subtask (Tang et al., 2016; Xu et al., 2018; Li et al., 2018a; Hu et al., 2019), or take OE as an auxiliary task of AE (Wang et al., 2017; Li et al., 2018b).",
"In order to perform ABSA for practical use, the separate methods need to be pipelined together.",
"Recently, several studies attempt to solve ABSA in a unified framework (Wang et al., 2018a; Li et al., 2019; He et al., 2019; Luo et al., 2019).",
"Despite their effectiveness, we argue that these methods are not sufficient to yield satisfactory performance for the complete ABSA task.",
"The key reason is that the interactive relations among different subtasks have been largely neglected in existing studies.",
"These relations convey collaborative signals which can enhance the subtasks in a mutual way.",
"For example, the opinion term delicious can serve as the evidence of the aspect term food , and vice versa.",
"In the following, we first analyze the interactive relations among different subtasks, and then present our RACL framework which is developed to exploit these relations.",
"The detailed relations are summarized in Figure 1 (left), where each arrow denotes one specific relation R i .",
"R 1 indicates the dyadic relation between AE and OE.",
"In practice, the aspect terms must be the targets of opinion, indicating that most aspect terms like place can only be modified by corresponding opinion terms like small and cramped rather than a term like delicious .",
"Hence AE and OE might hold informative clues to each other.",
"Ours (cid:51) (cid:51) (cid:51) (cid:51) R 2 indicates the triadic relation between SC and R 1 .",
"One critical problem in SC is to determine the dependency between the aspect and its context.",
"For example, the context small and cramped plays an important role in predicting the polarity of place .",
"Such a dependency is highly in accordance with R 1 which emphasizes the interaction between the aspect and opinion terms.",
"Hence SC and R 1 can help refine the selection process for each other.",
"R 3 indicates the dyadic relation between SC and OE.",
"The specific opinion terms generally convey specific polarities.",
"For example, fantastic is often positive.",
"The opinion terms extracted in OE should be paid more attention when predicting the sentiment polarity in SC.",
"R 4 indicates the dyadic relation between SC and AE.",
"In the complete ABSA task, the aspect terms are unknown and SC will assign a polarity to every word.",
"The aspect terms, e.g., place and food , will have their corresponding polarities, while other words are considered as the background ones without sentiment.",
"That is to say, the results from AE should be helpful in supervising the training of SC.",
"When reviewing the literature on the ABSA task, we find that existing separate methods either do not utilize any relations, or only utilize R 1 by treating OE as an auxiliary task of AE.",
"Meanwhile, the unified methods at most explicitly utilize R 3 and R 4 .",
"In view of this, we propose a novel Relation-Aware Collaborative Learning (RACL) framework to fully exploit the interactive relations in the complete ABSA task.",
"We compare our model with existing methods by their capability in utilizing interactive relations in Table 1. RACL is a multi-layer multi-task learning framework with a relation propagation mechanism to mutually enhance the performance of subtasks.",
"For multi-task learning, RACL adopts the shared-private scheme (Collobert and Weston, 2008; Liu et al., 2017).",
"Subtasks AE, OE, and SC first jointly train the low-level shared features, and then they train their high-level private features independently.",
"In this way, the shared and private features can embed the task-invariant and task-oriented knowledge respectively.",
"For relation propagation, RACL improves the model capacity by exchanging informative clues among three subtasks.",
"Moreover, RACL can be stacked to multiple layers to perform collaborative learning at different semantic levels.",
"We conduct extensive experiments on three datasets.",
"Results demonstrate that RACL significantly outperforms the state-of-the-art methods for both the single subtasks and the complete ABSA task.",
"Aspect-based sentiment analysis (ABSA) is first proposed by Hu and Liu (2004) and has been widely studied in recent years (Zhang et al., 2018).",
"We organize existing studies by how the subtasks are performed and combined to perform ABSA.",
"Separate Methods Most existing studies treat ABSA as a two-step task containing aspect term extraction (AE) and aspect-based sentiment classification (SC), and develop separate methods for AE (Popescu and Etzioni, 2005; Wu et al., 2009; Li et al., 2010; Qiu et al., 2011; Liu et al., 2012; Chen et al., 2014; Chernyshevich, 2014; Toh and Wang, 2014; Vicente et al., 2015; Liu et al., 2015, 2016; Yin et al., 2016; Wang et al., 2016; Li and Lam, 2017; Clercq et al., 2017; He et al., 2017; Xu et al., 2018; Yu et al., 2019), and SC (Jiang et al., 2011; Mohammad et al., 2013; Kiritchenko et al., 2014; Dong et al., 2014; Vo and Zhang, 2015; Ma et al., 2017; Wang et al., 2018b; Zhu and Qian, 2018; Chen and Qian, 2019; Zhu et al., 2019), respectively.",
"Some of them resort to the auxiliary task opinion term extraction (OE) and exploit their relation for boosting the performance of AE.",
"For the complete ABSA task, results from two steps must be merged together in a pipeline manner .",
"In this way, the relation between AE/OE and SC is totally neglected, and the errors from the upstream AE/OE will be propagated to the downstream SC.",
"The overall performance of ABSA task is not promising for pipeline methods.",
"Unified Methods Recently, several studies attempt to solve ABSA task in a unified framework.",
"The unified methods fall into two types: collapsed tagging (Mitchell et al., 2013; Zhang et al., 2015; Wang et al., 2018a; Li et al., 2019) and joint training (He et al., 2019; Luo et al., 2019).",
"The former combines the labels of AE and SC to construct collapsed labels like { B-senti, I-senti, O } .",
"The subtasks need to share all trainable features without distinction, which is likely to confuse the learning process.",
"Moreover, the relations among subtasks cannot be explicitly modeled for this type of methods.",
"Meanwhile, the latter constructs a multi-task learning framework where each subtask has inde-pendent labels and can have shared and private features.",
"This allows the interactive relations among different subtasks to be modeled explicitly for the joint training methods.",
"However, none of existing studies along this line has fully exploited the power of such relations.",
"We differentiate our work from aforementioned methods in that we propose a unified framework which exploits all dyadic and triadic relations among subtasks to enhance the learning capability.",
"AE aims to predict a tag sequence YA = { y A 1 , ..., y Ai , ... , y An } for aspect extraction, where y Ai { B, I, O } denotes the beginning of, inside of , and outside of an aspect term.",
"OE aims to predict a tag sequence YO = { y O 1 , ..., y Oi , ... , y On } for opinion extraction, where y Oi { B, I, O } denotes the beginning of, inside of , and outside of an opinion term.",
"SC aims to predict a tag sequence YS = { y S 1 , ..., y Si , ... , y Sn } for sentiment classification, where y Si { pos , neu , neg } denotes the positive, neutral , and negative sentiment polarities towards each word.",
"Our proposed RACL is a unified multi-task learning framework which enables propagating the interactive relations (denoted as the same R 1 .. R 4 as those in Figure 1) for improving the ABSA performance, and it can be stacked to multiple layers to interact subtasks at different semantic levels.",
"We present the overall architecture of RACL in Figure",
"2(a) and details of a single layer in Figure",
"2(b).",
"In particular, a single RACL layer contains three modules: AE, OE, and SC, where each module is designed for the corresponding subtask.",
"These modules receive a shared representation of the input sentence, then encode their task-oriented features.",
"After that, they propagate relations R 1",
"..",
"R 4 for collaborative learning by exchanging informative clues to further enhance the task-oriented features.",
"Finally, three modules will make predictions for the corresponding tag sequences YA , YO , and YS based on the enhanced features.",
"In the following, we first illustrate the relation-aware collaborative learning in one layer, then show the stacking and the training of the entire RACL.",
"3.3 Relation-Aware Collaborative Learning Input Word Vectors Given a sentence S e , we can map the word sequence in S e with either pre-trained word embeddings (e.g., GloVe) or pre-trained language encoders (e.g., BERT) to generate a sequence of word vectors E = { e 1 , ..., e i , ..., e n } R d w n , where d w is the dimension of word vectors.",
"We will examine the effects of these two types of word vectors in the experiments.",
"Multi-task Learning with Shared-Private Scheme To perform multi-task learning, different subtasks should focus on the different characteristics of a shared training sample.",
"Inspired by the shared-private scheme (Collobert and Weston, 2008; Liu et al., 2017), we extract both the shared and private features to embed task-invariant and task-oriented knowledge for the AE, OE, and SC modules.",
"To encode the shared task-invariant features, we simply feed each e i in E into a fully-connected layer and generate a transformed vector h i R d h .",
"We then obtain a sequence of shared vectors H = { h 1 , ..., h i , ..., h n } R d h n for each sentence which will be jointly trained by all subtasks.",
"Upon the shared task-invariant features H , the AE, OE, and SC modules will encode the task-oriented private features for the corresponding subtasks.",
"We choose a simple CNN as the encoder function F due to its high computation efficiency.",
"For subtasks AE and OE, the key features for determining the existence of aspect and opinion terms are the representations of the original and adjacent words.",
"Therefore, we construct two encoders to extract local AE-oriented features XA and OE-oriented features XO : FA : H XA , XA R d c n , FO : H XO , XO R d c n (1) For subtask SC, the process of feature generation is different from that in AE/OE.",
"In order to determine the sentiment polarity towards an aspect term, we need to extract related semantic information from its context.",
"The critical problem in SC is to determine the dependency between an aspect average pooling AEOE Y A(1) Y S(1) Y O(1) Y A(2) Y S(2) Y O(2) Y A(L) Y S(L) Y O(L) layer (2) layer (1) layer (L) ...",
"term and its context.",
"Moreover, in the complete ABSA task, the aspect terms are unknown in SC and it needs to assign a polarity to every word in S e .",
"Based on these observations, we first encode the contextual features X ctx from H : F ctx : H X ctx , X ctx R d h n (2) Then we treat the shared vector h i as the query aspect and compute the semantic relation between the query and contextual features using the attention mechanism: ds ( i (cid:54) = j ) i,j = (( h i ) T X ctxj ) [ log 2 (2 + | i j | )] 1 , M ctxi,j = exp ( ds i,j ) (cid:80) nk =1 exp ( ds i,k ) , (3) where ds ( i (cid:54) = j ) i,j denotes the dependency strength between the i -th query word and the j -th context word, and M ctxi,j is the normalized attention weight of ds ( i (cid:54) = j ) i,j .",
"Finally, for the aspect query w i , we can obtain the global SC-oriented features X Si by a weighted sum of all contextual features (except the one for w i ): XS i = n (cid:88) j =1 ( M ctx i,j X ctx j ) (4) Propagating Relations for Collaborative Learning After encoding task-oriented features, we propagate the interactive relations ( R 1 .. R 4 ) among subtasks to mutually enhance the AE, OE, and SC modules.",
"We add a coefficient [ log 2 (2+ | i j | )] 1 based on the absolute distance between two words.",
"The rationale is that the adjacent context words should contribute more to the sentiment polarity.",
"(1) R 1 is the dyadic relation between AE and OE, which indicates that AE and OE might hold informative clues to each other.",
"In order to model R 1 , we want the AE-oriented features XA and the OE-oriented features XO to exchange useful information based on their semantic relations.",
"Take the subtask AE as an example, the semantic relation between the word in AE and that in OE is defined as follows: sr ( i (cid:54) = j ) i,j = ( X Ai ) T X Oj , MO 2 A i,j = exp ( sr i,j ) (cid:80) nk =1 exp ( sr i,k ) (5) For the word w i in AE, we can obtain the useful clues XO 2 A i from OE by applying a weighted sum of semantic relations to all words in OE (except the word w i itself), i.e., XO 2 A i = (cid:88) n j =1 ( MO 2 A i,j X Oj ) (6) We then concatenate the original AE-oriented features XA and the useful clues XO 2 A from OE as the final features for AE, and feed them into a fully-connected layer to predict the tags of aspect terms: YA = softmax ( WA ( XA XO 2 A )) , (7) where WA R 3 2 d c is a transformation matrix, YA R 3 n is the predicted tag sequence of AE.",
"For subtask OE, we use the transposed matrix of sr ( i (cid:54) = j ) i,j in Eq.",
"5 to compute the corresponding MA 2 O .",
"In this way, the semantic relation between AE and OE will be consistent without regard to the direction.",
"Then we can obtain the useful clues XA 2 O from AE and generate the predicted tag sequence YO R 3 n in a similar way, i.e., YO = softmax ( WO ( XO XA 2 O )) (8) Additionally, each w i cannot be an aspect term and an opinion term at the same time, so we add a regularization hinge loss to constrain YA and YO : LR = (cid:88) n i =1 max (0 , P y Ai { B,I } + P y Oi { B,I } 1 . 0) , (9) where P denotes the probability under the given conditions.",
"(2) R 2 is the triadic relation between SC and R 1 .",
"Remember that the dependency between the aspect term and its context is critical for subtask SC, and we have already calculated this dependency using the normalized attention weight M ctx .",
"Hence we can model R 2 by propagating R 1 to M ctx .",
"We use MO 2 A as the representative of R 1 , and add it on M ctx to denote the influence from R 1 to SC.",
"More formally, we define R 2 as the following operation: M ctxi,j M ctxi,j + MO 2 A i,j (10) Actually, MO 2 A characterizes the dependency between aspect terms and contexts in the view of term extraction while M ctx characterizes it in the view of sentiment classification.",
"The dual-view relation R 2 can help refine the selection processes for both extraction and classification subtasks.",
"(3) R 3 is the dyadic relation between SC and OE, which indicates that the extracted opinion terms should be paid more attention when predicting the sentiment polarity.",
"In order to model R 3 , similarly to the method for R 2 , we update M ctx in SC using the generated tag sequence YO from OE: M ctxi,j M ctxi,j + P y Oj { B,I } [ log 2 (2 + | i j | )] 1 (11) By doing this, the opinion terms can get larger weights in the attention mechanism.",
"Consequently, they will contribute more to the prediction of the sentiment polarity.",
"After getting the interacted values for M ctx , we can recompute the SC-oriented features XS in Eq.4 accordingly.",
"Then we concatenate H and XS as the final features for SC and feed them into a fully-connected layer to predict sentiment polarities for the candidate aspect terms: YS = softmax ( WS ( H XS )) , (12) where WS R 3 2 d h is a transformation matrix, YS R 3 n is the predicted tag sequence of SC.",
"(4) R 4 is the dyadic relation between SC and AE, which indicates that the results from AE are helpful in supervising the training of SC.",
"Clearly, only aspect terms have sentiment polarities.",
"Although SC needs to assign a polarity to every word, we know the ground truth aspect terms in AE during the training process.",
"Therefore, we directly use the ground truth tag sequence YA of AE to refine the labeling process in SC.",
"Specifically, only the predicted tags towards true aspect terms would be counted in the training procedure: y S i I ( y A i ) y S i , (13) where I ( y Ai ) equals to 1 if w i is an aspect term and to 0 if not.",
"Notice that this approach is only used in the training procedure.",
"When using one single RACL layer, AE, OE, and SC modules only extract corresponding features in a relatively low linguistic level, which may be insufficient to serve as the evidence to label each word.",
"Hence we stack RACL to multiple layers to obtain high-level semantic features for subtasks, which helps to conduct deep collaborative learning.",
"Specifically, we first encode features X ctx (1) , XA (1) XO 2 A (1) , and XO (1) XA 2 O (1) in layer (1) .",
"Then in layer (2) , we input these features for SC, AE, and OE to generate X ctx (2) , XA (2) , and XO (2) .",
"In this way, we can stack RACL to L layers.",
"We then conduct average pooling on results from all layers to obtain the final prediction: YT = avg ([ YT (1) , YT (2) , ..., YT ( L ) ]) , (14) where T { A, O, S } denotes the specific subtask, and L is the number of layers.",
"This shortcut-like architecture can enforce the features in the low layers to be meaningful and informative, which in turn helps the high layers to make better predictions.",
"After generating the tag sequences YA , YO , and YS for the sentence S e , we compute the cross-entropy",
"cross-entropy loss of each subtask:",
"LT = (cid:88) n i =1 (cid:88) J j =1 y Tij log ( y Tij ) , (15) where T { A, O, S } denotes the subtask, n is the length of S e , J is the category of labels, y Ti and y Ti are the predicted tags and ground truth labels.",
"The final loss L of RACL is the combination of the loss for subtasks and the loss for regularization, i.e., L = (cid:80) LT + LR , where is a coefficient.",
"We then train all parameters with back propagation.",
"Datasets We evaluate RACL on three real-world ABSA datasets from SemEval 2014 (Pontiki et al., 2014) and 2015 (Pontiki et al., 2015), which include reviews from two domains: restaurant and laptop.",
"Original datasets only have ground truth labels for aspect terms and corresponding sentiment polarities, while labels for opinion terms are annotated by two previous works (Wang et al., 2016, 2017).",
"All datasets have a fixed training/test split.",
"We further randomly sample 20% training data as the development set to tune hyper-parameters, and only use the remaining 80% for training.",
"The statistics for datasets are summarized in Table 2. Table 2: The statistics of datasets.",
"Settings We examine RACL with two types of word vectors: the pre-trained word embedding and pre-trained language encoder .",
"In the word embedding implementation, we follow the previous studies (Xu et al., 2018; He et al., 2019; Luo et al., 2019) and use two types of embeddings, i.e., general-purpose and domain-specific embeddings.",
"The former is from GloVe vectors with 840B tokens (Pennington et al., 2014), and the latter is trained on a large domain-specific corpus using fastText and published by Xu et al. (2018).",
"Two types of embeddings are concatenated as the word vectors.",
"In the language encoder implementation, we follow Hu et al. (2019) by using the BERT Large (Devlin et al., 2019) as the backbone and fine-tuning it during the training process.",
"We denote these two implementations as RACL-GloVe and RACL-BERT 1 .",
"For RACL-GloVe, we set the dimension d w =400, d h =400, d c =256 and the coefficient =1e-5.",
"Other hyper-parameters are tuned on the development set.",
"The kernel size K of CNN and the layer number L is set to { 3,3,5 } and { 4,3,4 } for three datasets, respectively.",
"We train the model for fixed epochs using Adam optimizer (Kingma and Ba, 2015) with learning rate 1e-4 and batch size 8.",
"For RACL-BERT, we set d w to 1024 and learning rate to 1e-5 for fine-tuning BERT, and other hyper-parameters are directly inherited from RACL-GloVe.",
"We use four metrics for evaluation, i.e., AE-F 1 , OE-F 1 , SC-F 1 , and ABSA-F 1 .",
"The first three denote the F 1 -score of each subtask, while the last one measures the overall performance for complete ABSA 2 .",
"To compute ABSA-F 1 , the result for an aspect term would be considered as correct only when both AE and SC results are correct.",
"The model achieving the minimum loss on the development set is used for evaluation on the test set.",
"2 Following He et al. (2019), if an aspect term contains multiple words, we use the predicted sentiment of the first word as the SC result.",
"Moreover, aspect terms with conflict sentiment labels are ignored when computing SC-F 1 and ABSA-F 1 .",
"The same goes for all baseline methods.",
"Baselines To demonstrate the effectiveness of RACL for the complete ABSA task, we compare it with the following pipeline and unified baselines.",
"The hyper-parameters for baselines are set to the optimal values as reported in their papers.",
"{ CMLA, DECNN } + { TNet, TCap } : CMLA (Wang et al., 2017) and DECNN (Xu et al., 2018) are the state-of-the-art methods for AE, while TNet (Li et al., 2018a) and T(rans)Cap (Chen and Qian, 2019) are the top-performing methods for SC.",
"We then construct four pipeline baselines through combination.",
"MNN (Wang et al., 2018a): is a unified method utilizing the collapsed tagging scheme for AE and SC.",
"E2E-ABSA (Li et al., 2019): is a unified method using the collapsed tagging scheme for AE and SC, and it introduces the auxiliary OE task without explicit interaction.",
"DOER (Luo et al., 2019): is a multi-task unified method which jointly trains AE and SC, and it explicitly models the relation R 4 .",
"IMN-D (He et al., 2019): is a unified method involving joint training for AE and SC with separate labels.",
"The OE task is fused into AE to construct five-class labels.",
"It explicitly models relations R 3 and R 4 3 .",
"SPAN-BERT (Hu et al., 2019): is a pipeline method using BERT Large as the backbone.",
"A multi-target extractor is used for AE, then a polarity classifier is used for SC.",
"IMN-BERT : is an extension of the best unified baseline IMN-D with BERT Large .",
"By doing this, we wish to conduct convincing comparisons for the BERT-style methods.",
"The input dimension and learning rate of IMN-BERT are the same as our RACL-BERT, and other hyper-parameters are inherited from IMN-D .",
"The comparison results for all methods are shown in Table 3. The methods are divided into three groups: M1 M4 are GloVe-based pipeline methods, M5 M9 are GloVe-based unified methods, and M10 M12 are BERT-based methods.",
"Firstly, among all GloVe-based methods (M1 M9), we can observe that RACL-GloVe consistently outperforms all baselines in terms of 3 For a fair comparison, we remove the auxiliary document-level datasets in TransCap and IMN-D, and only use the same aspect-level datasets as ours.",
"the overall metric ABSA-F 1 , and achieves 2.12%, 2.92%, and 2.40% absolute gains over the strongest baselines on three datasets.",
"The results prove that jointly training all subtasks and comprehensively modeling the interactive relations are critical for improving the performance of the complete ABSA task.",
"Moreover, RACL-GloVe also achieves the best or second best results on all subtasks.",
"This further demonstrates that the learning process of each subtask can be enhanced by the collaborative learning.",
"Another observation from M1 M9 is that the unified methods (M5 M9) perform better than the pipeline ones (M1 M4).",
"Secondly, among the GloVe-based unified methods, RACL-GloVe, IMN-D, and DOER perform better than MNN and E2E-TBSA in general.",
"This can be due to the fact that the former three methods explicitly model interactive relations among subtasks while the latter two do not.",
"We notice that DOER gets a poor SC-F 1 score.",
"The reason might be that it utilizes an auxiliary sentiment lexicon to enhance the words with positive and negative sentiment.",
"It is hard for DOER to handle words with neutral sentiment and this results in a low SC-F 1 score.",
"Thirdly, the BERT-based methods (M10 M12) achieve a better performance than GloVe-based methods by utilizing the large-scale external knowledge encoded in the pre-trained BERT Large backbone.",
"Specifically, SPAN-BERT is a strong baseline in subtask AE by reducing the search space with a multi-target extractor.",
"However, its performance on SC drops a lot because it cannot capture the dependency between the extracted aspect terms in AE and the opinion terms in SC without interactions among subtasks.",
"IMN-BERT achieves relatively high scores on OE and SC, but its performance on AE is the worst among three without the guidance from the relations R 1 and R 2 .",
"In contrast, RACL-BERT gets significantly better overall scores than SPAN-BERT and IMN-BERT on all three datasets.",
"This again shows the superiority of our RACL framework for the complete ABSA task by using all interactive relations.",
"To investigate the effects of different relations on RACL -GloVe/-BERT, we conduct the following ablation study.",
"We sequentially remove each interactive relation and obtain four simplified variants.",
"As expected, all simplified variants in Table 4 have a performance decrease of ABSA-F 1 .",
"The results clearly demonstrate the effectiveness of the proposed relations.",
"Moreover, we find that the relations play more important roles on small datasets than on large ones.",
"The reason might be that it is hard to train a complicated model on small datasets, and the relations can absorb external knowledge from other subtasks.",
"There are two key hyper-parameters in our model: the kernel size K of the CNN encoder and the layer number L .",
"To investigate their impacts, we first vary K in the range of [1, 9] stepped by 2 while fixing L to the values in section 4.1, and then vary L in the range of [1, 7] stepped by 1 while fixing K .",
"We only present the ABSA-F 1 results for RACL-GloVe in Figure 3 since the hyper-parameters of RACL-BERT are inherited from RACL-GloVe.",
"In Figure",
"3(a), K =1 yields extremely poor performance because the raw features are generated only by the current word.",
"Increasing K to 3 or 5 can widen the receptive field and remarkably boosts the performance.",
"However, when further increasing K to 7 or 9, many irrelevant words are added as noises and thus deteriorate the performance.",
"In Figure",
"3(b), increasing L can, to some extent, expand the learning capability and achieve high performance.",
"However, too many layers introduce excessive parameters and make the learning process over complicated.",
"This section details the analysis on results of several examples by different methods for a case study.",
"We choose CMLA+TCap (denoted as PIPELINE), IMN-D, and RACL-GloVe as three competitors.",
"We do not include the BERT-based methods as we wish to investigate the power of the models without the external resources.",
"S1 and S2 verify the effectiveness of relation R 1 .",
"In S1, due to the existence of the conjunction and , two baselines incorrectly extract offers as an opinion term as easy .",
"In contrast, RACL-GloVe can successfully filter out offers in OE by using R 1 .",
"The reason is that offers has never co-occured as an opinion term with the aspect term OS in the training set, and R 1 which connects the AE and OE subtasks will treat them as irrelevant terms.",
"This information will be passed to OE subtask during the testing phase.",
"Similarly, in S2, both baselines fail to recognize looking as an aspect term, because it might be the present participle of look without opinion information.",
"Instead, RACL-GloVe correctly labels it as R 1 provides useful clues from opinion terms faster and sleeker .",
"S3 shows the superiority of relation R 2 which is critical to connect the three subtasks but has never been employed in previous studies.",
"Both baselines successfully extract Dessert and die for for AE and OE, but assign the incorrect neutral sentiment polarity even if IMN-D has emphasized the opinion terms.",
"The reason is that these two terms have not co-occurred in the training samples, and it is hard for SC to recognize their dependency.",
"In contrast, since Dessert and die for are typical words in AE and OE, RACL-GloVe is able to encode their dependency in R 1 .",
"By propagating R 1 to SC using R 2 , RACL-GloVe can assign a correct polarity for Dessert .",
"To take a close look, we visualize the averaged predicted results (left) and the attention weights (right) of all layers in Figure 4. Clearly, the original attention M ctx before of Dessert does not concentrate on die for .",
"After getting enhanced by MO 2 A and OE, M ctx after successfully highlights the opinion words and SC makes a correct prediction.",
"S4 shows the benefits from relation R 3 .",
"IMN-D and RACL-GloVe assign a correct polarity towards Sushi in SC since they both get the guidance from fresh in OE, while PIPELINE gets lost in contexts and makes a false prediction without the help of the opinion term.",
"Notice that S1 S4 simultaneously demonstrate the necessity for R 4 , since RACL-GloVe is not biased by background words and can make correct sentiment predictions in all examples.",
"To demonstrate that our RACL model does not incur the high computational cost, we compare it with two strong baselines DOER and IMN-D in terms of the parameter number and running time.",
"We run three models on the Restaurant 2014 dataset with the same batch size 8 in a single 1080Ti GPU, and present the results in Table 6.",
"Obviously, our proposed RACL has similar computational complexity with IMN-D, and they are both much simpler than DOER.",
"In this paper, we highlight the importance of interactive relations in the complete ABSA task.",
"In order to exploit these relations, we propose a Relation-Aware Collaborative Learning (RACL) framework with multi-task learning and relation propagation techniques.",
"Experiments on three real-world datasets demonstrate that our RACL framework with its two implementations outperforms the state-of-the-art pipeline and unified baselines for the complete ABSA task.",
"We thank the anonymous reviewers for their valuable comments.",
"The work described in this paper is supported by the NSFC projects (61572376, 91646206), and the 111 project (B07037)."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"result",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"other",
"other"
] |
[
"Abstract In this paper, we propose an effective deep learning framework for inducing courteous behavior in customer care responses.",
"The interaction between a customer and the customer care representative contributes substantially to the overall customer experience.",
"Thus, it is imperative for customer care agents and chatbots engaging with humans to be personal, cordial and emphatic to ensure customer satisfaction and retention.",
"Our system aims at automatically transforming neutral customer care responses into courteous replies.",
"Along with stylistic transfer (of courtesy), our system ensures that responses are coherent with the conversation history, and generates courteous expressions consistent with the emotional state of the customer.",
"Our technique is based on a reinforced pointer-generator model for the sequence to sequence task.",
"The model is also conditioned on a hierarchically encoded and emotionally aware conversational context.",
"We use real interactions on Twitter between customer care professionals and aggrieved customers to create a large conversational dataset having both forms of agent responses: generic and courteous.",
"We perform quantitative and qualitative analyses on established and task-specific metrics, both automatic and human evaluation based.",
"Our evaluation shows that the proposed models can generate emotionally-appropriate courteous expressions while preserving the content.",
"Experimental results also prove that our proposed approach performs better than the baseline models.",
"With the advancement of artificial intelligence (AI) and natural language processing (NLP), automatic systems have made immense impact on human lives by assisting them in their everyday",
"works.",
"Human-computer interaction is pervasive in many applications such as chatbots, personal assistants and many more.",
"Natural language generation (NLG) component of such systems is an important aspect of every human computer interaction.",
"Thus research in recent years have been on modulating biases, styles and control in text generation to enhance these interactions.",
"Customer care is an essential tool used by companies to provide guidance, assistance and in building stable customer relations.",
"The ease of access, ease in following-up and immediacy of social media has made it a strong platform for companies and applications to interact with their customers.",
"In this platform, we see the usage of courteous and emphatic language, which is the cen-ter of our current study.",
"For the growth of any company or application it is necessary for the customer care agents to be cordial and amicable to the customer.",
"Thus along with handling queries, it is important for agents to provide customer satisfaction by greeting, empathizing, appreciating feedback, apologizing at the right time, and thus build a strong relation with the customer.",
"In Table 1, we showcase different situations in which an agent can behave courteously, thereby providing a good customer experience.",
"In this work, we focus on proposing an effective deep learning framework to enhance the existing NLG systems by converting their replies to courteous ones, by staying conversationally grounded, and emotionally aware of the user.",
"For any Natural Language Generation (NLG) module (generic or task oriented), courteous response can play an important role in keeping the user engaged with the system.",
"Also, it will make the system more human-like while generating responses.",
"Inducing courteous behavior in responses can be fused with any existing NLG system to give them humanly essence and simultaneously make users Generic Courteous Behaviour How can we help?",
"more comfortable in using these systems leading to an increase in user association with the brand or product.",
"This would eventually lead to customer satisfaction with an increase in customer retention.",
"Moreover, such language conditioning shall ensure that responses are more human-like.",
"Thus, the major motivation behind this task is to create systems that are able to converse with humans ef-ficiently and generate replies in accordance with the emotions of the customer.",
"Courteousness is a virtue of humans and to be able to make a machine behave courteously is a challenging task.",
"Unlike a generic NLG system that focuses in generating responses, our system adds courteous nature and emotional sense to the replies, thereby, making the responses interesting and engaging to the users.",
"Such systems have high applications in many areas/companies that employ chatbots to deal with the customers.",
"We thus propose a novel research direction of inducing courteous behavior in the natural language responses for the customer care domain whilst being contextually consistent.",
"The key contributions of our work are summarized as follows:",
"(i) Creation of a high quality and a large conversational dataset, Courteously Yours Customer Care Dataset (CYCCD) prepared from the actual conversations on Twitter.",
"We provide both forms of agent responses: generic and courteous.",
"(ii) Proposal of a strong benchmark model based on a context and emotionally aware reinforced pointer-generator approach which demonstrates very strong performance (both on quantitative and qualitative analyses) on established and task-specific metrics, both automatic and human evaluation based.",
"The rest of the paper is structured as follows: In section 2, we discuss the related works.",
"In Section 3 we explain the proposed methodology followed by the dataset description in section 4. Experimental details, evaluation metrics and results are presented in section 5 and 6 respectively.",
"In section 7, we present the concluding remarks followed by future directions.",
"Natural language generation (NLG) module has been gaining importance in wide applications such as dialogue systems (Vinyals and Le, 2015; Shen et al., 2018; Wu et al., 2018; Serban et al., 2017a; Raghu et al., 2018; Zhang et al., 2018; Li et al., 2016), question answering systems (Reddy et al., 2017; Duan et al., 2017), and many other natural language interfaces.",
"To help the users achieve their desired goals, response generation provides the medium through which a conversational agent is able to communicate with its user.",
"In (Ser-ban et al., 2017b), the authors have proposed an hierarchical encoder-decoder model for capturing the dependencies in the utterances of a dialogue.",
"Conditional auto-encoders have been employed in (Zhao et al., 2017), that generates diverse replies by capturing discourse-level information in the encoder.",
"Our work differentiates from these previous works in dialogue generation in a way that we embellish the appropriate response content with courteous phrases and sentences, according to the conversation.",
"Hence, our system is an accompaniment to any standalone natural language generation system to enhance its acceptability, usefulness and user-friendliness.",
"Emotion classification and analysis (Herzig et al., 2016) in customer support dialogue is important for better understanding of the customer and to provide better customer support.",
"Lately, a number of works have been done on controlled text generation (Hu et al., 2017; Li et al., 2017; Subramanian et al., 2017; Fedus et al., 2018; Peng et al., 2018) in order to generate responses with desired attributes.",
"Emotion aware text generation (Zhou and Wang, 2018; Zhou et al., 2018; Huang et al., 2018) have gained popularity as it generates responses depending on a specific emotion.",
"Previous works in conditioned text generation have worked on inducing specific biases and behaviors (Herzig et al., 2017) while generation (like emotion, style, and personality trait).",
"Our work is different in the sense that it can encompass different emotional states (like joy, excitement, sadness, disappointment) and traits (like friendliness, apologetic, thankfulness, empathy), as is the demand of the situation.",
"Style transfer has been an emerging field in natural language processing (NLP).",
"A couple of works have been done in changing the style of an input text and designing the output text according to some particular styles.",
"In (Rao and Tetreault, 2018), a dataset has been introduced for formality style transfer.",
"Unsupervised text style transfer has encouraged in transforming a given text without parallel data (Shen et al., 2017; Carlson et al., 2017; Fu et al., 2018; Li et al., 2018; Niu and Bansal, 2018).",
"Overall our system is novel as it is motivated by the need for inducing specific behavior and style in an existing NLG systems (neu-ral, or template-based) as a means of post editing, by simultaneously being emotionally and contextually consistent.",
"We have successfully demonstrated this behavior through empirical analysis for a specific application of customer care.",
"Given the Conversation History (previous few exchanges in the dialog), and the Generic Response, the task is to generate the Courteous Response.",
"The architectural diagram of our proposed model is in Figure 1. 3.1 Conversational History Representation The conversation history C is a sequence of utterances ( u 1 , u 2 , . . . , u D ) and each utterance u d is a sequence of words w 1 , w 2 , . . . , w N which are represented by their embeddings.",
"For encoding the emotional states associated with these utterances, we use the output distribution from Deep-Moji (Felbo et al., 2017) which is pre-trained on the emoji prediction task.",
"Let the utterance u d be a sequence of sentences s 1 , s 2 , . . . , s N , where the n th sentence has an emotional embedding e n,d .",
"Then the emotional representation of the utterance is: e d [ i ] = max n e n,d [ i ] (1) The first bi-directional layer over any utterance u d yields the hidden states h 11 d , h 12 d , . . . , h 1 Nd , where N is the word length of the utterance.",
"The final representation of any utterance r d is given by the concatenation of the emotional representation as well as the last hidden state of the Bi-directional Long Short Term Memory (Bi-LSTM) (Hochreiter and Schmidhuber, 1997) encoder.",
"The second hierarchical layer Bi-LSTM encodes the utterance representations r 1 , r 2 , . . . , r D as hidden states h 21 , h 22 , . . . , h 2 D .",
"The last hidden state h 2 D is the representative of the conversational history, and is renamed as the conversational context vector c .",
"Another single layer unidirectional LSTM network encodes the generic response word embedding sequence to obtain the encoder hidden states h i .",
"At the decoder time step t , the decoder LSTM state s t is used to calculate the attention distribution over the encoder states t :",
"This attention distribution helps to identify the relevant encoder states necessary for the current decoding step.",
"The representation of the encoder for this time step is an attention weighted sum of its states, called the context vector: h t = (cid:88) i ti h i (5) The LSTM state s t is updated using s t 1 , the previous time step's context vector h t 1 , word embedding of the previously generated word w emb ( y t 1 ) , and the conversation context vector c .",
"To aid the copying of words from the generic response while generating the courteous response, we use the mechanism similar to (See et al., 2017).",
"For the pointer generator network, the model computes two distributions, one over the Figure 1: Architectural Diagram of the Proposed Model.",
"encoder words ( t ) and one over the vocabulary ( p vocab ).",
"The trade-off between the two distributions is computed dynamically in the form of the generation probability p gen [0 , 1] from the context vector h t , the decoder state s t , the decoder input x t , and conversational context vector c : p gen = ( w Th h t + w Ts s t + w Tx x t + w Tc c + b gen ) (8) where vectors w h , w s , w x , w c and scalar b gen are trainable parameters and is the sigmoid function.",
"The final distribution over the union of the vocabulary words and the words of the generic response is calculated by: P ( w ) = p gen p vocab ( w ) + (1 p gen ) (cid:88) i : w i = w ti (9) 3.5 Model training We use the joint reinforcement learning (RL) and machine learning (ML) training as used in (Paulus et al., 2017).",
"If y = { y 1 , y 2 , . . . , y n (cid:48) } is the gold output tokens for given generic response tokens x 1 and conversation history x 2 , the maximum-likelihood objective using teacher forcing is given by: LML = n (cid:48) (cid:88) t =1 log p ( y t | y 1 , . . . , y t 1 , x 1 , x 2 ) (10) Along with training with the maximum likelihood error, we also use reinforcement learning to learn from maximizing discrete metrics that are task specific (which we design as the rewards).",
"We use the self-critical policy gradient algorithm proposed in (Rennie et al., 2017).",
"Here the REINFORCE (Williams, 1992) algorithm is baselined with the reward obtained by the inference time algorithm (which performs greedy decoding), without the need for training a critic network for estimating value functions.",
"During training, two output sequences are produced: y s , obtained by sampling p ( y st | y s 1 , . . . , y st 1 , x ) probability distribution, and y g , the baseline output, obtained by greedily maximizing the output probability distribution at each time step.",
"It is the weighted mean of the two terms:",
"(i) BLEU metric m 1 : Ensures the content matching between the reference and the decoded outputs.",
"(ii) Emotional accuracy m 2 : Measured by the cosine similarity of the emoji distributions of the gold and generated responses (using pretrained DeepMoji).",
"It ensures that the emotional states of the generated courteous behavior is consistent with the gold.",
"We first pre-train using the maximum likelihood (ML) objective (Eq. 10) and then using a mixed objective function with a reduced learning rate: L mixed = L RL + (1 ) LML , (13) 3.6 Baselines We develop the following models: 1. Model-1 : This is a Seq2Seq model with attention (Luong et al., 2015) and decoder conditioned on the conversational context vector c (without concatenating emotional embedding i.e. instead of Eq. 2, r d = h 1 Nd ) 2. Model-2 : This model is developed using Model-1 along with the copying mechanism of Pointer Generator Network.",
"3. Model-3 : This model is developed using Model-2 along with emotional embeddings in the conversational context vector as in E.g., 2. Train Valid Test # Conversation 140203 20032 40065 # Utterances 179034 25642 51238 Table 2: Dataset Statistics 4 Dataset In this section we describe the details of the dataset that we create for our experiments.",
"We use the data of the interactions between customers and professional customer care agents of companies on their Twitter handles.",
"We source the requisite Twitter data from the dataset made available on Kaggle by Thought vector 1 .",
"Tweets have 1 https://www.kaggle.com/thoughtvector/customer-support-on-twitter labels of company names, anonymized user ids, time stamps, and response tweet ids essential for reconstructing conversations, and nuanced analyses.",
"We filter out conversations having multiple responses to a single tweet, and those starting by a tweet by a company.",
"This was done to ensure correct conversation flow and to acquire suggestion / complaint based exchanges, respectively.",
"As there exists no dataset with generic and courteous versions of utterances we create our own dataset.",
"We prepare responses of generic styles by filtering out courteous sentences, phrases and expressions from the actual responses.",
"We presume actual responses as the courteous form of response.",
"Tweet by the Customer Care professional : Oh no that's not good.",
"I can help!",
"What is happening with your internet?",
"We use this conversation to prepare the courteous and the generic response 1. Courteous response : Oh no that's not good.",
"I can help!",
"What is happening with your internet?",
"2. Generic response : What is happening with your internet?",
"As we want to filter out courteous phrases / sentences from a given customer care tweet, we segment the tweet into sentences.",
"Purely courteous (and non-informative) sentences must be removed, purely informative sentences must be retained, and informative sentences with courteous expressions must be transformed (to remove only the courteous part from the sentence).",
"We define these three forms of sentences as:",
"(i) Courteous sentences : Sentences which do not contain any information/ suggestions, and are purely non-informative.",
"These may include personalized greetings and expression of appreciation, apology, empathy, assurance, or enthusiasm.",
"Example: Sorry to hear about the trouble!",
"(ii) Informative sentences without courteous expressions : These sentences contain the actual content of the tweet and are generally assertions, instructions, imperatives or suggestions.",
"Example: Simply visit url name to see availability in that area!",
"(iii) Hybrid-Informative sentences with courteous expressions : These are the sentences of the second type also containing some expressions of the first type.",
"Example: We appreciate the feedback, we'll pass this along to the appropriate team.",
"We annotate sentences in isolation by grouping similar sentences together to speed up annotations and then reconstruct the generic sentences by postprocessing rules.",
"We follow the following procedure to prepare the dataset for each company separately: 1. Sentence segmentation : We first extract the tweets from customer care agents.",
"Each tweet is segmented into sentences to eventually identify three forms of the sentences.",
"2. Clustering : As expressions and sentences used by professionals of a company often follow similar patterns.",
"Grouping similar sentences together before annotation would therefore significantly make the process faster.",
"The vector-semantic representations of sentences are obtained using the sentence encoder(Conneau et al., 2017) trained on the SNLI corpus(Bowman et al., 2015).",
"We use the K-Means clustering(Aggarwal and Zhai, 2012)(k = 300) to cluster these sentences.",
"3. Annotations : Three annotators proficient in the English language were assigned to annotate the sentences into the three categories: purely courteous, purely informative, hybrid .",
"For sentences having both informative and courteous clauses/expressions (hybrid), they were asked to manually prepare the generic sentence by removing the courteous part.",
"Also they were asked to identify non English conversations (and filter them).",
"We observe the multi-rater Kappa agreement ratio of approximately 80%, which may be considered as reliable.",
"4. Preparing generic responses : Now let us assume we have a courteous response S with n sentences s 1 , s 2 , . . . , s n .",
"We obtain the generic response by removing the courteous sentences, retaining the informative sentences, and replacing the hybrid sentences with the prepared generic equivalents.",
"We divide the conversation into train, validation and test sets as given in Table 2. Each training example is of the form: conversational history (last three utterances), generic response and courteous response.",
"Implementation Details: We use a vocabulary of size 30k for the task (as the range of courteous expressions is limited, and informative contents can be copied even if they are out-of-vocabulary-OOV).",
"We use 256 dimensional hidden states and 128 dimensional word embeddings (not pre-trained).",
"We use AdaGrad as the optimizer with gradient clipping (magnitude 2).",
"We train with batches of size 16, and use the same size for beam search decoding.",
"We monitor smoothened running loss on the validation set for early stopping and finding the best models for decoding.",
"We use = 0.99 (similar to (Paulus et al., 2017)) for the joint loss.",
"For the reward function the values of 1 and 2 are 0.75 and 0.25, respectively.",
"Automatic Evaluation: For automatic evaluation, in addition to the standard metrics like BLEU (Papineni et al., 2002), ROUGE (Lin, 2004) and perplexity, we also use two task-specific metrics: 1. Content preservation (CP): We want to measure how much of the informative content from the original generic response( X ) is reflected in the generated courteous response( Y ).",
"We use a measure similar to ROUGE-L recall.",
"where LCS is the longest common subsequence.",
"2. Emotional accuracy (EA): To measure the consonance between the generated courteous expressions (source of emotion) and the gold, we find the cosine similarity between the MojiTalk emoji distributions of the two responses ( X e and Y e ).",
"EA = X e Y e / ( | X e || Y e | ) (15) Human Evaluation: In order to understand the quality of the responses we adopt human evaluation to compare the performance of different models.",
"We randomly sample 500 responses from the test set for human evaluation.",
"Given a generic response along with conversation history, three human annotators with post-graduate exposure were assigned to evaluate the courteous responses generated by the different models for the three metrics: 1. Fluency (F): The courteous response is grammatically correct and is free of any errors.",
"2. Content Adequacy (CA): The generated response contains the information present in the generic form of the response and there is no loss of information while adding the courteous part to the responses.",
"3. Courtesy Appropriateness (CoA): The courtesy part added to the generic responses is in accordance to the conversation history.",
"The scoring scheme for fluency and content adequacy is 0: incorrect or incomplete, 1: moderately correct, 2: correct, whereas for courtesy appropriateness the scoring scheme is -1: inappropriate, 0: non-courteous, 1: appropriate, respectively.",
"We computed the Fleiss' kappa (Fleiss, 1971) for the above metrics to measure inter-rater consistency.",
"The kappa score for fluency is 0.75 and courtesy appropriateness is 0.77 indicating sub-stantial agreement and the score is 0.67 for content adequacy denoting considerable agreement.",
"Automatic evaluation results: Results of the different models are presented in Table 3. The proposed model performs significantly better than the other baselines for all the evaluation metrics and the improvement in each model is statistically sig-nificant compared to the other models.",
"2 .",
"The attention based sequence to sequence model (Model 1) is a decent baseline with good scores (56.80 BLEU).",
"The Pointer generator model (Model 2) is aided by the copying mechanism.",
"Thus, it is better modeled to include portions of the content from the generic response into the courteous response.",
"This is corroborated by the increased score in CP (+9.33%).",
"Its emotional accuracy is slightly reduced from Model 1 (-0.45%), probably because of eagerness to copy rather than generate.",
"The advantage of the emotional embedding in Model 3 over Model 2 is reflected with the increased scores(+3.77%), because of its ability to better understand the emotional states and generate more appropriate courteous responses.",
"The perplexity values are slightly reduced in Model 3 and Model 4, apparently because of the emotion embedding confusing the actual content from the conversation history.",
"The final model performs decently better than other models.",
"The reinforcement learning objective helps it to improve upon the desired metrics rather than just learn to be accurate at the token 2 we perform statistical significance tests (Welch, 1947) and it is conducted at 5% (0.05) significance level Model BLEU ROUGE PPL CP EA 1 2 L 1 Seq2Seq 56.80 63.8 59.06 64.52 58.21 68.34 82.43 2 Seq2Seq + P 66.11 69.92 64.85 66.40 42.91 77.67 81.98 3 Seq2Seq + P + EE 68.16 72.18 67.92 71.17 43.52 76.05 85.75 4 Proposed Model 69.22 73.56 69.92 72.37 43.77 77.56 86.87 Table 3: Results of various Models; P: Pointer Generator Model; EE: Emotional embedding Model F CA CoA 0 1 2 0 1 2 -1 0 1 Model 1 15.70 42.50 41.80 16.21 41.69 42.10 23.71 51.08 25.21 Model 2 14.23 42.77 43.00 15.62 39.65 44.73 22.05 39.43 38.52 Model 3 11.15 44.10 44.75 13.66 41.12 45.22 15.23 41.22 43.55 Our Model 10.05 44.90 44.60 13.85 38.48 47.67 14.11 41.11 44.78 Table 4: Human evaluation results for Fluency, Content Adequacy and Courtesy Appropriateness (All values are in percentages.) level.",
"Human evaluation results: In Table 4, we present the results of human evaluation.",
"In case of fluency, our proposed model and the third model show similar performance, whereas Models 1 and 2 are relatively less fluent.",
"Model 2 shows great improvement with respect to Model 1 as it is able to copy the content from the input.",
"Also, for content adequacy our proposed model has been able to generate 38.48% moderate replies that have adequate amount of information in it while it generates around 47.67% correct responses that contain all the information present in the input.",
"For courtesy appropriateness, Model 1 and Model 2 show lower performance while our proposed model has been able to capture the courteous behavior.",
"As score 1 is given to the responses that are courteous as well as the nature of courteousness is in accordance to the conversation, it can be seen that our model achieves 44.78% performance level which is higher than the other models.",
"From this evaluation, we can infer that the responses generated by our model are not only adequate in terms of information preservation, but also able to induce the courteous behavior by making the responses interesting and informative.",
"Error Analysis: We further analyse the outputs generated from our proposed model to perform a detailed qualitative analysis of the responses.",
"In Table 5, we present few examples of the responses generated by the different models given the generic input.",
"Some common forms of mistakes include: 1. Unknown Tokens: As Model 1 does not have the copying mechanism, the number of unknown Generic Input Model 1 Model 2 Model 3 Our Model dm us more info and well take a look into it for you we'll look into it im sorry to hear this please dm us more info and we'll take a look into it for you were here to help please dm us more info and well take a look into it for you were here to help please dm us more info and well take a look into it for you at the earliest adjust the brightness via your display settings on your device whos the brightness via your display settings on your device were here to help adjust the brightness via your display settings on your device we have several ways to change the display brightness on your device and were happy to help thanks for reaching out we have several ways to change the display brightness on your device and were happy to help we'll follow up with the store we'd like to help well follow up were here to help well follow up with the store sorry to hear that well follow up with the store thats disappointing to hear, we'll follow up with the store can you confirm which platform you are using for video access ?",
"tokens is predicted the most in this.",
"Also often the model predicts end of sequence' token just after the out of vocabulary' token, thus leaving sequences incomplete.",
"2. Wrong copying: Sometimes pointer network makes mistakes while copying (being influenced by language model): Gold:",
"..",
"which store in gillingham did you visit ?",
"; Predicted:",
"..",
"which store in belgium did you visit ?",
"3. Mistakes in emotion identification: These mistakes are more prominent in Models 1 and 2 (they don't have emotional embeddings), where the generated courteous phrases denote mistakes in identifying the emotional state of the customer.",
"For example, Gold: you're very welcome, hope the kids have an amazing halloween !",
"; Predicted: we apologize for the inconvenience.",
"hope the kids have an amazing halloween !",
"4. Extra information: Models 1, 2, 3 sometimes generate extra informative sentences than in the generic response: Gold: please send us a dm ; Predicted: please send us a dm please let us know if you did not receive it 5. Contextually wrong courteous phrases: These mistakes are common across models while generating courteous phrases with content in them: Gold: we want to help, reply by dm and",
"..",
"; Predicted: im sorry you havent received it.",
"please reply by dm and",
"..",
"6. Difference in phrases: Generated responses differ from reference responses in their use of (equivalent) courteous phrases, and are hence wrongly penalized by some metrics.",
"courteous behavior.",
"Incorporation of courteousness is important for attaining user satisfaction and to improve the performance of the application leading to user retention.",
"We successfully prepare a large benchmark corpus, created from the actual showcasing of courteous behavior by human professionals on Twitter.",
"Our developed models appropriately model the dialogue history and are informed of the past emotional states through emotional embeddings.",
"We have used both automatic and human based metrics for evaluating the performance of our model.",
"In qualitative and quantitative analyses of the generated responses, we observe contextually correct courteous behavior and content preservation, along with minor inaccuracies as discussed in the error analysis section.",
"Overall the performance of our model shows the variations in responses with the other models keeping the information and courtesy nature of the generated responses intact.",
"In future, along with the opportunity of extending the architectural designs and training methodologies to enhance the performance of our systems, we look forward to designing a specific component to enhance the natural language generation component of an end to end chatbot, by including appropriate mechanisms to interact with all its components (memory, database, and the dialog manager).",
"Moreover, studies will be conducted on courtesy transfer for the other domains, and also transfer learning from one domain to the another (like customer care to hospitality).",
"Asif Ekbal acknowledges the Young Faculty Research Fellowship (YFRF), supported by Visves-varaya PhD scheme for Electronics and IT, Ministry",
"Ministry of Electronics and Information Technology (MeitY), Government of India, being implemented by Digital India Corporation (formerly Media Lab Asia)."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"abstain",
"method",
"abstain",
"method",
"objective",
"method",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"result",
"result",
"objective",
"abstain",
"other",
"other"
] |
[
"Open-domain dialogue generation has gained increasing attention in Natural Language Processing.",
"Its evaluation requires a holistic means.",
"Human ratings are deemed as the gold standard.",
"As human evaluation is inefficient and costly, an automated substitute is highly desirable.",
"In this paper, we propose holistic evaluation metrics that capture different aspects of open-domain dialogues.",
"Our metrics consist of (1) GPT-2 based context coherence between sentences in a dialogue, (2) GPT-2 based fluency in phrasing, (3) n -gram based diversity in responses to augmented queries, and (4) textual-entailment-inference based logical self-consistency.",
"The empirical validity of our metrics is demonstrated by strong correlations with human judgments.",
"We open source the code and relevant materials.",
"1 1 Introduction Learning to communicate is a key capacity of intelligent agents.",
"Research on enabling a machine to have meaningful and natural conversations with humans plays a fundamental role in developing artificial general intelligence, as can be seen in the formulation of Turing test (Turing, 1950).",
"Recently open-domain or non-task-oriented dialogue systems have attracted a surge of research interest (Bessho et al., 2012; Sordoni et al., 2015; Shang et al., 2015; Vinyals and Le, 2015; Serban et al., 2016, 2017; Ghazvininejad et al., 2018).",
"Evaluating models of open-domain dialogue generation in an efficient manner poses a significant challenge in developing dialogue systems.",
"The prevalent method of open-domain dialogue evaluation is human-based rating with a given rubric.",
"When various variations in the model and sets of hyper-parameters are needed, the labor-intensive human evaluation is deemed impracticable.",
"This key drawback may hinder the research progress and render the human evaluation approach not scalable.",
"Previous automatic evaluation metrics generally focus on the quality of the dialogue generation: context coherence and fluency.",
"Word-overlap metrics (Papineni et al., 2002; Banerjee and Lavie, 2005; Lin, 2004) or ad-hoc classifiers (Tao et al., 2018; Ghazarian et al., 2019) are designed for measuring the quality.",
"In open-domain dialogue, the relation between two utterances is more critical as shown in the first example of Table 1.",
"Compared with the previous two approaches, a language model, trained on an enormous amount of text, can naturally capture coherence among both words and utterances.",
"On the other hand, a good evaluation metric should not only measure the quality of generation, but also the diversity of generation, which is especially important for open-ended tasks like dialogue or story generation (Hashimoto et al., 2019).",
"Some n -gram based metrics have been utilized to measure diversity (Mou et al., 2016; Serban et al., 2017).",
"However, this metric might be improper for diversity evaluation since the generated utterances given various queries provided by the benchmark are generally diverse.",
"In our experiments, we observe constantly high diversity in terms of human ratings and n -gram based entropy when evaluating the generated responses directly.",
"In addition to the three aforementioned metrics, logical self-consistency is also a key aspect of dialogue models (Zhang et al., 2018).",
"An dialogue example with logical contradiction is displayed in the second example of Table 1.",
"Welleck et al. (2019) measured logical self-consistency by transferring each sentence into a rule-based triple, (category, relation, category), with the help of human annotators.",
"We are nevertheless unaware of any reliable automatic measure of logical consistency in open-domain dialogue.",
"In this work, we propose holistic metrics that evaluate distinctive aspects of generated dialogues.",
"Specifically, we consider (1) context coherence of a dialogue: the meaningfulness of a response within the context of prior query, (2) language fluency of generated responses: the quality of phrasing relative to a human native speaker, (3) response diversity of a set of generated sentences: the variety in meaning and word choice of responses, and (4) logical self-consistency : the logical consistency of utterances from a dialogue agent.",
"Both context coherence and response fluency (quality metrics) can naturally be captured by metrics based on strong language models like GPT-2 (Radford et al., 2019).",
"Therefore, we propose to recruit and fine-tune GPT-2 as a basis of our quality metrics.",
"With regard to response diversity and logical self-consistency , we propose to measure them under augmented utterances with controlled paraphrasing.",
"We leverage two effective approaches to generate augmented utterances: word substitution and text generator with a k -best decoder.",
"Moreover, we utilize n -gram based entropy to capture response diversity and entailment based approach to capture logical self-consistency .",
"Our experiments show that the proposed metrics strongly correlate with human judgments.",
"Moreover, our augmented datasets allow for a more accurate and straightforward human annotation, significantly improving the agreement between human evaluation.",
"We release the code and relevant materials as open-source contribution to pave the way towards further research.",
"Heuristic-based metrics have been shown to align well with human judgments and widely applied in various language generation tasks.",
"For machine translation, BLEU (Papineni et al., 2002) computes n -gram precision, whereas METEOR (Banerjee and Lavie, 2005) takes into account both precision and recall.",
"For summarization, ROUGE (Lin, 2004) also considers both precision and recall by calculating F-measure.",
"These n -gram based metrics are well-suited for the generation tasks that are more source-determined or low conditional entropy such as translation, image captioning, and summarization.",
"Some dialogue studies adopted these metrics to evaluate the quality of generated conversation responses (Ritter et al., 2011; Su et al., 2018; Sordoni et al., 2015).",
"They nevertheless are not suitable for open-ended generations or high conditional entropy tasks like dialogue generation where a diverse range of generations is acceptable conditional on a query.",
"Indeed, Liu et al. (2016) conducts extensive empirical studies on these metrics (e.g., BLEU, METEOR, and ROUGE) to test their effectiveness on evaluating dialogue generation and find limited relation between these automatic metrics and human judgments.",
"The word-overlap metrics (e.g., BLEU) fail to capture the semantic similarity between model and reference responses.",
"The following works leverage the distributed representation learned in neural network models to capture semantic similarity among context, model response, and reference response.",
"Lowe et al. (2017) collect a dataset of human scores and train a hierarchical recurrent neural network (RNN) to predict human-like scores to input responses given the context, resulting in an automatic metric that has a medium level correlation with human judgments.",
"Obtaining this metric however requires a large dataset of human-annotated scores, thus rendering this approach less flexible and extensible.",
"Tao et al. (2018) proposes a referenced metric and unreferenced metric blended evaluation routine (RUBER) for open-domain dialogue systems.",
"This blended metric is a combination of two metrics.",
"A referenced metric measures the similarity between model-generated and reference responses on the basis of word-embeddings.",
"An unreferenced metric captures the relevance between the query and response.",
"It is obtained by training a neural network classifier to determine whether a response is appropriate.",
"The positive examples are the references, while the negative examples are reference responses randomly chosen from the dataset, hence avoiding the need of human-annotated data.",
"After training, the Softmax score is utilized to measure whether the generated response is coherent with the query.",
"Attempting to improve RUBER, Ghazarian et al. (2019) explores to use contextualized embeddings from BERT.",
"The BERT-based unreferenced metric improves over the word-embedding-based RUBER unreferenced metric.",
"Interestingly, they show that the combined metric has a reduced correlation with human judgments than the unreferenced metric alone.",
"Although this finding is counterintuitive, it is consistent with the characteristics of open-domain dialogue that a range of diverse responses is reasonable given a query.",
"Hence a response can be acceptable to human annotators even if it does not align well with the reference either in terms of word-overlap or semantic embedding.",
"Context Coherence.",
"One key component of dialogue response is its coherence to the query as explored in Tao et al. (2018) and Ghazvininejad et al. (2018).",
"Prior work measures the coherence based on the Softmax score of a trained binary classifier.",
"Here we explore an alternative approach based on language modeling (Bengio et al., 2003).",
"A language model can naturally capture the coherence of the response to the query without resorting to an ad-hoc classifier.",
"Language Fluency.",
"Besides coherence, a good response should be fluent.",
"Fluency is often measured by a language model (Holtzman et al., 2018; Xu et al., 2018).",
"We define the response fluency score as negative perplexity of generated responses.",
"Response Diversity.",
"In addition to quality metrics, response diversity is also critical, especially for high conditional entropy tasks like dialogue or story generation (Hashimoto et al., 2019).",
"Some n -gram based metric has been utilized to measure diversity.",
"Mou et al. (2016) and Serban et al. (2017) compute unigram entropy across all generated utterances to measure the diversity.",
"This metric might be improper for diversity since the generated utterances given various queries are generally diverse.",
"In our experiments, we observe constantly high diversity in terms of human ratings and n -gram based entropy.",
"In another perspective, the entropy computed across all generated responses is essentially measuring the marginal entropy of the responses, while our actual interest is in the conditional entropy of the responses conditional on the queries.",
"Logical Self-Consistency.",
"Similar to diversity evaluation, current benchmarks are not suitable for evaluating logical self-consistency .",
"The current dataset is well-formed making the system to generate a simple and nonredundant response, but unfortunately, there still exist logical contradictions as shown in Table 1.",
"The natural language inference (NLI) task (Williams et al., 2018) aiming to check whether the sentence is entailed or contradicted by a previous sentence is highly related to logic evaluation on open-domain dialogues.",
"Language models, which predict the next token given previous tokens, naturally capture the coherence between sentences and particularly the dialogue query and response in our case.",
"GPT-2 (Radford et al., 2019) is a large-scale pre-trained language model based on the transformer architecture (Vaswani et al., 2017).",
"It is trained on a vast amount of diverse data and demonstrates impressive text generation capabilities.",
"In order to better capture the dependence between the queries and responses, GPT-2 can be fine-tuned using the next sentence prediction task on the dialogue dataset of interest.",
"Suppose a query q contains tokens { q t : t = 1 , ..., T q } and a response r has tokens { r t : t = 1 , ..., T r } .",
"Let P denote the fine-tuned GPT-2, then the context coherence is defined as the log-likelihood of the response conditional on the the query normalized by the length of the response length: c raw ( r | q ) = 1 T r log P ( q, r ) P ( q ) = 1 T r T r (cid:88) t log P ( r t | r <t , q ) .",
"(1) Note that c raw ( r | q ) is some negative number and unbounded from below.",
"A single value is then hard to explain absolutely and can only be interpreted relative to other values.",
"Also, the unboundedness renders it prone to extreme values.",
"Hence, a normalized score is utilized instead.",
"Since the score distribution varies as a function of the dataset, the lower bound is defined as 5th percentile, denoted as c 5 th , instead of some arbitrary value.",
"Then the normalized score, c ( r | q ) , is c ( r | q ) = max ( c 5 th , c raw ( r | q )) c 5 th c 5 th (2) which ranges from 0 to 1.",
"To capture the fluency of responses, we also adopt the pretrained language model, GPT-2.",
"In particular, the raw response fluency score, f raw ( r ) , is defined as, f raw ( r ) = 1 T r T r (cid:88) t log P ( r t | r <t ) .",
"Prior work (Mou et al., 2016; Serban et al., 2017) measured diversity by computing the n -gram entropy across all generated responses, which essentially reflects the marginal entropy of the responses.",
"Diversity of the responses conditional on the query (e.g., conditional entropy) are however more of interest for dialogue models.",
"On the other hand, if we measure diversity based on responses randomly sampled from a model conditional on a single query, the response quality is generally low (Caccia et al., 2018).",
"The current work instead proposes to measure response diversity utilizing augmented datasets with controlled paraphrasing, which allows for measuring diversity among top-ranked responses conditional on paraphrased queries and hence avoiding the tradeoff or dependency between diversity and quality.",
"In other words, for a given query, we slightly tilt the corresponding element in the query-response joint space along the query dimension (achieved by paraphrasing-augmentation) and then measure the entropy of high-quality responses in the neighbourhood of the targeted query.",
"While augmenting the queries to measure the conditional entropy of responses, we need to control the diversity of the augmented queries such that the augmented ones stay in the vicinity of the targeted query.",
"Hence the goal of controlled augmentation is to minimize diversity in both meaning and word use and avoid feeding the dialogue model identical inputs.",
"To achieve so, two augmentation approaches are considered: (1) WordNet (Miller, 1998) Substitution (WS) and (2) Conditional Text Generator (CTG).",
"WordNet Substitution (WS) is a word-level manipulation method that replaces some words with synonyms defined in WordNet.",
"Different from WS, Conditional Text Generator (CTG) is used to augment queries in multi-turn dialogue.",
"It requires a generator to produce augments conditioned on the context, which is defined as the prior utterance history to the selected query.",
"For instance, suppose [ u 1 ; ... ; u t 1 ] denotes the utterance history and u t indicates the query to be augmented, then the top-K beams, { u (1) t , ..., u ( K ) t } , from the CTG model conditional on the utterance history are produced.",
"Given the target query and a set of augmented queries for it with controlled paraphrasing, { u ( k ) t : k 0 , ..., K } where u (0) t := u t , the corresponding responses are generated by the model under test.",
"Then we can calculate the n -gram entropy for samples in the set { u ( k ) t +1 : k 0 , ..., K } .",
"Logical self-consistency measures if a generated response is logically contradictory to what the agent uttered in the multi-turn history.",
"The basic idea is to apply a pretrained Multi-Genre Natural Language Inference (MNLI; Williams et al. 2018) model to label if the relation of the response and the utterance history of the same agent is logically consistent.",
"More specifically, we train a ternary classifier that takes two utterances as input and predicts the relation as either contradiction, entailment or neutral on the MNLI dataset.",
"Then we average the contradiction class probabilities of the current utterance and each prior utterance from this agent as the contradiction score.",
"In order to match the human ratings, we use 1 minus the contradiction score as the final score of logical self-consistency evaluation.",
"Moreover, we measure logical self-consistency under augmented datasets with controlled paraphrasing, using WS and CTG introduced in Section 3.3.",
"The main idea is to generate augmented multi-turn utterance history that more likely induces the dialogue system to produce contradictory responses.",
"We assume that it is more likely for the agent producing self-contradictory responses when responding to similar queries.",
"We use WS and CTG to paraphrase the query and then calculate the contradiction score of the current utterance and each prior utterance from this agent.",
"To facilitate comparison with prior work (Ghazar-ian et al., 2019), the DailyDialog dataset (Li et al., 2017) is adopted for the empirical analysis of our proposed metrics.",
"This dataset contains 13,118 high-quality multi-turn dialogue dataset.",
"The dialogue is split into a 42,000 / 3,700 / 3,900 train-test-validation partitions.",
"A sequence-to-sequence (seq2seq) model with attention (Bahdanau et al., 2014) was trained with the train and validation partitions to generate dialogue responses.",
"The implementation in OpenNMT (Klein et al., 2017) was used to train the model.",
"The seq2seq consists of a 2-layer LSTM with 500 hidden units on both the encoder and decoder.",
"The model was trained with SGD and learning rate of 1.",
"To obtain responses on a wide spectrum of quality and diversity, we sample the data with topk sampling where k = { 1 , 10 , 100 } .",
"The base GPT-2 model with 12 layers was used to compute our metrics 2 .",
"The GPT-2 model was fine-tuned on the training and validation data.",
"In fine-tuning, the queries and responses were concatenated together as a single sentence to feed into GPT-2.",
"The perplexity of the fine-tuned language model on the test dataset was 16 .",
"5 .",
"WordNet substitution and conditional text generators were used to augment diversity-controlled queries.",
"The Stanford part-of-speech (POS) tagger (Toutanova and Manning, 2000) and the WordNet by Miller (1998) were utilized to do WordNet substitution.",
"It is achieved by first using Stanford POS tagger to tag tokens in a query.",
"Then four augmented inputs are generated by substituting verbs, nouns, adjectives & adverbs, or all of the above with synonyms in WordNet.",
"As for conditional text generator, we trained an OpenNMT Transformer 2 We also experimented with the medium GPT-2 with 24 layers and found that the results were generally the same.",
"And larger models (the 36and 48-layers GPT-2) might pose computational difficulty for some researchers and thus were not considered.",
"on the training and validation splits for query augmentation, which was applied to the testing dataset to augment the query with the topK beams.",
"For response diversity , five variants are obtained, the original query and four paraphrased ones; for logical self-consistency , two variants are obtained, the original query and one paraphrase.",
"To assess the validity of our proposed metrics, we utilize Amazon Turk to collect high quality human ratings from 10 subjects.",
"For each metric, we select a set of samples to be presented to humans and each datapoint is to be rated from 1 to 5, with 1 being the worst and 5 being the best on each metric.",
"On both context coherence and response fluency , we select 200 datapoints with a diverse range of generation quality.",
"There are 200 query-response pairs to be rated for context coherence and 200 responses to be rated for response fluency .",
"For response diversity , we select 100 datapoints, totaling 500 responses, to be rated in groups of 5, all of which are conditioned on the controlled inputs generated by CTG or WS given the same context.",
"For logical self-consistency , 100 datapoints are selected independent from response diversity .",
"After Amazon Turk results are collected, we compute the Pearson and Spearman correlation between our automatic metrics and human ratings to assess the validity of our metrics.",
"We normalize the human rating scores to be in the range of 0 to 1.",
"Table 3 demonstrates the Pearson and Spearman correlations between the proposed context coherence metric and human judgments.",
"Also, the results were compared to the previous best-performing",
"au-(a) GPT-2 w/o Fine-tune",
"tomatic metric, RUBER with BERT embeddings (Ghazvininejad et al., 2018).",
"Clearly both our language model based coherence metric shows higher correlation with human judgments than the classifier-based metric, RUBER.",
"In addition, we compared the proposed metric with a similar metric based on a GPT-2 language model without fine-tuning on the target dataset.",
"The fine-tuned version improved the results, indicating that fine-tuning on the dialogue dataset enables the language model to better capture the dependency between the queries and replies.",
"Interestingly, even the metric based on the language model without fine-tuning correlated with human ratings stronger than RUBER.",
"We also examined the inter-rater reliability.",
"It is computed by holding out the ratings of one rater at a time, calculating its correlation with the average of other rater's judgments, and finally averaging over or taking the maximum of all held-out correlation scores.",
"The inter-rater reliability results also support the strong performance of our proposed context coherence metric in that the correlation between the automatic metric and human evaluation was close to the inter-rater correlations.",
"tuning on GPT-2.",
"It helps to improve the consistency between human rating and automatic metric.",
"Table 2 displays a case study.",
"Our coherence metric and the human evaluation agreed that the generated response is not coherent with the given query, while RUBER indicated that this reply is coherent.",
"This might be because RUBER simply compares the embeddings of the query and response and business travel related words in the query such as vacation , workweek and in the reply such as travel , company make RUBER judge that they are similar.",
"Our findings show that the proposed fluency metric f ( r ) is highly correlated with human judgments.",
"Table 4 summarizes the relation between our proposed fluency metric and human ratings in terms of Pearson and Spearman correlation.",
"The importance of fine-tuning GPT-2 (as outlined in Section 4.3) is evident.",
"We observe an increase from 0 .",
"43 to 0 .",
"82 in Pearson correlation and an enhancement from 0 .",
"32 to 0 .",
"81 in Spearman correlation.",
"In addition, Figure 2 details the effect of fine-tuning.",
"Notably, a correction of outliers occurs.",
"Table 5 shows the evaluation of the proposed diversity metric on the basis of the augmented datasets with WS and CTG.",
"We also include a baseline dataset which consists of responses from randomly chosen queries from the testing data.",
"Unigram, bigram, and trigram entropy are utilized to calculate responses' diversity and are compared to human ratings with Pearson and Spearman correlation.",
"It is clear that automatic evaluations with the controlled paraphrasing datasets consistently achieve higher correlation compared to those with the baseline dataset.",
"Figure 3 display correlations between normalized human ratings and corresponding n -gram entropy based on the augmented dataset.",
"Entropy values based on WS and CTG datasets demonstrate stronger relations with human ratings, compared to those based on the baseline dataset, consistent with the reported correlations.",
"Human ratings based on the paraphrasing augmented datasets show high inter-rater correlations and lower variance, indicating that raters generally agree with each other.",
"The poor baseline performance is likely due to the uncontrolled nature of input sentences such that outputs of evaluated models are generally diverse, making it difficult for humans to judge the diversity performance of the model.",
"Furthermore, our diversity metrics have correlations with human ratings close to the corresponding mean inter-rater correlations, suggesting that the diversity evaluation based on the paraphrasing-augmented data can reveal the diversity of a dialogue system consistent with humans.",
"Table 8 displays the correlations between the proposed automatic ratings and human ratings on the the paraphrasing augmented data using WS and CTG and a baseline without augmentation.",
"The automatic metric based on augmented data has a",
"stronger relation with that based on the baseline.",
"In particular, the metric based on CTG augmentation aligns with human judgments the closet.",
"Inter-rater Pearson and Spearman correlations are reported in Table 9.",
"Human ratings on the augmented data are more consistent than those on the baseline, indicating the necessity and efficiency of using a refined dataset instead of the original one.",
"We show a case study in Table 7.",
"Although the four proposed metrics are intuitively and theoretically important in evaluating a dialogue system, it is not entirely clear whether they are independent from each other such that it is necessary to measure all of them.",
"We empirically investigate their association.",
"We randomly choose 50 dialogues from the testing dataset and construct the evaluation data for the four metrics.",
"Five human evaluators rate on the four aspects of each dialogue.",
"We then examine the pairwise correlation of human ratings on the four metrics.",
"Response fluency correlates with context coherence ( r = 0 . 42 , p = 0 . 003 ).",
"This is mainly due to the fact that inarticulate responses are often considered incoherent with the context.",
"All other pair-wise correlations are non-significant ( r (cid:48) s < 0 .",
"1 , p (cid:48) s > 0 .",
"25 ) 3 .",
"Thus, the four metrics are relatively independent from each other and it is critical to take into account all of them to obtain a holistic evaluation of a dialogue model.",
"This paper provides a holistic and automatic evaluation method for open-domain dialogue models.",
"In contrast to prior art, our means of evaluation captures not only the quality of generation, but also the diversity and logical consistency of responses.",
"We recruit GPT-2 as a strong language model to evaluate the context coherency and response fluency .",
"For response diversity and logical self-consistency , we propose to measure these two aspects under augmented utterances with controlled paraphrasing.",
"We leverage two effective approaches to generate augmented utterances: word substitution and text generator with k -best decoder.",
"Moreover, we utilize n -gram based entropy to capture response diversity and entailment based approach to measure logical self-consistency .",
"The proposed metrics show a strong correlation with human judgments.",
"It is our hope the proposed holistic metrics may pave the way towards the comparability of open-domain dialogue models.",
"Wenjuan Han, Yixian Liu and Kewei Tu were supported by the National Natural Science Foundation of China (61976139)."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"objective",
"method",
"method",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"objective",
"method",
"method",
"abstain",
"objective",
"other"
] |
[
"Recent studies in deep learning have shown significant progress in named entity recognition (NER).",
"Most existing works assume clean data annotation, yet a fundamental challenge in real-world scenarios is the large amount of noise from a variety of sources (e.g., pseudo, weak, or distant annotations).",
"This work studies NER under a noisy labeled setting with calibrated confidence estimation.",
"Based on empirical observations of different training dynamics of noisy and clean labels, we propose strategies for estimating confidence scores based on local and global independence assumptions.",
"We partially marginalize out labels of low confidence with a CRF model.",
"We further propose a calibration method for confidence scores based on the structure of entity labels.",
"We integrate our approach into a self-training framework for boosting performance.",
"Experiments in general noisy settings with four languages and distantly labeled settings demonstrate the effectiveness of our method 1 .",
"Recent progress in deep learning has significantly advanced NER performances (Lample et al., 2016; Devlin et al., 2018).",
"While most existing works assume clean data annotation, real-world data inevitably involve different levels of noise (e.g., distant supervision from the dictionary (Peng et al., 2019), or weak supervision from the web Vrandecic and Krtzsch, 2014; Cao et al., 2019a).",
"Figure 1 gives an example of such noisy labels.",
"To train robust models with high performance, it is fundamentally critical to tackle the challenges associated with noisy data annotation.",
"In this work, we propose a confidence estimation approach for NER with noisy labels.",
"We motivate Equal Contribution.",
"our approach with important empirical observations of the training dynamics of clean and noisy labels: usually, clean data are easier to fit with faster convergence and smaller loss values (Jiang et al., 2018; Han et al., 2018a; Arazo et al., 2019).",
"Consequently, loss values (probabilities or scores of labels) can serve as strong indicators for the existence of noise, which we utilize to build our confidence estimation.",
"The key contribution of this work is a confidence estimation method with calibration.",
"We use probabilities of labels as confidence scores and apply two estimation strategies based on global or local normalization that assume different dependency structures about how the noisy labels are generated.",
"We further calibrate the confidence score for positive labels (labels representing entity parts, e.g., B-LOC ) based on the structure of these labels: we separately estimate scores for the position part (e.g., B in B-LOC ) and the type part (e.g., LOC in B-LOC ).",
"Such fine-grained calibration leads to a more accurate estimation and better performance in our experiments.",
"We apply our method in a CRF model (Bellare and McCallum, 2007; Yang et al., 2018), marginalize out labels we do not trust, and maximize the likelihood of trusted labels.",
"We use a self-training approach (Jie et al., 2019) that iteratively estimates confidence scores in multiple training iterations and re-annotates the data at each iteration.",
"Experiments show that our approach outperforms baselines on a general noisy-labeled setting with datasets in four languages and shows promising results on a distantly-labeled setting with four datasets.",
"Given a sentence x = [ x 1 , ..., x n ] and its tag sequence y 1 , ..., y n , n is the sentence length.",
"We model the conditional probability of y with a bidirectional LSTM-CRF (Huang et al., 2015): h = BiLSTM ( x ) i = Linear ( h i ) (1) p ( y | x ) = ( y ) /Z , Z = Forward () (2) Where h denotes LSTM states, Linear ( ) denotes a linear layer, ( y ) denotes the potential (weight) evaluated for tag sequence y , Z denotes the partition function, denotes the forward variables, and Forward ( ) denotes the Forward algorithm (Sutton and McCallum, 2006).",
"The advantage of the CRF model is that it gives us a probabilistically uniform way to handle labels we do or do not trust by partial marginalization, which we discuss later.",
"Our confidence estimation model reuses the base LSTM-CRF architecture and assigns a confidence score s i for each y i .",
"A natural choice is to use the CRF marginal probability: s i = p ( y i | x ) p ( y i | x ) = i i /Z (3) where is the backward variable and can be computed with the Backward algorithm (Sutton and McCallum, 2006).",
"This strategy infers s i based on global-normalization and assumes strong dependency between consecutive labels.",
"The intuition is that annotators are more likely to make mistakes on a label if they have already made mistakes on previous labels.",
"Our second strategy makes a stronger local independence assumption and considers a noisy label at step i only relies on the word context, not the label context.",
"To this end, we use a simple categorical distribution parameterized by a Softmax: s i = p ( y i | x ) p ( y i | x ) = Softmax ( i ) (4) Here we reuse the factor i as the logits of the Softmax because in the CRF context it also means how likely a label y i may be observed given the input h i .",
"Intuitively, this strategy assumes that annotators make mistakes solely based on words, no matter whether they have already made mistakes previously.",
"We use s i to decide if we want to trust a label y i and marginalize out labels we do not trust.",
"Our marginalization relies on a threshold to determine the portion of trusted labels and the noise ratio that we believe the data contain.",
"Given a batch of ( x, y ) pairs, after confidence estimation, we collect all word-label-confidence triples into a set D = { x j , y j , s j } Nj =1 , N denotes total number of the triples.",
"We further separate the estimation for positive labels (entities) and negative labels (i.e., the O label) because we empirically observe that their probabilities are consistently different.",
"To this end, we divide D into positive and negative groups D p = { ( x j , y j , s j ) , y j Y p } and D n = { ( x j , y j , s j ) , y j Y n } , Y p and Y n denotes sets of positive and negative labels.",
"We rank triples in D l ( l {p, n}) according to confidence scores and retain the most confident r l ( e ) |D l | triples at epoch e as clean for which we do maximum likelihood.",
"We view the remaining triples as noisy and marginalize them out.",
"We update the keep ratio r l ( e ) at each epoch following Han et al. (2018b): r l ( e ) = 1 min (cid:110) e K l , l (cid:111) , l { p , n } (5) where l is the ratio of noise that we believe in the training data.",
"Basically this says we gradually decrease the epoch-wise keep ratio r l ( e ) to the full ratio 1 l after K epochs.",
"We grid-search l heuristically in experiments (results in Figure",
"3(b)).",
"For positive cases in D p viewed as noisy according to the previous procedure, we do a further confidence calibration.",
"Noting that a y i always take the form y pi y ti (position-type) (e.g. if y i = B-LOC , Method General Noise Distant Supervision En Sp Ge Du CoNLL Tweet Webpage Wikigold 1. BiLSTM-CRF 73.3 61.9 57.7 58.3 59.5 21.8 43.3 42.9 2. BiLSTM-CRF (clean data upper bound) 90.3 85.2 77.3 81.1 91.2 52.2 52.3 54.9 3. RoBERTa (clean data upper bound) --90.1 52.2 72.4 86.4 Proposed for General Noise Setting 4. NA (Hedderich and Klakow, 2018) 61.5 57.3 46.1 41.5 --5. CBL (Mayhew et al., 2019) 82.6 76.1 65.6 68.5 75.4 18.2 31.7 42.6 6. Self-training (Jie et al., 2019) 84.0 71.4 66.5 59.6 77.8 42.3 49.6 51.3 Proposed for Distant Supervision Setting 7. AutoNER (Shang et al., 2018) --67.0 26.1 51.4 47.5 8. LRNT (Cao et al., 2019a) --69.7 23.8 47.7 46.2 9. BOND (RoBERTa Liang et al., 2020) --81.5 48.0 65.7 60.1 Ours, best configurations 10. Ours (local, ) 87.0 78.8 68.3 69.1 79.4 43.6 51.8 54.0 11. Ours (global, ) 86.4 79.0 69.2 71.2 79.2 43.1 50.0 53.0 Ours, other possible configurations 12. Ours (local, (cid:63) ) 86.2 79.2 68.2 67.2 --13. Ours (global, (cid:63) ) 85.4 75.4 68.4 69.0 --14. Ours (local, , w/o. calibration) 85.8 77.3 67.2 68.0 79.9 40.8 46.9 50.0 Ours with pretrained LM 15. Ours (local, , BERT) --77.2 46.7 59.3 57.3 16. Ours (global, , BERT) --78.9 47.3 61.9 57.7 Table 1: Results (F1%) on artificially perturbed datasets and distantly supervised datasets.",
"then y pi = B and y ti = LOC ), an important assumption is that annotators are unlikely to mistake both parts mistakes usually happen on only one of them.",
"So we calculate two calibrated confidence scores s pi and s ti for y pi and y ti : s pi = 1 | Y ( y pi ) | (cid:88) y i p ( y i | x ) where y pi = y pi (6) s ti = 1 | Y ( y ti ) | (cid:88) y i p ( y i | x ) where y ti = y ti (7) where Y ( y ti ) denotes the set of labels sharing the same y ti part, and Y ( y pi ) is defined similarily.",
"If s pi > s ti , we trust the y pi (position) part of the label and marginalize out all labels with different positions except for the O label.",
"For example, in Figure 2, for the word Brooklyn we trust the all labels with the position B ( B-PER and B-LOC ) and the O label, sum over the tag sequences passing these labels, and reject other labels.",
"Similar operation applies for cases where s pi < s t i (E.g., the word York ).",
"For labels we do not trust in the negative group D n , we simply marginalize all labels out (E.g., the word New ).",
"We maximize the partially marginalized probability (Bellare and McCallum, 2007): p ( y | x ) = (cid:88) y Y ( y ) /Z (8) where Y denotes the set of tag sequences compatible with y after confidence estimation.",
"A concrete example is given in Figure 2. The summation in equation 8 can be calculated exactly with Forward-styled dynamic programming (Sasada et al., 2016).",
"We integrate our approach into a self-training framework proposed by Jie et al. (2019).",
"At each round, the training set is randomly divided into two parts for cross-validation.",
"We iteratively reannotate half of the training set with a model trained on the other half.",
"After a round, we use the updated training set to train the next round.",
"General Noise.",
"Following Mayhew et al. (2019), we first consider general noise by artificially perturbing the CoNLL dataset (Sang and De Meulder, 2003) on four languages including English, Spanish, German, and Dutch.",
"Gold annotations are per-85 86 87 88 89 90 91 0 0.02 0.04 0.06 0.08 0.1 0.12 0.14 0.16 0.18 0.2 F 1 Noise rate Negative noise rate (local) Positive noise rate (local) Negative noise rate (global) Positive noise rate (global) Self-training (Jie et al., 2019) 00.10.20.30.40.50.60.70.80.9 0123456789 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 F 1 L o ss / NLL Epoch Negative NLL Positive NLL Training loss Negative noise detection F1 Positive noise detection F1 Dev F1 Oracle !",
"turbed by:",
"(a) tagging some entities to O to lower the recall to 0.5;",
"(b) introducing some random positive tags to lower the precision to 0.9.",
"We compare our methods with Noise Adaption (NA, Hedderich and Klakow, 2018), Self Training (Jie et al., 2019), and CBL (Mayhew et al., 2019).",
"This setting is for testing our approach in a controlled environment.",
"Distant Supervision.",
"We consider four datasets including CoNLL03 (Sang and De Meulder, 2003), Tweet (Godin et al., 2015), Webpage (Ratinov and Roth, 2009), and Wikigold (Balasuriya et al., 2009).",
"In this setting, the distantly supervised tags are generated by the dictionary following BOND (Liang et al., 2020).",
"We compare our methods with AutoNER (Shang et al., 2018), LRNT (Cao et al., 2019a), and BOND.",
"This setting aims to test our approach in a more realistic environment.",
"Table 1 shows our primary results.",
"We use local and global to denote locally / globally normalized confidence estimation strategies.",
"We use oracle (unavailable in real settings) / searched to denote how we obtain the prior noise ratio .",
"We note that the Self-training baseline (Jie et al., 2019, line 6) is the most comparable baseline since our confidence estimation is directly integrated into it.",
"We primarily compare this baseline with our best configurations (line 10 and 11).",
"We focus on the shaded results as they are the most informative for demonstrating our method.",
"General Noise.",
"Our methods (both local and global) outperforms the state-of-the-art method (Jie et al., 2019) by a large margin in three datasets (En, Sp, Du, line 10 and 11",
"v.s.",
"6), showing the effectiveness of our approach.",
"We observe the oracle does not necessarily give the best performance and an over estimate of confidence could leave a better performance.",
"Ablation results without calibration further show the effectiveness of our calibration methods (line 10",
"v.s.",
"14).",
"We note that the CoNLL dataset is an exception where the calibration slightly hurts performance.",
"Otherwise the improvements with calibration is clear in the other 7 datasets.",
"Distant Supervision.",
"Our method outperforms AutoNER and LRNT without pre-trained language models.",
"Reasons that we are worse than BOND (line 16",
"v.s.",
"6) are:",
"(a) many implementation aspects are different, and it is (currently) challenging to transplant their settings to ours;",
"(b) they use multiple tailored techniques for distantly-labeled data (e.g., the adversarial training), while ours is more general-purpose.",
"Though our method does not outperform BOND, it still outperforms AutoNER and LRNT (under the setting all without pretrained model, line 10 and 11",
"v.s.",
"7 and 8) and shows promising gain.",
"We conduct more detailed experiments on the general noise setting for more in-depth understanding.",
"Training Dynamics (Figure",
"3(a)).",
"As the model converges, as clean data converge faster, the confidence gap between the clean and the noisy is larger, thus the two are more confidently separated, so both noise detection F1 and dev F1 increase.",
"Noise Rate Search (Figure",
"3(b)).",
"Our method consistently outperforms baseline without confidence estimation.",
"Lines tend to be higher at the right side of the figure, showing an over-estimate of noise tends to give better performance.",
"Level of Noise (Figure",
"3(c)).",
"In many real-world scenarios, the noise",
"w.r.t. precision is more con-stant and it is the recall that varies.",
"So we simulate the level of noise with different recall (lower recall = larger noise ratio).",
"Our method outperforms baselines in all ratios and is particularly effective under a large noise ratio.",
"Case Studies (Figure 4).",
"The top three cases give examples of how our method detects: (1) false negative noise when an entity is not annotated, (2) entities with wrong boundaries and (3) wrong entity types.",
"The last example (case 4) gives a failure case when the model treats some correct tags as noise due to our over-estimate of noise (for better end performance).",
"State-of-the-art NER models (Ma and Hovy, 2016; Lample et al., 2016; Devlin et al., 2018) are all under the traditional assumption of clean data annotation.",
"The key motivation of this work is the intrinsic gap between the clean data assumption and noisy real-world scenarios.",
"We believe that the noisy label setting is fundamentally challenging in NER and all related supervised learning tasks.",
"Previous works on NER with noise could be organized into two threads:",
"(a) some works treat this task as learning with missing labels.",
"Bellare and McCallum (2007) propose a missing label CRF to deal with partial annotation.",
"Jie et al. (2019) propose a self-training framework with marginal CRF to re-annotate the missing labels.",
"(b) other works treat missing labels as noise and try to avoid them in the training process.",
"For example, Mayhew et al. (2019) train a binary classifier supervised by entity ratio to classify tokens into entities and nonentities.",
"A widely-used way to collect NER annotations is distant supervision, which consequently becomes an important source of noise.",
"Peng et al. (2019) formulate this task as the positive-unlabeled (PU) learning to avoid using noisy negatives.",
"AutoNER (Shang et al., 2018) trains the model by assigning ambiguous tokens with all possible labels and then maximizing the overall likelihood using a fuzzy LSTM-CRF model.",
"Cao et al. (2019b) and Yang et al. (2018) try to select high-quality sentences with less annotation errors for sequential model.",
"Liang et al. (2020) leverage pre-trained language models to improve the prediction performance of NER models under a self-training framework.",
"Our inspiration of confidence estimation comes from the so-called memorization effect observed in the computer vision (Jiang et al., 2018; Han et al., 2018a; Arazo et al., 2019).",
"It observes that neural networks usually take precedence over noisy data to fit clean data, which indicates that noisy data are more likely to have larger loss values in the early training epochs (Arpit et al., 2017).",
"In this work, we leverage it to estimate the confidence scores of labels.",
"In this work, we propose a calibrated confidence estimation approach for noisy-labeled NER.",
"We integrate our method in an LSTM-CRF model under a self-training framework.",
"Extensive experiments demonstrate the effectiveness of our approach.",
"Our method outperforms strong baseline models in a general noise setting (especially for larger noise ratios), and shows promising results in a distant supervision setting.",
"We thank all anonymous reviewers for their helpful comments.",
"This work is supported by Alibaba Group through Alibaba Research Intern Program and AZFT Joint Lab for Knowledge Engine."
] | [
"abstain",
"abstain",
"method",
"objective",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"method",
"objective",
"method",
"abstain",
"abstain",
"method",
"method",
"result",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"objective",
"abstain",
"objective",
"result",
"other",
"other"
] |
[
"Obtaining human-like performance in NLP is often argued to require compositional generalisation.",
"Whether neural networks exhibit this ability is usually studied by training models on highly compositional synthetic data.",
"However, compositionality in natural language is much more complex than the rigid, arithmetic-like version such data adheres to, and artificial compositionality tests thus do not allow us to determine how neural models deal with more realistic forms of compositionality.",
"In this work, we re-instantiate three compositionality tests from the literature and reformulate them for neural machine translation (NMT).",
"Our results highlight that:",
"i) unfavourably, models trained on more data are more compositional;",
"ii) models are sometimes less compositional than expected, but sometimes more, exemplifying that different levels of compositionality are required, and models are not always able to modulate between them correctly;",
"iii) some of the non-compositional behaviours are mistakes, whereas others reflect the natural variation in data.",
"Apart from an empirical study, our work is a call to action: we should rethink the evaluation of compositionality in neural networks and develop benchmarks using real data to evaluate compositionality on natural language, where composing meaning is not as straightforward as doing the math.",
"1 1 Introduction Although the successes of deep neural networks in natural language processing (NLP) are astounding and undeniable, they are still regularly criticised for lacking the powerful generalisation capacities that characterise human intelligence.",
"A frequently mentioned concept in such critiques is compositionality : the ability to build up the meaning of a complex expression by combining the meanings of its parts (e.g. Partee, 1984).",
"Compositionality is assumed 1 The data and code are available at https://github.",
"to play an essential role in how humans understand language, but whether neural networks also exhibit this property has since long been a topic of vivid debate (e.g. Fodor and Pylyshyn, 1988; Smolensky, 1990; Marcus, 2003; Nefdt, 2020).",
"Studies about the compositional abilities of neural networks consider almost exclusively models trained on synthetic datasets, in which compositionality can be ensured and isolated (e.g. Lake and Baroni, 2018; Hupkes et al., 2020).",
"2 In such tests, the interpretation of expressions is computed completely locally : every subpart is evaluated independently without taking into account any external context and the meaning of the whole expression is then formed by combining the meanings of its parts in a bottom-up fashion.",
"This protocol matches the type of compositionality observed in arithmetic: the meaning of (3 + 5) is always 8 , independent of the context it occurs in.",
"However, as exemplified by the sub-par performance of symbolic models that allow only strict, local protocols, compositionality in natural domains is far more intricate than this rigid, arithmetic-like variant of compositionality.",
"Natural language seems very compositional, but at the same time, it is riddled with cases that are difficult to interpret with a strictly local interpretation of compositionality.",
"Sometimes, the meaning of an expression does not derive from its parts (e.g. for idioms), but the parts themselves are used compositionally in other contexts.",
"In other cases, the meaning of an expression does depend on its parts in a compositional way, but arriving at this meaning requires a more global approach because the meanings of the parts need to be disambiguated by information from elsewhere.",
"For instance, consider the meaning of homonyms (these dates are perfect for our dish/wedding), potentially idiomatic expressions (the child kicked the bucket off the pavement), 2 Apart from Raunak et al. (2019), work on compositionality and natural' language considers highly structured subsets of language (e.g. Kim and Linzen, 2020; Keysers et al., 2019).",
"or scope ambiguities (every human likes a cat).",
"This paradoxical tension between local and global forms of compositionality inspired many debates on the compositionality of natural language.",
"Likewise, it impacts the evaluation of compositionality in NLP models.",
"On the one hand, local compositionality seems necessary for robust and reliable generalisation.",
"Yet, at the same time, global compositionality is needed to appropriately address the full complexity of language, which makes evaluating compositionality of state-of-the-art models in the wild' a complicated endeavour.",
"In this work, we face this challenge head-on.",
"We concentrate on the domain of neural machine translation (NMT), which is paradigmatically close to the tasks typically considered for compositionality tests, where the target represents the meaning' of the input.",
"3 Furthermore, MT is an important domain of NLP, for which compositional generalisation is important to produce more robust translations and train adequate models for low-resource languages (see, e.g. Chaabouni et al., 2021).",
"As an added advantage, compositionality is traditionally well studied and motivated for MT (Rosetta, 1994; Janssen and Partee, 1997; Janssen, 1998).",
"We reformulate three theoretically grounded tests from Hupkes et al. (2020): systematicity , substitutivity and overgeneralisation .",
"Since accuracy commonly used in artificial compositionality tests is not a suitable evaluation metric for MT, we base our evaluations on the extent to which models behave consistently , rather than correctly.",
"In our tests for systematicity and substitutivity, we consider whether processing is local ; in our overgeneralisation test, we consider how models treat idioms that are assumed to require global processing.",
"Our results indicate that models often do not behave compositionally under the local interpretation, but exhibit behaviour that is too local in other cases.",
"In other words, models have the ability to process phrases both locally and globally but do not always correctly modulate between them.",
"We further show that some inconsistencies reflect variation in natural language, whereas others are true compositional mistakes , exemplifying the need for both local and global compositionality as well as illustrating the need for tests that encompass them both.",
"With our study, we contribute to ongoing questions about the compositional abilities of neural networks, and we provide nuance to the nature of this question when natural language is concerned: 3 E.g. SCAN's inputs are instructions (walk twice) with executions as outputs (walk walk) (Lake and Baroni, 2018).",
"how local should the compositionality of models for natural language actually be?",
"Aside from an empirical study, our work is also a call to action: we should rethink the evaluation of compositionality in neural networks and develop benchmarks using real data to evaluate compositionality on natural language, where composing meaning is not as straightforward as doing the math.",
"Tests for compositional generalisation in neural networks typically assume an arithmetic-like version of compositionality, in which meaning can be computed bottom up.",
"The compositions require only local information they are context independent and unambiguous: walk twice after jump thrice (a fragment from SCAN by Lake and Baroni, 2018) is evaluated similarly to (2 + 1) (4 5) .",
"In MT, this type of compositionality would imply that a change in a word or phrase should affect only the translation of that word or phrase, or at most the smallest constituent it is a part of.",
"For instance, the translation of the girl should not change depending on the verb phrase that follows it, and in the translation of a conjunction of two sentences, making a change in the first conjunct should not change the translation of the second.",
"While translating in such a local way seems robust and productive, it is not always realistic e.g. consider the translation of dates in She hated bananas and she liked dates.",
"In linguistics and philosophy of language, the level of compositionality has been widely discussed, which led to a variety of definitions.",
"One of the most well-known ones is from Partee (1984): The meaning of a compound expression is a function of the meanings of its parts and of the way they are syntactically combined. 4 This definition hardly places restrictions on the relationship between expressions and their parts.",
"The type of function that relates them is unspeci-fied and could take into account the global syntactic structure or external arguments, and the parts' meanings can depend on global information.",
"Par-tee's definition is therefore called weak , global , or open compositionality (Szabo, 2012; Garca-Ramrez, 2019).",
"When, instead, the meaning of a compound depends only on the meanings of its largest parts, regardless of their internal structure (similar to arithmetic), that is strong , local or closed 4 This straightforwardly extends to translation, by replacing meaning with translation (Rosetta, 1994).",
"compositionality (Jacobson, 2002; Szabo, 2012).",
"Under the local interpretation, natural language can hardly be considered compositional many frequent phenomena such as homonyms, idioms and scope ambiguities cannot be resolved locally (Pagin and Westerstahl, 2010; Pavlick and Callison-Burch, 2016).",
"The global interpretation handles such cases straightforwardly but does not match up with many a person's intuitions about the compositionality of language.",
"After all, how useful is compositionality if composing the meanings of parts requires the entire rest of the sentence?",
"This paradox inspired debates on the compositionality of natural language and is also highly relevant in the context of evaluating compositionality in neural models.",
"Previous compositionality tests (6) considered only the local interpretation of compositionality, but to what extent is that relevant given the type of compositionality actually required to model natural language?",
"Here, we aim to open up the discussion about what it means for computational models of language to be compositional by considering properties that require composing meaning locally as well as globally and evaluating them in models trained on unadapted natural language corpora.",
"We focus on English-Dutch translation, for which we can ensure good command for both languages.",
"We train Transformer-base models (Vaswani et al., 2017) using Fairseq (Ott et al., 2019).",
"Our training data consists of a collection of MT corpora bundled in OPUS (Tiedemann and Thottingal, 2020), of which we use the English-Dutch subset provided by Tiedemann (2020), which contains 69M sentence pairs.",
"5 To examine the impact of the amount 5 Visit the Tatoeba challenge for the OPUS training data.",
"of training data a dimension that is relevant because compositionality is hypothesised to be more important when resources are scarcer we train one setup using the full dataset, one using 18 of the data ( medium ), and one using one million source-target pairs in the small setup.",
"For each setup, we train models with five seeds and average the results.",
"To evaluate our trained models, we adopt FLORES -101 (Goyal et al., 2021), which contains 3001 sentences from Wikinews, Wikijunior and WikiVoyage, translated by professional translators, split across three subsets.",
"We train the models until convergence on the dev' set.",
"Afterwards, we compute SacreBLEU scores on the devtest' set (Post, 2018), using beam search (beam size = 5), yielding scores of 20 .",
"6 .",
"4 , 24 .",
"4 .",
"3 and 25 .",
"8 .",
"1 for the small, medium and full datasets, respectively.",
"6 3.2 Evaluation data While all our models are trained on fully natural data, for evaluation we use different types of data: synthetic, semi-natural and natural data.",
"Synthetic data For our synthetic evaluation data, we consider the data generated by Lakretz et al. (2019), previously used to probe for hierarchical structure in neural language models.",
"This data consist of sentences with a fixed syntactic structure and diverse lexical material.",
"We extend the vocabulary and the templates used to generate the data and generate 3000 sentences for each of the resulting 10 templates (see Table 1a).",
"Semi-natural data In the synthetic data, we have full control over the sentence structure and lexical items, but the sentences are shorter (9 tokens vs 16 in OPUS) and simpler than typical in NMT data.",
"To obtain more complex yet plausible test sentences, we employ a data-driven approach 6 All training details are listed in Appendix E. 4156 synthetic semi-n.",
"to generate semi-natural data.",
"Using the tree substitution grammar Double DOP (Van Cranenburgh et al., 2016), we obtain noun and verb phrases (NP, VP) whose structures frequently occur in OPUS.",
"We then embed these NPs and VPs in ten synthetic templates with 3000 samples each (see Table 1b).",
"See Appendix A for details on the data generation.",
"Natural data Lastly, we extract natural data directly from OPUS, as detailed in the subsections of the individual tests (4).",
"In our experiments, we consider systematicity (4.1) and substitutivity (4.2), to test for local compositionality, and idiom translation to probe for a more global type of processing (4.3).",
"One of the most commonly tested properties of compositional generalisation is systematicity the ability to understand novel combinations made up from known components (most famously, Lake and Baroni, 2018).",
"In natural data, the number of potential recombinations to consider is infinite.",
"We chose to focus on recombinations in two sentence-level context-free rules: S NP VP and S S CONJ S .",
"Test design The first setup, S NP VP , concerns recombinations of noun and verb phrases.",
"We extract translations for input sentences from the templates from 3.2, as well as versions of them with the (1) noun (NP NP') or (2) verb phrase (VP VP') adapted.",
"In (1), a noun from the NP in the subject position is replaced with a different noun while preserving number agreement with the VP.",
"In (2), a noun in the VP is replaced.",
"NP NP' is applied to both synthetic and semi-natural data; VP VP' only to synthetic data.",
"We use 500 samples per template per condition per data type.",
"The second setup, S S CONJ S , involves phrases concatenated using and, and tests whether the translation of the second sentence is dependent on the first sentence.",
"We concatenate two sentences (S 1 and S 2 ) from different templates, and we consider again two different conditions.",
"First, in condition S 1 S (cid:48) 1 , we make a minimal change to S 1 yielding S (cid:48) 1 by changing the noun in its verb phrase.",
"In S 1 S 3 , instead, we replace S 1 with a sentence S 3 that is sampled from a template different from S 1 .",
"We compare the translation of S 2 in all conditions.",
"For consistency, the first conjunct is always sampled from the synthetic data templates.",
"The second conjunct is sampled from synthetic data, semi-natural data, or from natural sentences sampled from OPUS with similar lengths and word-frequencies as the semi-natural inputs.",
"We use 500 samples per template per condition per data type.",
"Figure 2 provides an illustration of the different setups experimented with.",
"Evaluation In artificial domains, systematicity is evaluated by leaving out combinations of known components' from the training data and using them for testing purposes.",
"The necessary familiarity of the components (the fact that they are known') is ensured by high training accuracies, and systematicity is quantified by measuring the test set accu-4157 synthetic semi natural natural 0.0 0.2 0.4 0.6 0.8 1.0 c o n s i s t e n c y f l a u t i s t m o u s t a c h e f l a u t i s t m o u s t a c h e f l a u t i s t t h e a t r e l a d y b i r d m o u s t a c h e f l a u t i s t m o u s t a c h e f l a u t i s t m o u s t a c h e a u b e r g i n e m o u s t a c h e f l a u t i s t s h o pp i n g t r o ll e y f l a u t i s t m o u s t a c h e training size small medium full",
"racy.",
"If the training data is a natural corpus and the model is evaluated with a measure like BLEU in MT, this strategy is not available.",
"We observe that being systematic requires being consistent in the interpretation assigned to a (sub)expression across contexts, both in artificial and natural domains.",
"Here, we, therefore, focus on consistency rather than accuracy, allowing us to employ a model-driven approach that evaluates the model's systematicity as the consistency of the translations when presenting words or phrases in multiple contexts.",
"We measure consistency as the equality of two translations after accounting for anticipated changes.",
"For instance, in the S NP VP setup, two translations are consistent if they differ in one word only, after accounting for determiner changes in Dutch (de vs het).",
"In the evaluation of S S CONJ S , we measure the consistency of the translations of the second conjunct.",
"Figure 1 shows the results for the S NP VP and S S CONJ S setups (numbers available in Appendix B).",
"The average performance for the natural data closely resembles the performance on semi-natural data, suggesting that the increased degree of control did not severely impact the results obtained using this generated data.",
"7 In general, the consistency scores are low, illustrating that models are prone to changing their translation of a (sub)sentence after small (unrelated) adaptations to the input.",
"It hardly matters whether that change occurs in the sentence itself ( S NP VP ), or in the other conjunct ( S S CONJ S ), suggesting that the processing of the models is not local as assumed in strong compositionality.",
"Models trained on more data seem more locally compositional, a somewhat contradictory solution to achieving compositional-7 In our manual analysis (5), however, we did observe a slightly different distribution of changes between these setups.",
"ity, which, after all, is assumed to underlie the ability to generalise usage from few examples (Lake et al., 2019).",
"This trend is also at odds with the hypothesis that inconsistencies are a consequence of the natural variation of language, which models trained on more data are expected to better capture.",
"Under a local interpretation of the principle of compositionality, synonym substitutions should be meaning-preserving: substituting a constituent in a complex expression with a synonym should not alter the complex expression's meaning, or, in the case of MT, its translation.",
"Here, we test to what extent models' translations abide by this principle, by performing the substitutivity test from Hupkes et al. (2020), that measures whether the outputs remain consistent after synonym substitution.",
"To find synonyms source terms that translate into the same target terms we exploit the fact that OPUS contains texts both in British and American English.",
"Therefore, it contains synonymous terms that are spelt different e.g. doughnut / donut and synonymous terms with a very different form e.g. aubergine / eggplant.",
"We use 20 synonym pairs in total (see Figure 3b).",
"Test design Per synonym pair, we select natural data from OPUS in which the terms appear and perform synonym substitutions.",
"Thus, each sample has two sentences, one with the British and one with the American English term.",
"We also insert the synonyms into the synthetic and semi-natural data using 500 samples per synonym pair per template, through subordinate clauses that modify a noun e.g. the king that eats the doughnut .",
"In Appendix C, Table 6, we list all clauses used.",
"Evaluation Like systematicity, we evaluate substitutivity using the consistency score, expressing whether the model translations for a sample are identical.",
"We report both the full sentence consistency and the consistency of the synonyms' translations only, excluding the context.",
"Cases in which the model omits the synonym from both translations are labelled as consistent if the rest of the translation is the same for both input sequences.",
"In Figure 3a, we summarise all substitutivity consistency scores (tables are in Appendix C).",
"We observe trends similar to the systematicity results: models trained on larger training sets perform better and synthetic data yields more consistent translations compared to (semi-)natural data.",
"We further observe large variations across synonyms, for which we further detail the performance aggregated across experimental setups in Figure 3b.",
"The three lowest scoring synonyms flautist, aubergine and ladybug are among the least frequent synonyms (see Appendix C), which stresses the importance of frequency for the model to pick up on synonymy.",
"In Figure 3b, we show both the regular consistency and the consistency of the synonym translations, illustrating that a substantial part of the inconsistencies are due to varying translations of the context rather than the synonym itself, stressing again the non-local processing of the models.",
"In our final test, we focus on exceptions to compositional rules.",
"In natural language, typical exceptions that constitute a challenge for local compositionality are idioms .",
"For instance, the idiom raining cats and dogs should be treated globally to arrive at its meaning of heavy rainfall.",
"A local approach would yield an overly literal, non-sensical translation (het regent katten en honden).",
"When a model's translation is too local, we follow Hupkes et al. (2020) in saying that it overgeneralises , or, in other words, it applies a general rule to an expression that is an exception to this rule.",
"Overgeneralisation indicates that a language learner has internalised the general rule (e.g. Penke, 2012).",
"We select 20 English idioms for which an accurate Dutch translation differs from the literal translation from the English MAGPIE corpus (Haagsma et al., 2020).",
"Because acquisition of idioms is dependent on their frequency in the corpus, we use idioms 1 40 80 120 160 epoch 0.0 0.2 0.4 0.6 0.8 1.0 o v e r g e n e r a li s a t i o n small 1 10 20 30 40 50 epoch medium 1 10 20 30 epoch full",
"with at least 200 occurrences in OPUS based on exact matches, for which over 80% of the target translations does not contain a literal translation.",
"Test design Per idiom, we extract natural sentences containing the idiom from OPUS.",
"For the synthetic and semi-natural data types, we insert the idiom in 500 samples per idiom per template, by attaching a subordinate clause to a noun e.g. the king that said I knew the formula by heart ' .",
"The clauses used can be found in Appendix D, Table 7. Evaluation Per idiom, we assess how often a model overgeneralises and how often it translates the idiom globally.",
"To do so, we identify keywords that indicate that a translation is translated locally (literal) instead of globally (idiomatic).",
"If the key-words' literal translations are present, the translation is labelled as an overgeneralised translation.",
"For instance, for by heart, the presence of hart (heart) suggests a literal translation.",
"An adequate paraphrase would say uit het hoofd (from the head).",
"See Appendix D, Table 7, for the full list of keywords.",
"We evaluate overgeneralisation for ten intermediate training checkpoints.",
"In Figure 4, we report our results.",
"8 For all evaluation data types and all training set sizes, three phases can be identified.",
"Initially, the translations do not contain the idiom's keyword, not because the idiom's meaning is paraphrased in the translation, but because the translations consist of high-frequency words in the target language only.",
"Afterwards, overgeneralisation peaks: the model emits a very literal translation of the idiom.",
"Finally, the model starts to memorise the idiom's translation.",
"This is in accordance with results from Hupkes et al. (2020), and earlier results presented in the past tense debate by among others Rumelhart and McClelland (1986).",
"Although the height of the overgeneralisation peak is similar across evaluation data types and training set sizes, overgeneralisation is more prominent in converged models trained on smaller datasets than it is in models trained on the full corpus.",
"9 In addition to training dataset size, the type of evaluation data used also matters: there is more overgeneralisation for synthetic and semi-natural data compared to natural data, stressing the impact of the context in which an idiom is embedded.",
"The extreme case of a context unsupportive of an idiomatic interpretation is a sequence of random words.",
"To evaluate the hypothesis that this yields local translations, we surround the idioms with ten random words.",
"The results (Appendix D, Table 7) indicate that, indeed, when the context provides no support at all for a global interpretation, the model provides a local translation for nearly all idioms.",
"Overall, the results of this test provide an interesting contrast with our substitutivity and systematicity results: where in those tests, we saw processing that was less local than we expected, here, the behaviour shown by the models is instead not global enough .",
"Our systematicity and substitutivity results demonstrate that models are not behaving compositional according to a strict definition of compositionality.",
"However, we ourselves have argued that strict compositionality is not always appropriate to handle natural language.",
"A reasonable question to ask is thus: are the inconsistencies we marked as non-compositional actually incorrect?",
"Annotation setup To address this question, we perform a manual analysis.",
"We annotate 900 inconsistent translation pairs of the systematicity and substitutivity tests to establish whether the inconsistencies are benign or concerning.",
"We consider four different types of changes: 1. cases of rephrasing , where both translations are equally (in)correct; 2. changes reflecting different interpretations of source ambiguities ; 3. cases in which one of the two translations contains an error ; 4. formatting (mostly punctuation) changes.",
"For substitutivity samples, we also annotate whether the changes are related to the translation of the synonym, where we distinguish cases where i.",
"one of the synonym translations is incorrect; ii.",
"both are incorrect but in a different manner; iii.",
"both are correct but translated differently; iv.",
"one synonym remains untranslated.",
"We annotate all changes observed per pair and report the relative frequency per class.",
"We summarise the results, aggregated over different training set sizes and the three data types, in Figure 5. For a more elaborate analysis and a breakdown per model and data type, we refer to Appendix F. Results In the systematicity test, 40% of the marked inconsistencies reflects wrongfully translated parts in one of the two sentences, whereas 38% contains examples of rephrasing, 16% reflects ambiguities in the source sentences and 6% is caused by formatting differences.",
"For substitutivity, most inconsistencies are similar to the ones observed in systematicity: only 24% involves the synonyms' translations, where one of them being untranslated was the most frequent category.",
"The distribution of these types of inconsistencies differ strongly per training data type.",
"For models trained on less data, inconsistencies are more likely to represent errors, whereas models trained on more data rephrase more often.",
"This result emphasises that 4160 for lower-resource settings, being compositional is particularly relevant.",
"Another demonstration of this relevance comes from the observation that although models can emit correct translations for nearly all synonyms, 10 they do not always do so, depending on the context.",
"To give a peculiar example: in The child admires the king that eats the { doughnut, donut } , the snack was occasionally translated as ezel (donkey).",
"Robustness and predictability Finally, we would like to stress that while rephrasing often might seem benign rather than concerning from the perspective of emitting adequate translations, its harmlessness still deserves some thought.",
"There is a fine line between rephrasing and mistranslating: whether the single largest business establishment is referred to as de grootste (the largest) or de enige grootste (the only largest) may make or break a translation.",
"Furthermore, if changes are unrelated to the contextual change (e.g. replacing soccer with football), this can be undesirable from a robustness and reliability perspective.",
"This point becomes even more pronounced in cases where both translations are correct but have a different meaning.",
"To analyse the extent to which inconsistencies are actually unmotivated, we investigated if we could trace them back to the contextual change, in particular focusing on whether changing synonyms from British to American spelling or vice versa might trigger a change in style or tone.",
"We could not find evidence of such motivations, indicating that even correct cases of rephrasing were not caused by contextual changes that were necessary to take into account.",
"In previous work, a variety of artificial tasks have been proposed to evaluate compositional generalisation using non-i.i.d. test sets that are designed to assess a specific characteristic of compositional behaviour.",
"Examples are systematicity (Lake and Baroni, 2018; Bastings et al., 2018; Hupkes et al., 2020), substitutivity (Mul and Zuidema, 2019; Hupkes et al., 2020), localism (Hupkes et al., 2020; Saphra and Lopez, 2020), productivity (Lake and Baroni, 2018) or overgeneralisation (Korrel et al., 2019; Hupkes et al., 2020; Dankers et al., 2021).",
"Generally, neural models struggle to generalise in such evaluation setups.",
"There are also studies that consider compositional generalisation on more natural data.",
"Such studies typically focus on either MT (Lake and Baroni, 2018; Raunak et al., 2019; Li et al., 2021) or semantic parsing (Finegan-Dollak et al., 2018; Keysers et al., 2019; Kim and Linzen, 2020; Shaw et al., 2021).",
"Most of these studies consider small and highly controlled subsets of natural language.",
"Instead, we focus on models trained on fully natural MT datasets, which we believe to be the setup for compositionality evaluation that does most jus-tice to the complexity of natural language: contrary to semantic parsing, where the outputs are structures created by expert annotators, in translation both inputs and outputs are fully-fledged natural language sentences.",
"To the best of our knowledge, the only attempt to explicitly measure compositional generalisation of NMT models trained on large natural MT corpora is the study presented by Raunak et al. (2019).",
"They measure productivity generalisation to longer sentence lengths of an LSTM-based NMT model trained on a full-size, natural MT dataset.",
"Other studies using NMT, instead, consider toy datasets generated via tem-plating (Lake and Baroni, 2018) or focus on short sentences excluding more complex constructions that contribute to the complexity of natural language for compositional generalisation, such as polysemous words or metaphors (Li et al., 2021).",
"Whether neural networks can generalise compositionally is often studied using artificial tasks that assume strictly local interpretations of compositionality.",
"We argued that such interpretations exclude large parts of language and that to move towards human-like productive usage of language, tests are needed that assess how compositional models trained on natural data are.",
"11 We laid out reformulations of three compositional generalisation tests systematicity, substitutivity and overgeneralisation for NMT models trained on natural corpora, and assessed models trained on different amounts of data.",
"Our work provides an empirical contribution but also highlights vital hurdles to overcome when considering what it means for models of natural language to be compositional.",
"Below, we reflect on these hurdles and our results.",
"The proxy-to-meaning problem Compositionality is a property of the mapping between the form and meaning of an expression.",
"Since translation is a meaning-preserving mapping from form in one language to form in another, it is an attractive task to evaluate compositionality: the translation of its sentence can be seen as a proxy to its meaning.",
"However, while expressions are assumed to have only one meaning, translation is a many-to-many mapping: the same sentence can have multiple correct translations.",
"This does not only complicate evaluation MT systems are typically evaluated with BLEU because accuracy is not a suitable option it also raises questions about how compositional the desired behaviour of an MT model should be.",
"On the one hand, one could argue that for optimal generalisation, robustness, and accountability, we like models to behave systematically and consistently: we expect the translations of expressions to be independent of unrelated contextual changes that do not affect their meaning (e.g. swapping out a synonym in a nearby sentence).",
"Additionally, model performance could be improved if small changes do not introduce errors in unrelated parts of the translation.",
"On the other hand, non-compositional behaviour is not always incorrect it is one of the main arguments in our plead to test compositionality in the wild' and we observe that indeed, not all non-compositional changes alter the correctness of the resulting translations.",
"Changing a translation from atleet (athlete) to sporter (sportsman) based on an unrelated word somewhat far away may not be (locally) compositional, but is it a problem?",
"And how do we separate such harmful' mistakes from helpful ones?",
"The locality problem Inextricably linked to the proxy-to-meaning problem is the locality problem.",
"In our tests we see that small, local source changes elicit global changes in translations .",
"For instance, in our systematicity tests, changing one noun in a sentence elicited changes in the translation of a sentence that it was conjoined with.",
"In our substitutivity test, even synonyms that merely differed in spelling (e.g. doughnut and donut) elicited changes to the remainder of the sentence.",
"This counters the idea of compositionality as a means of productively reusing language: if a phrase's translation depends on (unrelated) context that is not in its direct vicinity, this suggests that more evidence is required to acquire the translation of this phrase.",
"Tests involving synthetic data present the models with sentences in which maximally local behaviour is possible, and we argue that it is, therefore, also desirable.",
"Our experiments show that even in such setups, models do not translate in a local fashion: with varying degrees of correctness, they frequently change their translation when we slightly adapt the input.",
"On the one hand, this well-known volatility (see also Fadaee and Monz, 2020) might be essential for coping with ambiguities for which meanings are context-dependent.",
"On the other hand, our manual analysis shows that the observed non-compositional behaviour does not reflect the incorporation of necessary contextual information and that oftentimes it is even altering the correctness of the translations.",
"Furthermore, this erratic behaviour highlights a lack of default reasoning, which can, in some cases, be problematic or even harmful, especially if faithfulness (Parthasarathi et al., 2021) or consistency is important.",
"In linguistics, it has been discussed how to extend the syntax and semantics such that problem cases' can be a part of a compositional language (Westerstahl, 2002; Pagin and Westerstahl, 2010).",
"In such formalisations, global information is used to disambiguate the problem cases, while other parts of the language are still treated locally.",
"In our models, global behaviour appears in situations where a local treatment would be perfectly suitable and where there is no clear evidence for ambiguity.",
"We follow Baggio (2021) in suggesting that we should learn from strategies employed by humans, who can assign compositional interpretations to expressions but can for some inputs also derive non-compositional meanings.",
"For human-like linguistic generalisation, it is vital to investigate how models can represent both these types of processing, providing a locally compositional treatment when possible and deviating from that when needed.",
"Conclusion In conclusion, with this work, we contribute to the question of how compositional models trained on natural data are, and we argue that MT is a suitable and relevant testing ground to ask this question.",
"Focusing on the balance between local and global forms of compositionality, we formulate three different compositionality tests and discuss the issues and considerations that come up when considering compositionality in the context of natural data.",
"Our tests indicate that models show both local and global processing, but not necessarily for the right samples.",
"Furthermore, they underscore the difficulty of separating helpful and harmful types of non-compositionality, stressing the need to rethink the evaluation of compositionality using natural language, where composing meaning is not as straightforward as doing the math.",
"We thank Sebastian Riedel, Douwe Kiela, Thomas Wolf, Khalil Sima'an, Marzieh Fadaee, Marco Baroni, Brenden Lake and Adina Williams for providing feedback on this draft and our work in several different stages of it.",
"We thank Michiel van der Meer for contributing to the initial experiments that led to this paper.",
"A special thanks goes to Angela Fan, who assisted us at several points to get the ins and outs of training large MT models and double-checked several steps of our pipeline and to our ARR reviewers, who provided amazingly high quality feedback.",
"VD is supported by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI (grant EP/S022481/1) and the University of Edinburgh."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"other",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"result",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"Hierarchical Text Classification (HTC) is a challenging task that categorizes a textual description within a taxonomic hierarchy.",
"Most of the existing methods focus on modeling the text.",
"Recently, researchers attempt to model the class representations with some resources (e.g., external dictionaries).",
"However, the concept shared among classes which is a kind of domain-specific and fine-grained information has been ignored in previous work.",
"In this paper, we propose a novel concept-based label embedding method that can explicitly represent the concept and model the sharing mechanism among classes for the hierarchical text classification.",
"Experimental results on two widely used datasets prove that the proposed model outperforms several state-of-the-art methods.",
"We release our complementary resources (concepts and definitions of classes) for these two datasets to benefit the research on HTC.",
"Text classification is a classical Natural Language Processing (NLP) task.",
"In the real world, the text classification is usually cast as a hierarchical text classification (HTC) problem, such as patent collection (Tikk et al., 2005), web content collection (Du-mais and Chen, 2000) and medical record coding (Cao et al., 2020).",
"In these scenarios, the HTC task aims to categorize a textual description within a set of labels that are organized in a structured class hierarchy (Silla and Freitas, 2011).",
"Lots of researchers devote their effort to investigate this challenging problem.",
"They have proposed various HTC solutions, which are usually categorized into flat (Aly et al., 2019), local (Xu and Geng, 2019), global (Qiu et al., 2011) and combined approaches (Wehrmann et al., 2018).",
"simply represented as one-hot vectors (Zhu and Bain, 2017; Wehrmann et al., 2018).",
"Actually, the one-hot vectors act as IDs without any semantic information.",
"How to describe a class is also worthy of discussion.",
"There is some work that embeds labels into a vector space which contains more semantic information.",
"Compared with one-hot representations, label embeddings have advantages in capturing domain-specific information and importing external knowledge.",
"In the field of text classification (includes the HTC task), researchers propose several forms of label embeddings to encode different kinds of information, such as 1) anchor points (Du et al., 2019), 2) compatibility between labels and words (Wang et al., 2018; Huang et al., 2019; Tang et al., 2015), 3) taxonomic hierarchy (Cao et al., 2020; Zhou et al., 2020) and 4) external knowledge (Rivas Rojas et al., 2020).",
"Although the external knowledge has been proven effective for HTC, it comes from a dictionary or knowledge base that humans constructed for entity definition, and it doesn't focus on the class explanations of a certain HTC task.",
"In this sense, external knowledge is a type of domain-independent information.",
"The taxonomic hierarchy encoding can capture the structural information of classes, which is a sort of domain-specific information for HTC.",
"However, actually it only models the hypernym-hyponym relations in the class hierarchy.",
"The process is implicit and difficult to be interpreted.",
"Besides the structural connections between classes, we find that the information of concept shared between adjacent levels of classes is ignored by previous work.",
"For instance, there is a parent node named Sports in a concrete class hierarchy (Qiu et al., 2011).",
"Its subclasses Surfing and Swimming are water related sports.",
"The subclasses Basketball and Football are ball related sports.",
"The water and ball are a type of abstract concept included in the parent class Sports and can be shared by the subclasses.",
"As shown in Figure 1, we have a similar observation in WOS (Kowsari et al., 2017), which is a widely used public dataset (details in our experiments).",
"The concept design of the parent class Computer Science is shared by the child classes Soft engineering and Algorithm design.",
"The concept distributed is shared by Network security and Distributed computing.",
"The concept information can help to group the classes and measure the correlation intensity between parent and child classes.",
"Compared with the information of node connections in the class hierarchy, the concept is more semantic and fine-grained, but rarely investigated.",
"Although Qiu et al. (2011) have noticed the concept in HTC, they define the concept in a latent way and the process of represent learning is also implicit.",
"Additionally, few of previous work investigates how to extract the concepts or model the sharing interactions among class nodes.",
"To further exploit the information of concept for HTC, we propose a novel concept-based label embedding method which can explicitly represent the concepts and model the sharing mechanism among classes.",
"More specifically, we first construct a hierarchical attention-based framework which is proved to be effective by Wehrmann et al. (2018) and Huang et al. (2019).",
"There is one concept-based classifier for each level.",
"The prior level classification result (i.e. predicted soft label embedding) is fed into the next level.",
"A label embedding attention mechanism is utilized to measure the compatibility between texts and classes.",
"Then we design a concept sharing module in our model.",
"It firstly extracts the concepts explicitly in the corpus and represents them in the form of embeddings.",
"Inspired by the CapsNet (Sabour et al., 2017), we employ the dynamic routing mechanism.",
"The iterative routing helps to share the information from the lower level to the higher level with the agreement in CapsNet.",
"Taking into account the characters of HTC, we modify the dynamic routing mechanism for modeling the concepts sharing interactions among classes.",
"In detail, we calculate the agreement between concepts and classes.",
"An external knowledge source is taken as an initial reference of the child classes.",
"Different from the full connections in CapsNet, we build routing only between the class and its own child classes to utilize the structured class hierarchy of HTC.",
"Then the routing coefficients are iteratively refined by measuring the agreement between the parent class concepts embeddings and the child class embeddings.",
"In this way, the module models the concept sharing process and outputs a novel label representation which is constructed by the concepts of parent classes.",
"Finally, our hierarchical network adopts such label embeddings to represent the input document with an attention mechanism and makes a classification.",
"In summary, our major contributions include: This paper investigates the concept in HTC problem, which is a type of domain-specific information ignored by previous work.",
"We summarize several kinds of existing label embeddings and propose a novel label representation: concept-based label embedding.",
"We propose a hierarchical network to extract the concepts and model the sharing process via a modified dynamic routing algorithm.",
"To our best knowledge, this is the first work that explores the concepts of the HTC problem in an explicit and interpretable way.",
"The experimental results on two widely used datasets empirically demonstrate the effective performance of the proposed model.",
"We complement the public datasets WOS (Kowsari et al., 2017) and DBpedia (Sinha et al., 2018) by exacting the hierarchy concept and annotating the classes with the definitions from Wikipedia.",
"We release these complementary resources and the code of the proposed model for further use by the community 1 .",
"In this section, we detailedly introduce our model CLED (Figure 2).",
"It is designed for hierarchical text classification with C oncept-based L abel 1 https://github.com/wxpkanon/ CLEDforHTC.git Figure 2: Illustration of our Concept-based Label Embedding via Dynamic routing (CLED) for HTC.",
"E mbeddings via a modified D ynamic routing mechanism.",
"Firstly, we construct a hierarchical attention-based framework.",
"Then a concept sharing module is designed for extracting concepts and modeling the sharing mechanism among classes.",
"The module learns a novel label representation with concepts.",
"Finally, the model takes the concept-based label embeddings to categorize a textual description.",
"In recent years, the hierarchical neural network has been proven effective for the HTC task by much work (Sinha et al., 2018; Wehrmann et al., 2018; Huang et al., 2019).",
"We adopt it as the framework of our model.",
"Text Encoder We first map each document d = ( w 1 , w 2 , ..., w | d | ) into a low dimensional word embedding space and denote it as X = ( x 1 , x 2 , ..., x | d | ) .",
"A CNN layer is used for extracting n-gram features.",
"Then a bidirectional GRU layer extracts contextual features and represents the document as S = ( s 1 , s 2 , ..., s | d | ) .",
"Label Embedding Attention To measure the compatibility between labels and texts, we adopt the label embedding attention mechanism.",
"Given a structured class hierarchy, we denote the label embeddings of the i -th level as C = ( c 1 , c 2 , ..., c | l i | ) , where | l i | is the number of classes in the i -th level.",
"Then we calculate the cosine similarity matrix G R | d || l i | between words and labels via g kj = ( s (cid:62) k c j ) / ( (cid:107) s k (cid:107) (cid:107) c j (cid:107) ) for the i -th level.",
"Inspired by Wang et al. (2018) and Wang et al. (2019), we adopt convolutional filters F to measure the correlations r p between the p -th phrase of length 2 k + 1 and the classes at i -th level, r p = ReLU ( F G p k : p + k + b ) , where b R | l i | .",
"We denote the largest correlation value of the p th phrase with regard to the labels of i -th level as t p = max-pooling ( r p ) .",
"Then we get the label-to-text attention score R | d | by normalizing t R | d | with the SoftMax function.",
"Finally, the document representation d att can be obtained by averaging the word embeddings, weighted by label-to-text attention score: d att = (cid:80) | d | k k s k .",
"Most of researchers focus on measuring the correlations of classes by modeling the structured class hierarchy.",
"In fact, they only get the information of graphic connections.",
"By contrast, the concepts are more semantic, fine-grained and interpretable, but have been ignored.",
"To further exploit the concepts, we design a concept module to explicitly model the mechanism of sharing concepts among classes and measure the intensity of interactions.",
"Concepts Encoder Given the corpus of class c , we extract the keywords from the documents and take top-n ranked keywords as the concepts of class Algorithm 1 Pseudo Code of Concepts Sharing via Dynamic Routing Input: all the classes c and their concepts e in level l ; all the classes in level ( l + 1) Output: c CL j : the concept-based label embedding of the class in level ( l + 1); 1: for each concept i of a class c in level l and each of its child class j in level ( l + 1): b ij 0 ; 2: for r iterations do 3: for each concept i of class c in level l : i softmax( b i ) ; (cid:46) softmax computes Eq.",
"c .",
"In the WOS dataset, every document is already annotated with several keywords.",
"So we rank the keywords by term frequency within each class.",
"For the DBpedia dataset, there is no annotated keyword available.",
"We carry out the Chi-square ( 2 ) statistical test, which has been widely accepted as a statistical hypothesis test to evaluate the dependency between words and classes (Barnard, 1992; Palomino et al., 2009; Kuang and Davison, 2017).",
"The words are ranked by the 2 values.",
"Having extracted concepts for each class, we represent them with word embeddings.",
"To further encode the concepts, we exploit two different ways and make a comparison in experiments.",
"A simple and efficient way is to feed the concept embeddings into the sharing networks directly.",
"Alternatively, we try the k-means clustering algorithm (Hartigan and Wong, 1979) in consideration of the similarity between concepts, then get the embeddings of cluster centers.",
"The outputs (word embeddings or cluster centers) of concepts encoder are denoted as E c = ( e 1 , e 2 , ..., e n ) for class c .",
"Concepts Sharing via Dynamic Routing For the HTC task, we find that there are concepts of parent classes shared by their child classes.",
"The semantically related classes share some concepts in common.",
"The concepts describe a class in different views.",
"We adopt the dynamic routing mechanism in the CapsNet (Sabour et al., 2017), which is effective for sharing the information from lower levels to higher levels.",
"Considering the characters of HTC, we modify it to explicitly model the interactions among classes and quantitatively measure the intensity.",
"To utilize the taxonomic hierarchy, we build routing only between the class and its own child classes, which is different from the full connections in CapsNet.",
"We take the coupling coefficients between concepts of a parent class and all its child classes as the intensities of the sharing interactions.",
"The intensity (coupling coefficient) ij sums to 1 and is determined by a routing softmax.",
"The logit b ij is the log prior probability that concept i of a parent class should be shared to its child class j in level l n .",
"ij = exp( b ij ) (cid:80) | l n | k exp( b ik ) (1) The logit b ij is iteratively refined by adding with the agreement.",
"b ij b ij + e i c CL j (2) The agreement is the scalar product between the concept embedding e i and the concept-based label embedding (CL) of the child class c CL j .",
"The v j is the intermediate label embedding of the child class, which is generated by weighting over all the concepts of its parent class.",
"Finally, we get the concept-based label embedding for class c j by modeling the sharing mechanism.",
"The new generated label embedding c CL j is constructed with several concepts e i in different views and affected in different intensities ij .",
"Compared with randomly initializing c CL j , an external knowledge source is taken as an initial reference which is more effective in experiments.",
"The procedures are illustrated in Algorithm 1.",
"where W o , b o , W m , b m are learnable parameters and [; ] is the vector concatenating operator.",
"The d EK att and d CL att are document representations weighted respectively by the label-to-text attention scores via external knowledge (EK) initialized label embeddings and concepts-based label embeddings (CL).",
"To utilize the predictions in the ( i -1)-th level, we feed the document represent d PRE att into the i -th level classifier.",
"d PRE att is weighted by the attention scores of the predicted soft label embedding c P .",
"d PRE att = (cid:80) | d | k k s k , where k = ( s (cid:62) k c P ) / ( (cid:107) s k (cid:107) (cid:13) (cid:13) c P (cid:13)(cid:13) ) , c P = (cid:80) | l i 1 | j y l i 1 j c EK j and c EK j is the label embedding represented by averaging word embeddings of class definition in external knowledge (EK encoder in Figure 2).",
"We calculate the loss of classifier in i -th level as follows: L l i = 1 NN (cid:88) n =1 CE( y l i n , y l i n ) (7) where y l i n is the one-hot vector of ground truth label in the i -th level for document n and CE( , ) is the cross entropy between two probability vectors.",
"We optimize the model parameters by minimize the overall loss function: L = H (cid:88) i =1 L l i (8) where H is the total number of levels in the structured class hierarchy.",
"We evaluate our model on two widely used hierarchical text classification datasets: Web of Science (WOS; Kowsari et al. (2017)) and DBpedia (Sinha et al., 2018).",
"The former includes published papers available from the Web of Science (Reuters, 2012).",
"The latter is curated by Sinha et al. (2018) from DBpedia 2 .",
"The general information of datasets 2 https://wiki.dbpedia.org/ WOS DBpedia # Classes in level 1 7 9 # Classes in level 2 134 70 # Classes in level 3 NA 219 # Documents 46,985 342,782 Train 28,479 278,408 Val 3,000 30,000 Test 15,506 34,374 Table 1: Statistics of WOS and DBpedia is shown in Table 1.",
"We complement these two datasets by extracting the hierarchy concepts and annotating the classes with the definitions from Wikipedia 3 .",
"As the state-of-the-art methods do, we take the accuracy of each level and the overall accuracy as metrics.",
"Hyper-parameters are tuned on a validation set by grid search.",
"We take Stanford's publicly available GloVe 300-dimensional embeddings trained on 42 billion tokens from Common Crawl (Pen-nington et al., 2014) as initialization for word embeddings.",
"The number of filters in CNN is 128 and the region size is { 2, 3 } .",
"The number of hidden units in bi-GRU is 150.",
"We set the maximum length of token inputs as 512.",
"The rate of dropout is 0.5.",
"The number of routing iterations is",
"3. We compare two different inputs of the sharing networks: 1) top 30 ranked concepts of each parent class as inputs; 2) 40 cluster centers generated by the k-means clustering algorithm on 1k concepts for each parent class.",
"We train the parameters by the Adam Optimizer (Kingma and Ba, 2014) with an initial learning rate of 1e-3 and a batch size of 128.",
"HDLTex Kowsari et al. (2017) prove that the hierarchical deep learning networks outperform the conventional approaches (Nave Bayes or SVM).",
"HNATC Sinha et al. (2018) propose a Hierarchical Neural Attention-based Text Classifier.",
"They build one classifier for each level and concatenate the predicted category embedding at ( i -1)-th level with each of the encoder's outputs to calculate attention scores for i -th level.",
"HARNN Huang et al. (2019) propose a model called Hierarchical Attention-based Recurrent Neural Network with one classifier for each class level.",
"They focus on modeling the dependencies among class levels and the text-label compatibility.",
"A-PNC-B Rivas Rojas et al. (2020) define the HTC as a sequence-to-sequence problem and propose a synthetic task of bottom-up-classification.",
"They represent classes with external dictionaries.",
"Their best combined strategy is Auxiliary task + Parent Node Conditioning (PNC) + Beam search.",
"HiAGM Zhou et al. (2020) propose a hierarchy-aware global model.",
"They employ Tree-LSTM and hierarchy-GCN as the hierarchy encoder.",
"Text feature Propagation (TP) and Label Attention (LA) are utilized for measuring the label-word compatibility.",
"There are four HiAGM variants: TP-LSTM, TP-GCN, LA-LSTM, and LA-GCN.",
"To illustrate the practical significance of our proposed model, we make comparisons with several competitive state-of-the-art methods.",
"The results of experiments conducted on the public datasets are shown in Table",
"2. Most of the state-of-the-art methods referred to in Section 3.3 adopt a hierarchical attention-based network as their models' framework.",
"Within their models, the hierarchical framework is effective in utilizing the classification results of the previous levels for the next levels.",
"The label embedding attention mechanism helps to import external knowledge sources and the taxonomic hierarchy.",
"On both of the two datasets, the state-of-the-art methods obtain competitive performance.",
"With a similar framework, our model focuses on the concept-based label embedding and outperforms the other methods on both level and overall accuracy.",
"The results indicate the effectiveness of the concepts among classes which have been ignored by previous work.",
"The concept-based label embedding models related classes by the sharing mechanism with common concepts (visualiza-tions in Section 3.6).",
"The ablation comparisons are shown in Section 3.5.",
"The experimental results of the two variants of our model are also shown in Table",
"2. Compared with directly feeding the concepts into the sharing networks (CLED), the variant CLED cluster performs slightly better.",
"It indicates that cluster centers generated by the k-means algorithm are more informative and effective.",
"To investigate the effectiveness of different parts in our model, we carry out ablation studies.",
"The experiment results are shown in Table",
"3. Effectiveness of Concept-based Label Embedding By comparing the results of CLED and the model without the learnt concept-based label embedding (w/o CL), we further confirm that the concepts shared among classes help to improve the performance.",
"Effectiveness of Dynamic Routing We remove the dynamic routing networks from the model CLED.",
"Because there is no dynamic routing to share the concepts from the parent classes to their Model WOS DBpedia l 1 l 2 Overall l 1 l 2 l 3 Overall CLED 93.40 85.69 84.36 99.41 97.30 95.53 95.28 w/o CL 93.35 85.36 84.10 99.40 97.22 95.40 95.15 w/o EK 93.27 85.29 84.04 99.39 97.23 95.47 95.19 w/o PRE 93.34 85.33 84.03 99.39 97.18 95.35 95.05 w/o reference in CSM 93.30 85.45 84.17 99.40 97.18 95.45 95.15 w/o DR 93.29 85.41 84.23 99.36 97.23 95.38 95.12 Table 3: Ablation studies for different parts in our model.",
"child classes, it is an intuitive way to represent the label embeddings by averaging the word embeddings of the child classes' concepts.",
"Specifically, there are top-30 ranked concepts for each parent class to share with their child classes.",
"So for the model without dynamic routing (w/o DR), we represent the child class label embedding with the top-30 ranked concepts of each child class.",
"Although the concepts of child classes are more fine-grained and informative than the concepts of parent classes, the model CLED with the dynamic routing networks to share the concepts among classes performs better.",
"It indicates that modeling the sharing mechanism and learning to represent the child classes with common concepts are more effective.",
"Effectiveness of External Knowledge We take an external knowledge source as the initial reference of child classes in the concepts sharing module.",
"When we remove the reference (w/o reference in CSM), the results are slightly worse on accuracy.",
"It demonstrates that the external knowledge makes an efficient reference for the concept sharing.",
"Similar to the state-of-the-art methods, the external knowledge is also used individually as the representation of each class in our model.",
"It helps to measure the compatibility between labels and texts via the attention mechanism.",
"When we fully remove the external knowledge and initialize the label embeddings randomly (w/o EK), the performances are slightly worse than that with external knowledge (CLED).",
"It indicates the effectiveness of external knowledge.",
"Besides, the experiment which removes the predicted soft label embedding (w/o PRE) proves that, it is effective to utilize the predictions of previous level.",
"In this paper, we explicitly investigate the concept sharing process.",
"A concept sharing module is designed to model the mechanism of sharing concepts among classes and measure the intensity of interactions.",
"The heat map of the learnt dynamic routing scores between the concepts of class Computer Science and its child classes is illustrated in Figure",
"3. The color changes from white to blue while the score increases.",
"The score indicates the intensity between the concept and class in the sharing process.",
"In Figure 3, we find that the concept design is shared by the classes Soft engineering and Algorithm design.",
"The concept distributed is shared by the classes Network security and Distributed computing.",
"The concept is shared by related classes.",
"We use t-SNE (Van der Maaten and Hinton, 2008) to visualize the concept embeddings of class Computer Science and the concept-based label embeddings of its child classes on a 2D map in Figure",
"4. The label embedding (red triangle) is constructed with the embeddings of concepts (blue dot).",
"As shown, the class Software engineering is surrounded by the concepts optimization and design.",
"Network security is surrounded by cloud, machine and security.",
"The class is described by several concepts in different views.",
"The visualizations in Figure 3 and 4 indicate that we successfully model the concept sharing mechanism in a semantic and explicit way.",
"Hierarchical text classification with label embeddings Recently, researchers try to adopt the label embeddings in the hierarchical text classification task.",
"Huang et al. (2019) propose hierarchical attention-based recurrent neural network (HARNN) by adopting label embeddings.",
"Mao et al. (2019) propose to learn a label assignment policy via deep reinforcement learning with label embeddings.",
"Peng et al. (2019) propose hierarchical taxonomy-aware and attentional graph RCNNs with label embeddings.",
"Rivas Rojas et al. (2020) Figure 3: Dynamic routing scores between the concepts of class Computer Science (Y-axis) and its child classes (X-axis).",
"define the HTC task as a sequence-to-sequence problem.",
"Their label embedding is defined by external knowledge.",
"For modeling label dependencies, Zhou et al. (2020) formulate the hierarchy as a directed graph and introduce hierarchy-aware structure encoders.",
"Cao et al. (2020) and Chen et al. (2020a) exploit the hyperbolic representation for labels by encoding the taxonomic hierarchy.",
"Hierarchical text classification besides label embeddings According to the motivation of this work, we separate previous work with label embeddings from the HTC task and present it in the above paragraph.",
"Besides, existing work is usually categorized into flat, local and global approaches (Silla and Freitas, 2011).",
"The flat classification approach completely ignores the class hierarchy and only predicts classes at the leaf nodes (Aly et al., 2019).",
"The local classification approaches could be grouped as a local classifier per node (LCN), a local classifier per parent node (LCPN) and a local classifier per level (LCL).",
"The LCN approach train one binary classifier for each node of the hierarchy (Fagni and Sebastiani, 2007).",
"Banerjee et al. (2019) apply transfer learning in LCN by fine-tuning the parent classifier for the child class.",
"For the LCPN, a multi-class classifier for each parent node is trained to distinguish between its child nodes (Wu et al., 2005; Dumais and Chen, 2000).",
"Xu and Geng (2019) investigate the correlation among labels by the label Figure 4: t-SNE plot of the concept embeddings of the class Computer Science and the concept-based label embeddings of its child classes.",
"distribution as an LCPN approach.",
"The LCL approach consists of training one multi-class classifier for each class level (Kowsari et al., 2017; Shimura et al., 2018).",
"Zhu and Bain (2017) introduce a B-CNN model which outputs predictions corresponding to the hierarchical structure.",
"Chen et al. (2020b) propose a multi-level learning to rank model with multi-level hinge loss margins.",
"The global approach learns a global classification model about the whole class hierarchy (Cai and Hofmann, 2004; Gopal and Yang, 2013; Wing and Baldridge, 2014; Karn et al., 2017).",
"Qiu et al. (2011) exploit the latent nodes in the taxonomic hierarchy with a global approach.",
"For the need for a large amount of training data, a weakly-supervised global HTC method is proposed by Meng et al. (2019).",
"Meta-learning is adopted by Wu et al. (2019) for HTC in a global way.",
"In addition, there is some work combined with both local and global approach (Wehrmann et al., 2018).",
"A local flat tree classifier is introduced by Peng et al. (2018) which utilizes the graph-CNN.",
"In this paper, we investigate the concept which is a kind of domain-specific and fine-grained information for the hierarchical text classification.",
"We propose a novel concept-based label embedding model.",
"Compared with several competitive state-of-the-art methods, the experimental results on two widely used datasets prove the effectiveness of our proposed model.",
"The visualization of the concepts and the learnt concept-based label embeddings reveal the high interpretability of our model.",
"We sincerely thank Bingning Wang 4 for helpful discussions, and all reviewers and ACs for their insightful comments, time and efforts."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"objective",
"result",
"other"
] |
[
"Exact structured inference with neural network scoring functions is computationally challenging but several methods have been proposed for approximating inference.",
"One approach is to perform gradient descent with respect to the output structure directly (Belanger and McCallum, 2016).",
"Another approach, proposed recently, is to train a neural network (an inference network) to perform inference (Tu and Gimpel, 2018).",
"In this paper, we compare these two families of inference methods on three sequence labeling datasets.",
"We choose sequence labeling because it permits us to use exact inference as a benchmark in terms of speed, accuracy, and search error.",
"Across datasets, we demonstrate that inference networks achieve a better speed/accuracy/search error trade-off than gradient descent, while also being faster than exact inference at similar accuracy levels.",
"We find further benefit by combining inference networks and gradient descent, using the former to provide a warm start for the latter.",
"1 1 Introduction Structured prediction models commonly involve complex inference problems for which finding exact solutions is intractable (Cooper, 1990).",
"There are generally two ways to address this difficulty.",
"One is to restrict the model family to those for which inference is feasible.",
"For example, state-of-the-art methods for sequence labeling use structured energies that decompose into label-pair potentials and then use rich neural network architectures to define the potentials (Collobert et al., 2011; Lample et al., 2016; Ma and Hovy, 2016, inter alia ).",
"Exact dynamic programming algorithms like the Viterbi algorithm can be used for inference.",
"The second approach is to retain computationally-intractable scoring functions but then use approximate methods for inference.",
"For example, some researchers relax the structured output space from a discrete space to a continuous one and then use gradient descent to maximize the score function with respect to the output (Belanger and McCallum, 2016).",
"Another approach is to train a neural network (an inference network) to output a structure in the relaxed space that has high score under the structured scoring function (Tu and Gimpel, 2018).",
"This idea was proposed as an alternative to gradient descent in the context of structured prediction energy networks (Belanger and McCallum, 2016).",
"In this paper, we empirically compare exact inference, gradient descent, and inference networks for three sequence labeling tasks.",
"We train conditional random fields (CRFs) for sequence labeling with neural networks used to define the potentials.",
"We choose a scoring function that permits exact inference via Viterbi so that we can benchmark the approximate methods in terms of search error in addition to speed and accuracy.",
"We consider three families of neural network architectures to serve as inference networks: convolutional neural networks (CNNs), recurrent neural networks (RNNs), and sequence-to-sequence models with attention (seq2seq; Sutskever et al., 2014; Bahdanau et al., 2015).",
"We also use multi-task learning while training inference networks, combining the structured scoring function with a local cross entropy loss.",
"Our empirical findings can be summarized as follows.",
"Gradient descent works reasonably well for tasks with small label sets and primarily local structure, like part-of-speech tagging.",
"However, gradient descent struggles on tasks with long-distance dependencies, even with small label set sizes.",
"For tasks with large label set sizes, inference networks and Viterbi perform comparably, with Viterbi taking much longer.",
"In this regime, it is difficult for gradient descent to find a good solution, even with many iterations.",
"In comparing inference network architectures, (1) CNNs are the best choice for tasks with primarily local structure, like part-of-speech tagging; (2) RNNs can handle longer-distance dependencies while still offering high decoding speeds; and (3) seq2seq networks consistently work better than RNNs, but are also the most computationally expensive.",
"We also compare search error between gradient descent and inference networks and measure correlations with input likelihood.",
"We find that inference networks achieve lower search error on instances with higher likelihood (under a pretrained language model), while for gradient descent the correlation between search error and likelihood is closer to zero.",
"This shows the impact of the use of dataset-based learning of inference networks, i.e., they are more effective at amortizing inference for more common inputs.",
"Finally, we experiment with two refinements of inference networks.",
"The first fine-tunes the inference network parameters for a single test example to minimize the energy of its output.",
"The second uses an inference network to provide a warm start for gradient descent.",
"Both lead to reductions in search error and higher accuracies for certain tasks, with the warm start method leading to a better speed/accuracy trade-off.",
"For sequence labeling tasks, given an input sequence x = h x 1 , x 2 , ..., x | x | i , we wish to output a sequence y = h y 1 , y 2 , ..., y | x | i Y ( x ) .",
"Here Y ( x ) is the structured output space for x .",
"Each label y t is represented as an L -dimensional one-hot vector where L is the number of labels.",
"Conditional random fields (CRFs; Lafferty et al., 2001) form one popular class of methods for structured prediction, especially for sequence labeling.",
"We define our structured energy function to be similar to those often used in CRFs for sequence labeling: E ( x , y ) = X t LX i =1 y t,i (cid:16) u i f ( x , t ) (cid:17) + X t y t 1 Wy t ! where y t,i is the i th entry of the vector y t .",
"one-hot vector, but this energy is generalized to be able to use both discrete labels and continuous relaxations of the label space, which we will introduce below.",
"Also, we use f ( x , t ) R d to denote the input feature vector for position t , u i R d is a label-specific parameter vector used for modeling the local scoring function, and W RL L is a parameter matrix learned to model label transitions.",
"For the feature vectors we use a bidirectional long short-term memory (BLSTM; Hochreiter and Schmidhuber, 1997), so this forms a BLSTM-CRF (Lample et al., 2016; Ma and Hovy, 2016).",
"For training, we use the standard conditional log-likelihood objective for CRFs, using the forward and backward dynamic programming algorithms to compute gradients.",
"For a given input x at test time, prediction is done by choosing the output with the lowest energy: argmin y Y ( x ) E ( x , y ) The Viterbi algorithm can be used to solve this problem exactly for the energy defined above.",
"For our experimental comparison, we consider two CRF variants.",
"The first is the basic model described above, which we refer to as BLSTM-CRF.",
"Below we describe three additional techniques that we add to the basic model.",
"We will refer to the CRF with these three techniques as BLSTM-CRF+.",
"Using these two models permits us to assess the impact of model complexity and performance level on the inference method comparison.",
"Word Embedding Fine-Tuning.",
"We used pretrained, fixed word embeddings when using the BLSTM-CRF model, but for the more complex BLSTM-CRF+ model, we fine-tune the pretrained word embeddings during training.",
"Character-Based Embeddings.",
"Character-based word embeddings provide consistent improvements in sequence labeling (Lample et al., 2016; Ma and Hovy, 2016).",
"In addition to pretrained word embeddings, we produce a character-based embedding for each word using a character convolutional network like that of Ma and Hovy (2016).",
"The filter size is 3 characters and the character embedding dimensionality is 30.",
"We use max pooling over the character sequence in the word and the resulting embedding is concatenated with the word embedding before being passed to the BLSTM.",
"Dropout.",
"We also add dropout during training (Hinton et al., 2012).",
"Dropout is applied before the character embeddings are fed into the CNNs, at the final word embedding layer before the input to the BLSTM, and after the BLSTM.",
"The dropout rate is 0.5 for all experiments.",
"To use gradient descent (GD) for structured inference, researchers typically relax the output space from a discrete, combinatorial space to a continuous one and then use gradient descent to solve the following optimization problem:",
"where YR is the relaxed continuous output space.",
"For sequence labeling, YR ( x ) consists of length| x | sequences of probability distributions over output labels.",
"To obtain a discrete labeling for evaluation, the most probable label at each position is returned.",
"There are multiple settings in which gradient descent has been used for structured inference, e.g., image generation (Johnson et al., 2016), structured prediction energy networks (Belanger and McCallum, 2016), and machine translation (Hoang et al., 2017).",
"Gradient descent has the advantage of simplicity.",
"Standard autodifferentiation toolkits can be used to compute gradients of the energy with respect to the output once the output space has been relaxed.",
"However, one challenge is maintaining constraints on the variables being optimized.",
"Therefore, we actually perform gradient descent in an even more relaxed output space YR ( x ) which consists of length| x | sequences of vectors, where each vector y t RL .",
"When computing the energy, we use a softmax transformation on each y t , solving the following optimization problem with gradient descent: argmin y Y R ( x ) E ( x , softmax( y )) (1) where the softmax operation above is applied independently to each vector y t in the output structure y .",
"Tu and Gimpel (2018) define an inference network (infnet) A : X YR and train it with the goal that",
"where YR is the relaxed continuous output space as defined in Section 3.",
"For sequence labeling, for example, an inference network A takes a sequence x as input and outputs a distribution over labels for each position in x .",
"Below we will consider three families of neural network architectures for A .",
"For training the inference network parameters , Tu and Gimpel (2018) explored stabilization and regularization terms and found that a local cross entropy loss consistently worked well for sequence labeling.",
"We use this local cross entropy loss in this paper, so we perform learning by solving the following: argmin X h x , y i E ( x , A ( x ))+ token ( y , A ( x )) where the sum is over h x , y i pairs in the training set.",
"The token-level loss is defined: token ( y , A ( x )) = | y | X t =1 CE( y t , A ( x ) t ) (2) where y t is the L -dimensional one-hot label vector at position t in y , A ( x ) t is the inference net-work's output distribution at position t , and CE stands for cross entropy.",
"We will give more details on how token is defined for different inference network families below.",
"It is also the loss used in our non-structured baseline models.",
"We now describe options for inference network architectures for sequence labeling.",
"For each, we optionally include the modeling improvements described in Section 2.1.",
"When doing so, we append + to the setting's name to indicate this (e.g., infnet+).",
"CNNs are frequently used in NLP to extract features based on symbol subsequences, whether words or characters (Collobert et al., 2011; Kalchbrenner et al., 2014; Kim, 2014;",
"Kim et al., 2016; Zhang et al., 2015).",
"CNNs use filters that are applied to symbol sequences and are typically followed by some sort of pooling operation.",
"We apply filters over a fixed-size window centered on the word being labeled and do not use pooling.",
"The feature maps f n ( x , t ) for (2 n + 1) gram filters are defined: f n ( x , t ) = g ( W n [ v x t n ; ... ; v x t + n ] + b n ) where g is a nonlinearity, v x t is the embedding of word x t , and W n and b n are filter parameters.",
"We consider two CNN configurations: one uses n = 0 and n = 1 and the other uses n = 0 and n = 2 .",
"For each, we concatenate the two feature maps and use them as input to the softmax layer over outputs.",
"In each case, we use H filters for each feature map.",
"For sequence labeling, it is common to use a BLSTM that runs over the input sequence and produces a softmax distribution over labels at each position in the sequence.",
"We use this BLSTM tagger as our RNN inference network architecture.",
"The parameter H refers to the size of the hidden vectors in the forward and backward LSTMs, so the full dimensionality passed to the softmax layer is 2 H .",
"Sequence-to-sequence (seq2seq; Sutskever et al. 2014) models have been successfully used for many sequential modeling tasks.",
"It is common to augment models with an attention mechanism that focuses on particular positions of the input sequence while generating the output sequence (Bahdanau et al., 2015).",
"Since sequence labeling tasks have equal input and output sequence lengths and a strong connection between corresponding entries in the sequences, Goyal et al. (2018) used fixed attention that deterministically attends to the i th input when decoding the i th output, and hence does not learn any attention parameters.",
"It is shown as follows: P ( y t | y <t , x ) = softmax( W s [ h t , s t ]) where s t is the hidden vector at position t from a BLSTM run over x , h t is the decoder hidden vector at position t , and W s is a parameter matrix.",
"The concatenation of the two hidden vectors is used to produce the distribution over labels.",
"When using this inference network, we redefine the local loss to the standard training criterion for seq2seq models, namely the sum of the log losses for each output conditioned on the previous outputs in the sequence.",
"We always use the previous predicted label as input (as used in scheduled sampling, Bengio et al., 2015) during training because it works better for our tasks.",
"In our experiments, the forward and backward encoder LSTMs use hidden dimension H , as does the LSTM decoder.",
"Thus the model becomes similar to the BLSTM tagger except with conditioning on previous labeling decisions in a left-to-right manner.",
"We also experimented with the use of beam search for both the seq2seq baseline and inference networks and did not find much difference in the results.",
"Also, as alternatives to the deterministic position-based attention described above, we experimented with learned local attention ( Luong et al., 2015) and global attention, but they did not work better on our tasks.",
"To further improve the performance of an inference network for a particular test instance x , we propose two novel approaches that leverage the strengths of inference networks to provide effective starting points and then use instance-level fine-tuning in two different ways.",
"For each test example x , we initialize an instance-specific inference network A ( x ) using the trained inference network parameters, then run gradient descent on the following loss:",
"This procedure fine-tunes the inference network parameters for a single test example to minimize the energy of its output.",
"For each test example, the process is repeated, with a new instance-specific inference network being initialized from the trained inference network parameters.",
"Given a test example x , we initialize y YR ( x ) using the inference network and then use gradient descent by solving Eq.",
"1 described in Section 3 to update y .",
"However, the inference network output is in YR ( x ) while gradient descent works with the more relaxed space YR ( x ) .",
"We perform experiments on three tasks: Twitter part-of-speech tagging (POS), named entity recognition (NER), and CCG supersense tagging (CCG).",
"POS.",
"We use the annotated data from Gimpel et al. (2011) and Owoputi et al. (2013) which contains 25 POS tags.",
"For training, we combine the 1000-tweet OCT 27T RAIN set and the 327-tweet OCT 27D EV set.",
"For validation, we use the 500-tweet OCT 27T EST set and for testing we use the 547-tweet DAILY 547 test set.",
"We use the 100-dimensional skip-gram embeddings from Tu et al. (2017) which were trained on a dataset of 56 million English tweets using word2vec (Mikolov et al., 2013).",
"The evaluation metric is tagging accuracy.",
"NER.",
"We use the CoNLL 2003 English data (Tjong Kim Sang and De Meulder, 2003).",
"There are four entity types: PER, LOC, ORG, and MISC.",
"There is a strong local dependency between neighboring labels because this is a labeled segmentation task.",
"We use the BIOES tagging scheme, so there are 17 labels.",
"We use 100-dimensional pretrained GloVe (Pennington et al., 2014) embeddings.",
"The task is evaluated with micro-averaged F1 score using the conlleval script.",
"CCG.",
"We use the standard splits from CCG-bank ( Hockenmaier and Steedman, 2002).",
"We only keep sentences with length less than 50 in the original training data when training the CRF.",
"The training data contains 1,284 unique labels, but because the label distribution has a long tail, we use only the 400 most frequent labels, replacing the others by a special tag .",
"The percentages of in train/development/test are 0.25/0.23/0.23 % .",
"When the gold standard tag is , the prediction is always evaluated as incorrect.",
"We use the same GloVe embeddings as in NER.",
"Because of the compositional nature of supertags, this task has more non-local dependencies.",
"The task is evaluated with per-token accuracy.",
"For the optimization problems mentioned below, we use stochastic gradient descent with momen-tum as the optimizer.",
"Full details of hyperparame-ter tuning are in the appendix.",
"Local Baselines.",
"We consider local (non-structured) baselines that use the same architectures as the inference networks but train using only the local loss token .",
"Structured Baselines.",
"We train the BLSTM-CRF and BLSTM-CRF+ models with the standard conditional log-likelihood objective.",
"We tune hyperparameters on the development sets.",
"Gradient Descent for Inference.",
"We use gradient descent for structured inference by solving Eq.",
"1.",
"We randomly initialize y YR ( x ) and, for N iterations, we compute the gradient of the energy with respect to y , then update y using gradient descent with momentum, which we found to generally work better than constant step size.",
"We tune N and the learning rate via instance-specific oracle tuning, i.e., we choose them separately for each input to maximize performance (accuracy or F1 score) on that input.",
"Even with this oracle tuning, we find that gradient descent struggles to compete with the other methods.",
"Inference Networks.",
"To train the inference networks, we first train the BLSTM-CRF or BLSTM-CRF+ model with the standard conditional log-likelihood objective.",
"The hidden sizes H are tuned in that step.",
"We then fix the energy function and train the inference network A using the combined loss from Section 4.",
"For instance-tailored inference networks and when using inference networks as a warm start for gradient descent, we tune the number of epochs N and the learning rate on the development set, and report the performance on the test set, using the same values of N and the learning rate for all test examples.",
"This first section of results uses the simpler BLSTM-CRF modeling configuration.",
"In Section 7 below we present results with the stronger BLSTM-CRF+ configuration and also apply the same modeling improvements to the baselines and inference networks.",
"Table 1 shows test results for all tasks and architectures.",
"The inference networks use the same architectures as the corresponding local baselines, but their parameters are trained with both the local loss and the BLSTM-CRF energy, leading to consistent improvements.",
"CNN inference networks work well for POS, but struggle on NER and CCG compared to other architectures.",
"BLSTMs work well, but are outperformed slightly by seq2seq models across all three tasks.",
"Using the Viterbi algorithm for exact inference yields the best performance for NER but is not best for the other two tasks.",
"It may be surprising that an inference network trained to mimic Viterbi would outperform Viterbi in terms of accuracy, which we find for the CNN for POS tagging and the seq2seq inference network for CCG.",
"We suspect this occurs for two reasons.",
"One is due to the addition of the local loss in the inference network objective; the inference networks may be benefiting from this multi-task training.",
"Edunov et al. (2018) similarly found benefit from a combination of token-level and sequence-level losses.",
"The other potential reason is beneficial inductive bias with the inference network architecture.",
"For POS tagging, the CNN architecture is clearly well-suited to this task given the strong performance of the local CNN baseline.",
"Nonetheless, the CNN inference network is able to improve upon both the CNN baseline and Viterbi.",
"Hidden Sizes.",
"For the test results in Table 1, we did limited tuning of H for the inference networks based on the development sets.",
"Figure 1 shows the impact of H on performance.",
"Across H values, the inference networks outperform the baselines.",
"For NER and CCG, seq2seq outperforms the BLSTM which in turn outperforms the CNN.",
"Tasks and Window Sizes.",
"Table 2 shows that CNNs with smaller windows are better for POS, while larger windows are better for NER and CCG.",
"This suggests that POS has more local dependencies among labels than NER and CCG.",
"Asymptotically, Viterbi takes O ( nL 2 ) time, where n is the sequence length.",
"The BLSTM and our deterministic-attention seq2seq models have time complexity O ( nL ) .",
"CNNs also have complexity O ( nL ) but are more easily parallelizable.",
"Table 3 shows test-time inference speeds for inference networks, gradient descent, and Viterbi for the BLSTM-CRF model.",
"We use GPUs and a minibatch size of 10 for all methods.",
"CNNs are 1-2 orders of magnitude faster than the others.",
"BLSTMs work almost as well as seq2seq models and are 2-4 times faster in our experiments.",
"Viterbi is actually faster than seq2seq when L is small, but for CCG, which has L = 400 , it is 4-5 times slower.",
"Gradient descent is slower than the others because it generally needs many iterations (20-50) for competitive performance.",
"We can view inference networks as approximate search algorithms and assess characteristics that affect search error.",
"To do so, we train two LSTM language models (one on word sequences and one on gold label sequences) on the Twitter POS data.",
"We also compute the difference in the BLSTM-CRF energies between the inference network output y inf and the Viterbi output y vit as the search error: E ( x , y inf ) E ( x , y vit ) .",
"We compute the same search error for gradient descent.",
"For the BLSTM inference network, Spearman's between the word sequence perplexity and search error is 0.282; for the label sequence perplexity, it is 0.195.",
"For gradient descent inference, Spearman's between the word sequence perplexity and search error is 0.122; for the label sequence perplexity, it is 0.064.",
"These positive correlations mean that for frequent sequences, inference networks and gradient descent exhibit less search error.",
"We also note that the correlations are higher for the inference network than for gradient descent, showing the impact of amortization during learning of the inference network parameters.",
"That is, since we are learning to do inference from a dataset, we would expect search error to be smaller for more frequent sequences, and we do indeed see this correlation.",
"We now compare inference methods when using the improved modeling techniques described in Section 2.1 (i.e., the setting we called BLSTM-CRF+).",
"We use these improved techniques for all models, including the CRF, the local baselines, gradient descent, and the inference networks.",
"When training inference networks, both the inference network architectures and the structured energies use the techniques from Section 2.1.",
"So, when referring to inference networks in this section, we use the name infnet+.",
"The results are shown in Table 4.",
"With a more powerful local architecture, structured prediction is less helpful overall, but inference networks still POS NER CCG local baseline 91.3 90.5 94.1 infnet+ 91.3 90.8 94.2 gradient descent 90.8 89.8 90.4 Viterbi 90.9 91.6 94.3 Table 4: Test results with BLSTM-CRF+.",
"POS.",
"As in the BLSTM-CRF setting, the local CNN baseline and the CNN inference network outperform Viterbi.",
"This is likely because the CRFs use BLSTMs as feature networks, but our results show that CNN baselines are consistently better than BLSTM baselines on this task.",
"As in the BLSTM-CRF setting, gradient descent works quite well on this task, comparable to Viterbi, though it is still much slower.",
"NER.",
"We see slightly higher BLSTM-CRF+ results than several previous state-of-the-art results (cf. 90.94; Lample et al., 2016 and 91.37; Ma and Hovy, 2016).",
"The stronger BLSTM-CRF+ configuration also helps the inference networks, improving performance from 90.5 to 90.8 for the seq2seq architecture over the local baseline.",
"Though gradient descent reached high accuracies for POS tagging, it does not perform well on NER, possibly due to the greater amount of non-local information in the task.",
"While we see strong performance with infnet+, it still lags behind Viterbi in F1.",
"We consider additional experiments in which we increase the number of layers in the inference networks.",
"We use a 2-layer BLSTM as the inference network and also use weight annealing of the local loss hyper-parameter , setting it to = e 0 .",
"01 t where t is the epoch number.",
"Without this annealing, the 2-layer inference network was difficult to train.",
"The weight annealing was helpful for encouraging the inference network to focus more on the non-local information in the energy function rather than the token-level loss.",
"As shown in Table 5, these changes yield an improvement of 0.4 in F1.",
"CCG.",
"Our BLSTM-CRF+ reaches an accuracy of 94.3%, which is comparable to several recent results (93.53, Xu et al., 2016; 94.3, Lewis et al., 2016; and 94.50, Vaswani et al., 2016).",
"The local baseline, the BLSTM inference network, and Viterbi are all extremely close in accuracy.",
"Gradient descent struggles here, likely due to the large number of candidate output labels.",
"Table 6 compares inference methods in terms of both accuracy and energies reached during inference.",
"For each number N of gradient descent iterations in the table, we tune the learning rate per-sentence and report the average accuracy/F1 with that fixed number of iterations.",
"We also report the average energy reached.",
"For inference networks, we report energies both for the output directly and when we discretize the output (i.e., choose the most probable label at each position).",
"Gradient Descent Across Tasks.",
"The number of gradient descent iterations required for competitive performance varies by task.",
"For POS, 20 iterations are sufficient to reach accuracy and energy close to Viterbi.",
"For NER, roughly 40 iterations are needed for gradient descent to reach its highest F1 score, and for its energy to become very close to that of the Viterbi outputs.",
"However, its F1 score is much lower than Viterbi.",
"For CCG, gradient descent requires far more iterations, presumably due to the larger number of labels in the task.",
"Even with 1000 iterations, the accuracy is 4% lower than Viterbi and the inference networks.",
"Unlike POS and NER, the inference network reaches much lower energies than gradient descent on CCG, suggesting that the inference network may not suffer from the same challenges of searching high-dimensional label spaces as those faced by gradient descent.",
"Inference Networks Across Tasks.",
"For POS, the inference network does not have lower energy than gradient descent with 20 iterations, but it does have higher accuracy.",
"This may be due in part to our use of multi-task learning for inference networks.",
"The discretization of the inference network outputs increases the energy on average for this task, whereas it decreases the energy for the other two tasks.",
"For NER, the inference network reaches a similar energy as gradient descent, especially when discretizing the output, but is considerably better in F1.",
"The CCG tasks shows the largest difference between gradient descent and the inference network, as the latter is much better in both accuracy and energy.",
"Instance Tailoring and Warm Starting.",
"Across tasks, instance tailoring and warm starting lead to lower energies than infnet+.",
"The improvements in energy are sometimes joined by improvements in accuracy, notably for NER where the gains range from 0.4 to 0.7 in F1.",
"Warm starting gradient descent yields the lowest energies (other than Viterbi), showing promise for the use of gradient descent as a local search method starting from inference network output.",
"Wall Clock Time Comparison.",
"Figure 2 shows the speed/accuracy trade-off for the inference methods, using wall clock time for test set inference as the speed metric.",
"On this task, Viterbi is time-consuming because of the larger label set size.",
"The inference network has comparable accuracy to Viterbi but is much faster.",
"Gradient descent needs much more time to get close to the others but plateaus before actually reaching similar accuracy.",
"Instance-tailoring and warm starting reside between infnet+ and Viterbi, with warm starting being significantly faster because it does not require updating inference network parameters.",
"The most closely related prior work is that of Tu and Gimpel (2018), who experimented with RNN inference networks for sequence labeling.",
"We compared three architectural families, showed the relationship between optimal architectures and downstream tasks, compared inference networks to gradient descent, and proposed novel variations.",
"We focused in this paper on sequence labeling, in which CRFs with neural network potentials have emerged as a state-of-the-art approach (Lample et al., 2016; Ma and Hovy, 2016; Strubell et al., 2017; Yang et al., 2018).",
"Our results suggest that inference networks can provide a feasible way to speed up test-time inference over Viterbi without much loss in performance.",
"The benefits of inference networks may be coming in part from multi-task training; Edunov et al. (2018) similarly found benefit from combining token-level and sequence-level losses.",
"We focused on structured prediction in this paper, but inference networks are useful in other settings as well.",
"For example, it is common to use a particular type of inference network to approximate posterior inference in neural approaches to latent-variable probabilistic modeling, such as variational autoencoders (Kingma and Welling, 2013) and, more closely related to this paper, variational sequential labelers (Chen et al., 2018).",
"In such settings, Kim et al. (2018) have found benefit with instance-specific updating of inference network parameters, which is related to our instance-level fine-tuning.",
"There are also connections between structured inference networks and amortized structured inference (Srikumar et al., 2012) as well as methods for neural knowledge distillation and model compression (Hinton et al., 2015; Ba and Caruana, 2014; Kim and Rush, 2016).",
"Gradient descent is used for inference in several settings, e.g., structured prediction energy networks (Belanger and McCallum, 2016), image generation applications (Mordvintsev et al., 2015; Gatys et al., 2015), finding adversarial examples (Goodfellow et al., 2015), learning paragraph embeddings (Le and Mikolov, 2014), and machine translation (Hoang et al., 2017).",
"Gradient descent has started to be replaced by inference networks in some of these settings, such as image transformation (Johnson et al., 2016; Li and Wand, 2016).",
"Our results provide more evidence that gradient descent can be replaced by inference networks or improved through combination with them.",
"We compared several methods for approximate inference in neural structured prediction, finding that inference networks achieve a better speed/accuracy/search error trade-off than gradient descent.",
"We also proposed instance-level inference network fine-tuning and using inference networks to initialize gradient descent, finding further reductions in search error and improvements in performance metrics for certain tasks.",
"We would like to thank Ke Li for suggesting experiments that combine inference networks and gradient descent, the anonymous reviewers for their feedback, and NVIDIA for donating GPUs used in this research."
] | [
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"method",
"abstain",
"other",
"method",
"method",
"abstain",
"other",
"other",
"other",
"abstain",
"result",
"objective",
"other"
] |
[
"Recent studies have shown that language models pretrained and/or fine-tuned on randomly permuted sentences exhibit competitive performance on GLUE, putting into question the importance of word order information.",
"Somewhat counter-intuitively, some of these studies also report that position embeddings appear to be crucial for models' good performance with shuffled text.",
"We probe these language models for word order information and investigate what position embeddings learned from shuffled text encode, showing that these models retain information pertaining to the original, naturalistic word order.",
"We show this is in part due to a subtlety in how shuffling is implemented in previous work before rather than after subword segmentation.",
"Surprisingly, we find even Language models trained on text shuffled after subword segmentation retain some semblance of information about word order because of the statistical dependencies between sentence length and unigram probabilities.",
"Finally, we show that beyond GLUE, a variety of language understanding tasks do require word order information, often to an extent that cannot be learned through fine-tuning.",
"Transformers (Vaswani et al., 2017), when used in the context of masked language modelling (Devlin et al., 2018), consume their inputs concurrently.",
"There is no notion of inherent order, unlike in autoregressive setups, where the input is consumed token by token.",
"To compensate for this absence of linear order, the transformer architecture originally proposed in Vaswani et al. (2017) includes a fixed, sinusoidal position embedding added to each token embedding; each token carries a different position embedding, corresponding to its position in the sentence.",
"The transformer-based BERT (De-vlin et al., 2018) replaces these fixed sinusoidal Equal contribution.",
"embeddings with unique, learned embeddings per position; RoBERTa (Liu et al., 2019), the model investigated in this work, does the same.",
"Position embeddings are the only source of order information in these models; in their absence, contextual representations generated for tokens are independent of the actual position of the tokens in a sentence, and the models thus resemble heavily overparameterised bags-of-words.",
"Sinha et al. (2021) pre-trained RoBERTa models on shuffled corpora to demonstrate that the performance gap between these shuffled' language models and models trained on unshuffled corpora is minor (when fine-tuned and evaluated downstream on the GLUE (Wang et al., 2018) benchmark).",
"They further show that this gap is considerably wider when a model is pre-trained without position embeddings.",
"In this paper, we attempt to shed some light on why these models behave the way they do, and in doing so, seek to answer a set of pertinent questions: Do shuffled language models still have traces of word order information?",
"Why is there a gap in performance between models without position embeddings and models trained on shuffled tokens, with the latter 6907 Figure 2: Correlations between position embeddings when shuffling training data before segmentation (left), i.e, at the word level, and after segmentation (middle), i.e., at the subword level, as well as when replacing all subwords with random subwords based on their corpus-level frequencies (right).",
"Are there NLU benchmarks, other than GLUE, on which shuffled language models perform poorly?",
"Contributions We first demonstrate, in Section 3, that shuffled language models do contain word order information, and are quite responsive to simple tests for word order information, particularly when compared to models trained without position representations.",
"In Section 4, we demonstrate that pre-training is sufficient to learn this: position embeddings provide the appropriate inductive bias, and performing BPE segmentation after shuffling results in sensible n-grams appearing in the pre-training corpus; this gives models the capacity to learn word order within smaller local windows.",
"Other minor cues like correlations between sentence lengths and token distributions also play a role.",
"We further corroborate our analysis by examining attention patterns across models in Sec. 5.",
"In Section 6, we show that, while shuffled models might be almost as good as their unshuffled counterparts on GLUE tasks, there exist NLU benchmarks that do require word order information to an extent that cannot be learned through fine-tuning alone.",
"Finally, in Section 7, we describe miscellaneous experiments addressing the utility of positional embeddings when added just prior to fine-tuning.",
"Sinha et al. (2021) train several full-scale RoBERTa language models on the Toronto Book Corpus (Zhu et al., 2015) and English Wikipedia.",
"1 Four of their models are trained on shuffled text, i.e., sentences in which n -grams are reordered at random.",
"2 We dub the original, unperturbed model ORIG , and the scrambled models SHUF .",
"N 1, SHUF .",
"N 2, SHUF .",
"N 3 and SHUF .",
"N 4 depending on the size of the shuffled n -grams: SHUF .",
"N 1 reorders the unigrams in a sentence, SHUF .",
"N 2 reorders its bigrams, etc.",
"For comparison, Sinha et al. (2021) also train a RoBERTa language model entirely without position embeddings (NOPOS ), as well as a RoBERTa language model trained on a corpus drawn solely from unigram distributions of the original Book Corpus, i.e., a reshuffling of the entire corpus (SHUF .C ORPUS ).",
"We experiment with their models, as well as with smaller models that we can train with a smaller carbon footprint.",
"To this end, we downscale the RoBERTa architecture used in Sinha et al. (2021).",
"Concretely, we train single-headed RoBERTa models, dividing the embedding and feed-forward dimensionality by 12, for 24 hours on a single GPU, on 100k sentences sampled from the Toronto Book Corpus.",
"To this end, we train a custom vocabulary of size 5,000, which we use for indexing in all our subsequent experiments.",
"While these smaller models are in no way meant to be fine-tuned and used downstream, they are useful proofs-of-concept that we later analyse.",
"We begin by attempting to ascertain the extent to which shuffled language models are actually capable of encoding information pertaining to the naturalistic word order of sentences.",
"We perform two simple tests on the full-scale models, in line with Wang and Chen (2020): the first of these is a classification task where a logistic regressor is trained to predict whether a randomly sampled token precedes another in an unshuffled sentence, and the second involves predicting the position of a word in an unshuffled sentence.",
"The fact that we do not fine-tune any of the model parameters is noteworthy: the linear models can only learn word order information if it reflects in the representations the models generate somehow.",
"Pairwise Classification For this experiment, we train a logistic regression classification model on word representations extracted from the final layer of the Transformer encoder, mean pooling over sub-tokens when required.",
"For each word pair x and y , the classifier is given a concatenation of our model m 's induced representations m ( x ) m ( y ) and trained to predict a label indicating whether x precedes y or not.",
"Holding out two randomly sampled positions, we use a training sets sized 2 k, 5 k, and 10 k, from the Universal Dependencies English-GUM corpus (Zeldes, 2017) (excluding sentences with more than 30 tokens to increase learnability) and a test set of size 2 , 000 .",
"We report the mean accuracy from three runs.",
"Regression Using the same data, we also train a ridge-regularised linear regression model to predict the position of a word p ( x ) in an unshuffled sentence, given that word's model-induced representa-Model Classification (acc.) Regression ( R 2 ) 2k 5k 10k ORIG 81.50 81.74 80.40 0.68 SHUF .",
"tion m ( x ) .",
"R 2 score is reported per model.",
"To prevent the regressors from memorising word to position mappings, we perform 6-fold cross-validation, where the heldout part of the data contains no vocabulary overlap with the corresponding train set.",
"Results For both tasks (see Table 1), our results indicate that position encodings are particularly important for encoding word order: Classifiers and regressors trained on representations from ORIG and SHUF .",
"N 1 achieve high accuracies and R 2 scores, while those for NOPOS are close to random.",
"Both ORIG and SHUF .",
"N 1 appear to be better than random given only 2 k examples.",
"These results imply that, given positional encodings and a modest training set of 2 k or more examples, a simple linear model is capable of extracting word order information, enabling almost perfect extrapolation to unseen positions.",
"Whether the position encodings come from a model trained on natural or shuffled text does not appear to matter, emphasizing that shuffled language models do indeed contain substantial information about the original word order.",
"In Section 3, we observed that Sinha et al. (2021)'s shuffled language models surprisingly exhibit information about naturalistic word order.",
"That these models contain positional information can also be seen by visualizing position embedding similarity.",
"Figure 1 displays Pearson correlations 3 for position embeddings with themselves, across positions.",
"Here, we see that the shuffled models satisfy the idealised criteria for position embeddings described by Wang et al. (2021): namely, they appear to be",
"a) monotonous within smaller context windows, and",
"b) invariant to translation.",
"If position embedding correlations are consistent across offsets over the entire space of embeddings, the model can be said to have learned' distances between tokens.",
"Since transformers process all positions in parallel, 3 We see similar patterns with dot products for all our plots; we use Pearson correlations to constrain our range to [ 1 , 1] .",
"and since language models without position embeddings do not exhibit such information, position embeddings have to be the source of this information.",
"In what follows, we discuss this apparent paradox.",
"Subword vs. word shuffling An important detail when running experiments on shuffled text, is when the shuffling operation takes place.",
"When tokens are shuffled before BPE segmentation, this leads to word-level shuffling, in which sequences of subwords that form words remain contiguous.",
"Such sequences become a consistent, meaningful signal for language modelling, allowing models to efficiently utilise the inductive bias provided by position embeddings.",
"Thus, even though our pretrained models have, in theory, not seen consecutive tokens in their pre-training data, they have learned to utilise positional embeddings to pay attention to adjacent tokens.",
"The influence of this is somewhat visible in Figure 2: while models trained on text shuffled before and after segmentation both exhibit shifts in the polarity of their position correlations, only the former show bands of varying magnitude , similar to the full-scale models.",
"Ravishankar and Sgaard (2021) discuss the implications of these patterns in a multilingual context; we hypothesise that in our context, the periodicity in magnitude is a visible artefact of the model's ability to leverage position embeddings to enable offset attention.",
"In Section 5, we analyse the effect of shuffling the pretraining data on the models' attention mechanisms.",
"Accidental overlap In addition to the n -gram information which results from shuffling before segmentation, we also note that short sentences tend to include original bigrams with high probability, leading to stronger associations for words that are adjacent in the original texts.",
"This effect is obviously much stronger when shuffling before segmentation than after segmentation.",
"Figure 3 shows how frequent overlapping bigrams (of any sort) are, comparing word and subword shuffling over 50k sentences.",
"Sentence length Finally, we observe some preserved information about the original word order even when shuffling is performed after segmentation.",
"We hypothesize that this is a side-effect of the non-random relationship between sentence length and unigram probabilities.",
"That unigram probabilities correlate with sentence length follows from the fact that different genres exhibit different sentence Figure 3: (Cumulative) plot showing subword bigram overlap after shuffling either words or subwords, as a percentage of the total number of seen bigrams.",
"length distributions (Sigurd et al., 2004; Jin and Liu, 2017).",
"Also, some words occur very frequently in formulaic contexts, e.g., thank in thank you .",
"This potentially means that there is an approximately learnable relationship between the distribution of words and sentence boundary symbols.",
"To test for this, we train two smaller language models on unigram-sampled corpora: for the first, we use the first 100k BookCorpus sentences as our corpus, shuffling tokens at a corpus level (yet keeping the original sentence lengths).",
"The stark difference in position embedding correlations between that and shuffling is seen in Figure 2.",
"For the second, we sample from two different unigram distributions: one for short sentences and one for longer sentences (details in Appendix B).",
"While the first model induces no correlations at all, the second does, as shown in Figure 4, implying that sentence length and unigram occurrences is enough to learn some order information.",
"Transformer-based language models commonly have attention heads that attend to neighboring positions (Voita et al., 2019; Ravishankar et al., 2021).",
"Such attention heads are positional and can only be learned in the presence of order information.",
"We attempt to visualise the attention mechanism for pre-trained models by calculating, for each head and layer, the offset between a token and the token 6910 Figure 4: Similarity matrix between models with sentences sampled based on unigram corpus statistics; disjoint vocab implies a correlation between token choice and sentence length.",
"that it pays maximum attention to 4 .",
"We then plot how frequent each offset is, as a percentage, over 100 Book Corpus sentences, in Figure 5, where we present results for two full-scale models, and two smaller models (see 2).",
"When compared to NOPOS , SHUF .",
"N 1 has a less uniform pattern to its attention mechanism: it is likely, even at layer 0, to prefer to pay attention to adjacent tokens, somewhat mimicking a convolutional window (Cordon-nier et al., 2020).",
"We see very similar differences in distribution between our smaller models: Shuffling after segmentation, i.e., at the subword level, influences early attention patterns.",
"SuperGLUE and WinoGrande Sinha et al. (2021)'s investigation is conducted on GLUE and on the Paraphrase Adversaries from Word shuffling (PAWS) dataset (Zhang et al., 2019).",
"For these datasets, they find that models pretrained on shuffled text perform only marginally worse than those pretrained on normal text.",
"This result, they argue can be explained in two ways: either",
"a) these tasks do not need word order information to be solved, or",
"b) the required word order information can be acquired during finetuning.",
"While GLUE has been a useful benchmark, several of the tasks which constitute it have been shown to be solvable using various spurious artefacts and heuristics (Gu-rurangan et al., 2018; Poliak et al., 2018).",
"If, for instance, through finetuning, models are learning to rely on such heuristics as lexical overlap for MNLI (McCoy et al., 2019), then it is unsurprising that their performance is not greatly impacted by the 4 This method of visualisation is somewhat limited, in that it examines only the maximum attention paid by each token.",
"Evaluating on the more rigorous set of SuperGLUE tasks 5 (Wang et al., 2019) and on the adversarially-filtered Winograd Schema examples (Levesque et al., 2012) of the WinoGrande dataset (Sakaguchi et al., 2020) produces results which paint a more nuanced picture compared to those of Sinha et al. (2021).",
"The results, presented in Table 2, show accuracy or F1 scores for all models.",
"For two of the tasks (MultiRC (Khashabi et al., 2018), COPA (Roemmele et al., 2011)), we observe a pattern in line with that seen in Sinha et al. (2021)'s GLUE and PAWS results: the drop in performance from ORIG to SHUF .",
"N 1 is minimal (mean: 1.75 points; mean across GLUE tasks: 3.3 points) 6 , while that to NOPOS is more substantial (mean: 10.5 points; mean across GLUE tasks: 18.6 points).",
"This pattern alters for the BoolQ Yes/No question answering dataset (Clark et al., 2019), the CommitmentBank (De Marneffe et al., 2019), the ReCoRD reading comprehension dataset (Zhang et al., 2018), both the Winograd Schema tasks, 5 Results are reported for an average of 3 runs per task.",
"The RTE task is excluded from our results as it is also part of GLUE; RTE results can be found in Sinha et al. (2021).",
"and to some extent the Words in Context dataset (Pilehvar and Camacho-Collados, 2018).",
"For these tasks we observe a larger gap between ORIG and SHUF .",
"N 1 (mean: 8.1 points), and an even larger one between ORIG and NOPOS (mean: 19.78 points).",
"We note that this latter set of tasks requires inferences which are more context-sensitive, in comparison to the two other tasks or to the GLUE tasks.",
"Consider the Winograd schema tasks, for example.",
"Each instance takes the form of a binary test with a statement comprising of two possible referents (blue) and a pronoun (red) such as: Sid explained his theory to Mark but he couldn't convince him.",
"The correct referent of the pronoun must be inferred based on a special discriminatory segment (underlined).",
"In the above example, this depends on",
"a) the identification of Sid as the subject of explained and",
"b) inferring that the pronoun serving as the subject of convinced should refer to the same entity.",
"Since the Winograd schema examples are designed so that the referents are equally associated with their context 7 , word order is crucial 8 for establishing the roles of Sid and Mark as subject and object of explained and he and him as those of convinced.",
"If these roles cannot be established, making the correct inference becomes impossible.",
"A similar reasoning can be applied to the Words in Context dataset and the CommitmentBank.",
"The former task tests the ability of a model to distinguish the senses of a polysemous word based on context.",
"While this might often be feasible via a notion of contextual association that higher-order distributional statistics are sufficient for, some instances will require awareness of the word's role as an argument in the sentence.",
"The latter task investigates the projectivity of finite clausal complements under entailment cancelling operators.",
"This is dependent on both the scope of the entailment operator and the identity of the subject of the matrix predicate (De Marneffe et al., 2019), both of which are sensitive to word order information.",
"A final consideration to take into account is dataset filtering.",
"Two of the tasks where we observe 7 e.g. Sid and Mark are both equally likely subjects/objects here.",
"Not all Winograd schema examples are perfect in this regard, however, which could explain why scrambled models still perform above random.",
"See Trichelair et al. (2018) for a discussion of the latter point.",
"8 Particularly in a language with limited morphological role marking such as English.",
"the largest difference between ORIG , SHUF .",
"N 1, and NOPOS WinoGrande and ReCoRD apply filtering algorithms to remove cues or biases which would enable models to heuristically solve the tasks.",
"This indicates that by filtering out examples containing cues that make them solvable via higher order statistics, such filtering strategies do succeed at compelling models to (at least partially) rely on word order information.",
"Dependency Tree Probing Besides GLUE and PAWS, Sinha et al. (2021)'s analysis also includes several probing experiments, wherein they attempt to decode dependency tree structure from model representations.",
"They show, interestingly, that the SHUF",
".N4, SHUF",
".N3 and SHUF",
".N2 models perform only marginally worse than ORIG , with SHUF",
".N1 producing the lowest scores (lower, in fact, than SHUF .C ORPUS ).",
"Given the findings of Section 3, we are interested in taking a closer look at this phenomenon.",
"Here, we surmise that dependency length plays a crucial role in the probing setup, where permuted models may succeed on par with ORIG in capturing local, adjacent dependencies, but increasingly struggle to decode longer ones.",
"To evaluate the extent to which this is true, we train a bilinear probe (used in Hewitt and Liang (2019)) on top of all model representations and evaluate its accuracy across dependencies binned by length, where length between words w i and w j is defined as | i j | .",
"We opt for using the bilinear probe over the Pareto probing framework (Pimentel et al., 2020), as the former learns a transformation directly over model representations, while the latter adds the parent and child MLP units from Dozat et al. (2017) acting more like a parser.",
"We train probes on the English Web Treebank (Silveira et al., 2014) and evaluate using UAS, the standard parsing 6912 Model BoolQ CB COPA MultiRC ReCoRD WiC WSC WinoGrande ORIG 77.6 88.2 / 87.4 61.6 67.8 / 21.9 73.5 / 72.8 67.4 73.5 62.9 SHUF .",
"Figure 6 shows probing accuracy across various dependency lengths for NOPOS and SHUF",
".N1, with respect to ORIG 9 ; we include detailed s for all models in Appendix C. For NOPOS , parsing difficulty increases almost linearly with distance, often mimicking the actual frequency distribution of dependencies at these distances in the original treebank (Appendix C); for SHUF .",
"N 1, the picture is a lot more nuanced, with dependencies at a distance of 1 consistently being closer in terms of parseabil-ity to ORIG , which, we hypothesise, is due to its adjacency bias.",
"Random position embeddings are difficult to add post-training We tried to quantify the degree to which the inductive bias imparted by positional embeddings can be utilised, solely via fine-tuning.",
"To do so, for a subset of GLUE tasks (MNLI, QNLI, RTE, SST-2, CoLA), we evaluate NOPOS , and a variant where we randomly initialised learnable position embeddings and add them to the model, with the rest of the model equivalent to NOPOS .",
"We see no improvement in results, except for MNLI, that we hypothesise stems from position embeddings acting as some sort of regularisation parameter.",
"To test this, we repeat the above set of experiments, this time injecting Gaussian noise instead; this has been empirically shown to have a regularising effect on the network (Bishop, 1995; Camuto et al., 2021).",
"Adding Gaussian noise led to a slight increase in score for just MNLI, backing up our regularisation hypothesis.",
"Models learn to expect specific embeddings Replacing the positional embeddings in ORIG with fixed, sinusoidal embeddings before fine-tuning significantly hurts scores on the same subset of 9 Note that Layer 13 refers to a linear mix of all model layers, as is done for ELMo (Peters et al., 2018).",
"GLUE tasks, implying that the models expect embeddings that resemble the inductive bias imparted by random embeddings, and that fine-tuning tasks do not have sufficient data to overcome this.",
"The addition of fixed, sinusoidal to NOPOS also does not improve model performance on a similar subset of tasks; this implies, given that sinusoidal embeddings are already meaningful, that model weights also need to learn to fit the embeddings they are given, and that they need a substantial amount of data to do so.",
"In Humans It is generally accepted that a majority of languages have canonical or base' word orderings (Comrie, 1989) (e.g. Subject-Verb-Object in English, and Subject-Object-Verb in Hindi). Linguists consider word order to be a coding property mechanisms by which abstract, syntactic structure is encoded in the surface form of utterances. Beyond word order, other coding properties include, e.g. subject-verb agreement, morphological case marking, or function words such as adpositions. In English, word order is among the most prominent coding properties, playing a crucial role in the expression of the main verb's core arguments: subject and object. For more morphologically complex languages, on the other hand, (e.g. Finnish and Turkish), word order is primarily used to convey pragmatic information such as topicalisation or focus. In such cases, argument structure is often signalled via case-marking, where numerous orderings become possible (shift in topic or focus nonwithstanding). We refer the reader to Kulmizev and Nivre (2021) for a broader discussion of these topics and their implications when studying syntax through language models.",
"More generally, evidence for the saliency of word order in linguistic processing and comprehension comes from a variety of studies using acceptability judgements, eye-tracking data, and neu-6913",
"ral response measurements (Bever, 1970; Danks and Glucksberg, 1971; Just and Carpenter, 1980; Friederici et al., 2000, 2001; Bahlmann et al., 2007; Lerner et al., 2011; Pallier et al., 2011; Fedorenko et al., 2016; Ding et al., 2016). Psycholinguistic research has, however, also highlighted the robustness of sentence processing mechanisms to a variety of perturbations, including those which violate word order restrictions (Ferreira et al., 2002; Gibson et al., 2013; Traxler, 2014). In recent work, Mollica et al. (2020) tested the hypothesis that composition is the core function of the brain's language-selective network and that it can take place even when grammatical word order constrains are violated. Their findings confirmed this, showing that stimuli with shuffled word order where local dependencies were preserved as is, roughly speaking, the case for many dependencies in the sentences SHUF . N 4 is trained on elicited a neural response in the language network that is comparable to that elicited by normal sentences. When interword dependencies were disrupted so combinable words were so far apart that composition among nearby words was highly unlikely as in SHUF . N 1, neural response fell to a level compared to unconnected word lists.",
"In Machines Recently, many NLP researchers have attempted to investigate the role of word order information in language models. For example, Lin et al. (2019) employ diagnostic classifiers and attention analyses to demonstrate that lower (but not higher) layers of BERT encode word order information. Papadimitriou et al. (2021) find that Multilingual BERT is sensitive to morphosyntactic alignment, where numerous languages (out of 24 total) rely on word order to mark subjecthood (English among them). Alleman et al. (2021) implement an input perturbation framework (n-gram shuffling, phrase swaps, etc.), and employ it towards testing the sensitivity of BERT's representations to various types of structure in sentences. They report a sensitivity to larger constituent units of sentences in higher layers, which they deduce to be influenced by hierarchical phrase structure. O'Connor and Andreas (2021) examine the contribution of various contextual features to the ability of GPT-2 (Radford et al., 2019) to predict upcoming tokens. Their findings show that several destructive manipulations, including in-sentence word shuffling, applied to midand long range contexts lead only to a modest increase in usable information as defined according",
"to the V-information framework of Xu et al. (2020). Similarly, word order information has been found not to be essential for various NLU tasks and datasets. Early work showed that Natural Language Inference tasks are largely insensitive to permutations of word order (Parikh et al., 2016; Sinha et al., 2020). Pham et al. (2020) and Gupta et al. (2021) discuss this in greater detail, demonstrating that test-time word order perturbations applied to GLUE benchmark tasks have little impact on LM performance. Following up on this, Sinha et al. (2021), which our work builds on, found that pretraining on scrambled text appears to only marginally affect model performance. Most related to this study, Clouatre et al. (2021) introduce two metrics for gauging the local and global ordering of tokens in scrambled texts, observing that only the latter is altered by the perturbation functions found in prior literature. In experiments with GLUE, they find that local (sub-word) perturbations show a substantially stronger performance decay compared to global ones.",
"In this work, we present an in-depth analysis of these results, showing that LMs trained on scrambled text can actually retain word information and that as for humans their sensitivity to word order is dependent on a variety of factors such as the nature of the task and the locality of perturbation. While performance on some understanding evaluation tasks is not strongly affected by word order scrambling, the effect on others such as the Winograd Schema is far more evident.",
"Much discussion has resulted from recent work showing that scrambling text at different stages of testing or training does not drastically alter the performance of language models on NLU tasks.",
"In this work, we presented analyses painting a more nuanced picture of such findings.",
"Primarily, we demonstrate that, as far as altered pre-training is concerned, models still do retain a semblance of word order knowledge largely at the local level.",
"We show that this knowledge stems from cues in the altered data, such as adjacent BPE symbols and correlations between sentence length and content.",
"The order in which shuffling is performed before or after BPE tokenization is influential in models' acquisition of word order, which calls for caution in interpreting previous results.",
"Finally, we show that there exist NLU tasks that are far more 6914 sensitive to sentence structure as expressed by word order.",
"We thank Stephanie Brandl, Desmond Elliott, Yova Kementchedjhieva, Douwe Kiela and Miryam de Lhoneux for their feedback and comments.",
"We acknowledge the CSC-IT Centre for Science, Finland, for providing computational resources.",
"Vinit worked on this paper while on a research visit to the University of Copenhagen.",
"Mostafa and Anders are supported by a Google Focused Research Award."
] | [
"abstain",
"abstain",
"objective",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"method",
"objective",
"result",
"abstain",
"result",
"other",
"other",
"other",
"other"
] |
[
"Massively Multilingual Transformer based Language Models have been observed to be surprisingly effective on zero-shot transfer across languages, though the performance varies from language to language depending on the pivot",
"language(s) used for fine-tuning.",
"In this work, we build upon some of the existing techniques for predicting the zero-shot performance on a task, by modeling it as a multi-task learning problem.",
"We jointly train predictive models for different tasks which helps us build more accurate predictors for tasks where we have test data in very few languages to measure the actual performance of the model.",
"Our approach also lends us the ability to perform a much more robust feature selection, and identify a common set of features that influence zero-shot performance across a variety of tasks.",
"Multilingual models like mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020) have been recently shown to be surprisingly effective for zero-shot transfer (Pires et al., 2019) (Wu and Dredze, 2019), where on fine-tuning for a task on one or a few languages, called pivots , they can perform well on languages unseen during training.",
"The zero-shot performance however, is often not uniform across the languages and the multilingual models turn out to be much less effective for low resource languages (Wu and Dredze, 2020; Lauscher et al., 2020) and the languages that are typologically distant from the pivots (Lauscher et al., 2020).",
"What affects the zero-shot transfer across different languages is a subject of considerable interest and importance (K et al., 2020; Pires et al., 2019; Wu and Dredze, 2019; Lauscher et al., 2020), however there is little conclusive evidence and a few papers even show contradictory findings.",
"Lauscher et al. (2020) recently, showed that it is possible to predict the zero shot performance of Equal contribution mBERT and XLM-R on different languages by formulating it as a regression problem, with pretraining data size and typological similarities between the pivot and target languages as the input features, and the performance on downstream task as the prediction target.",
"Along similar lines Srinivasan et al. (2021) and Dolicki and Spanakis (2021) explore zero-shot performance prediction with a larger set of features and different regression techniques.",
"However, the efficacy of these solutions are severely limited by the lack of training data, that is, the number of languages for which performance metrics are available for a given task.",
"For instance, for most tasks in the popular XTREME-R (Ruder et al., 2021) benchmark, there are data points for 7-11 languages.",
"This not only makes zero-shot performance prediction a challenging problem, but also a very important one because for practical deployment of such multilingual models, one would ideally like to know its performance for all the languages the model is supposed to handle.",
"As Srinivasan et al. (2021) shows, accurate performance predictors can also help us build better and fairer multilingual models by suggesting data labeling strategies.",
"In this work, we propose multi-task learning (Zhang and Yang, 2017) as an approach to mitigate training-data constraints and consequent over-fitting of the performance predictors to tasks and/or datasets.",
"The contributions of our work are fourfold.",
"First, we experiment with different multitask learning approaches, such as Group Lasso (Yuan and Lin, 2006), Collective Matrix Factorization (Cortes, 2018), Multi-Task Deep Gaussian Process Regression (Bonilla et al., 2008) and Meta Agnostic Meta Learning (Finn et al., 2017) for 11 tasks.",
"We observe an overall 10% reduction in performance prediction errors compared to the best performing single-task models.",
"The gains are even stronger when we just consider the tasks with very few data points ( 10 ), where we see a 20% 5454 drop in the mean absolute errors.",
"Second, an interesting consequence of modelling this problem via multi-task learning is that we are able to predict performance on low resource languages much more accurately, where in some cases single-task approaches may perform even worse than the simple averaging baselines.",
"Third, apart from the features used for zero-shot performance prediction in the previous work (Lauscher et al., 2020; Srinivasan et al., 2021; Dolicki and Spanakis, 2021), we also utilize metrics quantifying the quality of multilingual tokenizers as proposed in (Rust et al., 2021) as features in our predictive models, which turn out to have strong predictive power for certain tasks.",
"To the best of our knowledge, our work is the first to explore the impact of tokenizer quality specifically on zero-shot transfer.",
"And fourth, our multi-task framework in general lends us with a much more robust selection of features affecting the zero-shot performance.",
"This, in turn, lets us investigate the critical open question on what influences the zero-shot performances across languages more rigorously.",
"As we shall see, our findings corroborate some of the previous conclusions, while others are extended or annulled.",
"Zero Shot Transfer.",
"Multilingual models like mBERT (Devlin et al., 2019) and XLM-R (Con-neau et al., 2020) have shown surprising effectiveness in zero-shot transfer, where fine-tuning the MMLM on a task in some source language often leads to impressive performance on the same task in other languages as well without explicitly training on them.",
"Pires et al. (2019) first observed this phenomenon for NER (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003; Levow, 2006) and POS tagging (Nivre et al., 2018) tasks.",
"Concurrently, Wu and Dredze (2019) also showed this surprisingly cross lingual transfer ability of mBERT additionally on tasks like Document Classification (Schwenk and Li, 2018), Natural Language Inference (Conneau et al., 2018) and Dependency Parsing (Nivre et al., 2018).",
"Factors Affecting Zero Shot Transfer.",
"Pires et al. (2019) showed that vocabulary memorization played little role in zero-shot generalization as language pairs with little word piece overlap also exhibited impressive crosslingual performance.",
"K et al. arrived at a similar conclusion by training BERT on an artificially generated language to zero out the word overlap with the target languages, and observed only minor drops in the performance compared to training the model on English.",
"On the contrary Wu and Dredze (2019), observed strong correlations between the sub-word overlap and the zero-shot performance in four out of five tasks.",
"Wu and Dredze (2020) showed that mBERT performed much worse for zero-shot transfer to low resource languages (i.e., less pre-training data) than high resource ones on POS Tagging, NER and Dependency Parsing tasks.",
"Lauscher et al. (2020) also had a similar observation on tasks like XNLI and XQuAD (Artetxe et al., 2020), though they found that the zero-shot performance on NER, POS tagging and Dependency Parsing tasks might not strictly depend on the pre-training size and could be better explained by different linguistic relatedness features like syntactic and phonological similarities between the language pair.",
"Similar dependence on the typological relatedness such as word order had also been observed by Pires et al. (2019).",
"Performance Prediction.",
"Prior work has explored predicting the performance of machine learning models from unlabelled data by either measuring (dis)agreements between multiple classifiers (Pla-tanios et al., 2014, 2017) or by utilizing underlying information about data distribution (Domhan et al., 2015).",
"In the context of NLP Birch et al. (2008) explored predicting the performance of a Machine Translation system by utilizing different explanatory variables for the language pairs.",
"Lin et al. (2019) proposed a learning to rank approach to choose transfer languages for cross lingual learning using several linguistic and dataset specific features.",
"Recently, there has been an interest in predicting the performance of NLP models without actually training or testing them, by formulating it as a regression problem.",
"Xia et al. (2020) showed that using experimental settings for an NLP experiment as inputs it is possible to accurately predict the performance on different languages and model architectures.Ye et al. (2021) extended this work by proposing methods to do a fine-grained estimation of the performance as well as predicting well-callibrated confidence intervals.",
"Specifically predicting the zero-shot performance of MMLMs was first explored in Lauscher et al. (2020), where they used a linear regression model to estimate the cross-lingual transfer performance based on pretraining data size and linguistic relatedness features.",
"Srinivasan et al. (2021) tackled this problem by utilizing XGBoost Regressor for the prediction along with a larger set of features.",
"Dolicki and Spanakis (2021) explored individual syntactic features for zero-shot performance prediction instead of working with aggregate similarity values, and showed about 2 to 4 times gain in performance.",
"We extend all of these works by considering a multi-task learning approach, where performance prediction in a task utilizes not only the data available for that task, but also the patterns observed for other tasks.",
"We begin by defining the multi-task performance prediction problem and then describe the different linguistic and MMLM specific features used.",
"Consider a pre-trained multilingual model M , trained using self supervision on a set of languages L .",
"Let T be the set of downstream NLP tasks, P be the set of pivot (source) languages for which training data is available for the downstream tasks for fine-tuning and T be the set of target languages for which validation/test data is available.",
"Note that P L and T L .",
"We use the zero-shot setting similar to Lauscher et al. (2020) which enforces P and T to be disjoint sets 1 , i.e., P T = .",
"We then define y M , t p,t R as the zero-shot performance on language t T on finetuning M on task t T in pivot language p P .",
"Let x M p,t R n be the n -dimensional feature vector representing the corresponding train-test configuration.",
"Since for our experiments we train and evaluate the performance prediction for a single model at a time, we will simplify the notations to y t p,t and x p,t .",
"The predictor model can then be defined as the function f , : R n T R , where R d g denotes the shared parameters across the tasks and the task specific parameters are given by R d s | T | .",
"The objective function for training such a predictor model can be defined as: J ( , ) = (cid:88) t T (cid:88) p P (cid:88) t T f ( x p,t , t ; , ) y t p,t 22 + g 1 + s 1 , 1 + group 1 ,q (1) 1 Though beyond the scope of the current work, it is possible to extend this to a few-shot setting as discussed in Srinivasan et al. (2021).",
"The second and third terms regularize the global and task specific parameters independently, while the last term, l 1 /l q norm with q > 1 , ensures a block sparse selection of the task specific parameters.",
"This term ensures a multi-task learning behavior even when there are no parameters shared across the tasks (i.e., = ) through selection of common features across the tasks.",
"Setting = and group = 0 leads to the single task setup of Lauscher et al. (2020) and Srinivasan et al. (2021).",
"We divide the set of features into two higher level categories, viz. the pairwise features defined for the pivot and target that measure the typological relatedness of the languages, and the individual features defined for the target language reflecting the state of its representation in M .",
"Instead of directly using the different typological properties of the the two languages as features, we use the pairwise relatedness to avoid feature explosion.",
"Subword Overlap : We define the subword overlap as the percentage of unique tokens that are common to the vocabularies of both the pivot and target languages.",
"Let V p and V t be the subword vocabularies of p and t .",
"The subword overlap is then defined as : o sw ( p, t ) = | V p V t | | V p V t | (2) Similarity between Lang2Vec vectors : Following Lin et al. (2019) and Lauscher et al. (2020), we compute the typological relatedness between p and t from the linguistic features provided by the URIEL project (Littell et al., 2017).",
"We use syntactic ( s syn ( p, t ) ), phonological similarity ( s pho ( p, t ) ), genetic similarity ( s gen ( p, t ) ) and geographic distance ( d geo ( p, t ) ).",
"For details, please see Littell et al. (2017).",
"Pre-training Size : We use the log 10 of the size (in words) of the pre-training corpus in the target language, SIZE ( t ) , as a feature.",
"Rare Typological Traits : Srinivasan et al. (2021) proposed this metric to capture the rarity of the typological features of a language in the representation of M .",
"Every typological feature in WALS 5456 database is ranked based on the amount of pretraining data for the languages that contain the feature.",
"For the language t , Mean Reciprocal Rank (MRR) of all of its features is then calculated and used as a feature WMRR ( t ) .",
"Tokenizer Features : In their recent work, Rust et al. (2021) proposed two metrics, viz. tokenizer's fertility and proportion of continued words, to evaluate the quality of multilingual tokenizers on a given language.",
"For target t , they define the tokenizer's fertility, FERT ( t ) , as the average number of sub-words produced for every tokenized word in t 's corpus.",
"On the other hand, the proportion of continued words, PCW ( t ) , measures how often the tokenizer chooses to continue a word across at least two tokens.",
"They show that the multilingual models perform much worse on a task than their monolingual counterparts when the values of these metrics are higher for the multilingual tokenizer.",
"We include FERT ( t ) and PCW ( t ) as features.",
"An important thing to note here is that the we do not use identity of a language as a feature while training the models, hence the performance prediction models are capable of generating predictions on new languages unseen during training.",
"However, if the features of the new languages deviate significantly from the features seen during training, the predictions are expected to be less accurate as also observed in Xia et al. (2020); Srinivasan et al. (2021) and is one of the main reasons for exploring a multi-task approach.",
"We extensively experiment with a wide-array of multi-task as well as single-task regression models to provide a fair comparison between different approaches to zero-shot performance prediction.",
"Average Score Within a Task (AWT) : The performance for a pivot-target pair ( p , t ) on a task t is approximated by taking the average of the performance on all other target languages (pivot being fixed) in the same task t , i.e., f ( x p,t , t ) = 1 |T | 1 (cid:80) t T { t } y t p,t .",
"Average Score across the Tasks (AAT) : Here instead of averaging over all the target languages within a task, we approximate the performance on a given target language by averaging the scores for that language across the other tasks, i.e., f ( x p,t , t ) = 1 | T | 1 (cid:80) t T { t } y t p,t .",
"Lasso Regression : Lauscher et al. (2020) train different linear regression models for each task.",
"Along similar lines, we experiment with linear regression, but also add an L1 regularization term, as we observed it usually leads to better predictors.",
"XGBoost Regressor : As shown in Srinivasan et al. (2021), XGBoost (Chen and Guestrin, 2016) generally obtains impressive performance on this task, and hence we include it in our experiments as well.",
"Group Lasso : l 1 /l q norm based block-regularization has been shown to be effective for multi-task learning in the setting of multi-linear regression (Yuan and Lin, 2006; Argyriou et al., 2008).",
"For each task, consider separate linear regression models represented by the weight matrix R n | T | .",
"The l 1 /l q regularization term is given as: 1 ,q = (cid:80) nj =1 ( (cid:80) | T | t =1 | j t | q ) 1 /q , where j t denotes the weight for the feature j in the task t .",
"For q > 1 , minimizing this term pushes the l q -norms corresponding to the weights of a given feature across the tasks to be sparse, which encourages multiple predictors to share similar sparsity patterns.",
"In other words, a common set of features is selected for all the tasks.",
"We use q = 2 for the group regularization term.",
"Since this can be restrictive in certain scenarios, some natural extensions to Group Lasso, such as Dirty Models (Jalali et al., 2010) and Multi Level Lasso (Lozano and Swirszcz, 2012), have been proposed that separate out the task specific and global parameters.",
"We experimented with these methods and observed equivalent or worse performance compared to Group Lasso.",
"Collective Matrix Factorization (CMF) with Side Information : Low rank approximation for the task weights matrices forms one family of methods for multi-task learning (Zhang and Yang, 2017; Pong et al., 2010; Ando et al., 2005).",
"As a direct analogue with collaborative filtering, here we can think of the tasks as users and pivot-target pairs as items .",
"Consider the matrix Y R | T ||PT | , where each element of the matrix correspond to y t p,t .",
"We can then decompose the matrix into task and language-pair specific factors as Y TLT (3) 5457 where T R | T | d latent and L R |PT | d latent are the task and language-pair factor matrices, and d latent is the number of factors.",
"Additionally, in order to incorporate the feature information about the language pairs as discussed in section 3.2, we incorporate Collective Matrix Factorization approach (Cortes, 2018).",
"It incorporates the attribute information about items and/or users in the factorization algorithm by decomposing the language-pair feature matrix X R |PT | n as LFT , such that L is shared across both decompositions.",
"This helps to learn the latent representations for the pivot-language pairs from the task-wise performance as well as different linguistic and MMLM specific features 2 .",
"In relation to Equation 1, we can think of task factors T to correspond to the task specific parameters , language-pair factors L as the shared parameters and the predictor model as f ( x p,t , t ; , ) = ( TLT ) ( p,t ) , t .",
"Both L and T are regularized seperately, but there is no group regularization term ( group = 0 ).",
"Ye et al. (2021) also uses a Tensor Factorization approach for performance prediction which is similar to our CMF method.",
"However, they train separate models for each task and factorize over metric specific attributes instead for a fine-grained prediction.",
"Multi-Task Deep Gaussian Process Regression (MDGPR) : We use the multi-task variant of Gaussian Processes proposed in Bonilla et al. (2008) and utilize deep neural networks to define the kernel functions as in Deep GPs (Wilson et al., 2016).",
"For comparison, we also report the scores of the single-task variant of this method which we denote as DGPR.",
"See Appendix (section A.1) for details.",
"Apart from these we also explore other multitask methods like Model Agnostic Meta Learning (MAML) (Finn et al., 2017), details of which we leave in the appendix (section A.1).",
"In this section, we discuss our test conditions, datasets and training parameters for the different experiments.",
"We consider two different test conditions: Leave One Language Out (LOLO) and Leave Low Resource Languages Out (LLRO).",
"2 Note that we can use a similar approach for providing side information for the tasks as well.",
"Leave One Language Out : LOLO is a popular setup for multilingual performance prediction (Lauscher et al., 2020; Srinivasan et al., 2021), where for a given task, we choose a target language and move all of its instances from the prediction dataset to the test data.",
"The models are then trained on the remaining languages and evaluated on the unseen test language.",
"This is done for all the target languages available for a task, and the Mean Absolute Error (MAE) across languages is reported.",
"In the multi-task setting we evaluate on one task at a time while considering the rest as helper tasks for which the entire data is used including the test language 3 .",
"Leave Low Resource Languages Out : Through this evaluation strategy we try to emulate the real world use case where we only have test data available in high resource languages such as English, German and Chinese, and would like to estimate the performance on under-represented languages such as Swahili and Bengali.",
"We use the language taxonomy provided by Joshi et al. (2020) to categorize the languages into six classes (0 = low to 5 = high) based on the number of resources available.",
"We then move languages belonging to class 3 or below to our test set and train the models on class 4 and 5 languages only.",
"Similar to LOLO, here too we allow the helper tasks to retain all the languages.",
"We use the following 11 tasks provided in XTREME (Hu et al., 2020) and XTREME-R (Ruder et al., 2021) benchmarks: 1. Classification : XNLI (Conneau et al., 2018) , PAWS-X (Yang et al., 2019), and XCOPA (Ponti et al., 2020) 2. Structure Prediction : UDPOS (Nivre et al., 2018), and NER (Pan et al., 2017) 3. Question Answering : XQUAD (Artetxe et al., 2020), MLQA (Lewis et al., 2020), and TyDiQA-GoldP (Clark et al., 2020) 4. Retrieval : Tatoeba (Artetxe and Schwenk, 2019), Mewsli-X (Botha et al., 2020; Ruder et al., 2021), and LAReQA (Roy et al., 2020) All of these datasets have training data present",
"3 Note that this is a reasonable relaxation to make as it is closer to the real world use case where we would have the evaluation data for some languages in the standard tasks and would like to utilize that to make predictions on the same languages for the new ftask.",
"For training XGBoost, we used 100 estimators with a maximum depth of 10.",
"For Group Lasso, we used the implementation provided in the MuTaR software package 4 , and used a regularization strength of 0 .",
"01 .",
"We optimized CMF's objective function using Alternating Least Squares (ALS), used 5 latent factors with a regularization parameter equal to 0 .",
"1 , and used the Collective Matrix Factorization python library 5 .",
"In case of MDGPR, we used Radial Basis Function as the kernel and a two-layer MLP for learning latent features, with 50 and 10 units followed by ReLU activation.",
"We set the learning rate and epochs as 0.01 and 200, and implemented it using GPyTorch 6 .",
"Table 1 shows MAE (in %) for LOLO for different single-task and multi-task models on the tasks.",
"For XLMR, we observe that multi-task models, primarily MDGPR, often outperform the best single-task models by significant margins, and for tasks like MewsliX we even see about 36% reduction in MAE.",
"Overall, we see about 10% drop in LOLO errors on average for MDGPR compared to the best performing single-task model i.e. Lasso Regression.",
"As expected, the benefit of multi-task 4 https://github.com/hichamjanati/mutar 5 https://github.com/david-cortes/ cmfrec 6 https://gpytorch.ai/ learning is even more prominent when we consider the tasks for which only a few ( 10 ) data points are available.",
"Here we see about 20% reduction in errors.",
"For mBERT as well, we have similar observations, except that CMF performs slightly better than MDGPR.",
"Note that the Average across task baseline is quite competitive and performs better than single-task XGBoost and MAML in average, and better than all models for LAReQA.",
"Figure 2 plots the dependence of the number of helper tasks on the performance of the multi-task models.",
"As expected, MAE decreases as helper tasks increase, especially for MDGPR and CMF.",
"On a related note, the Pearson Correlation coefficient between MAE and number of tasks a target language is part of is found to be 0 .",
"39 , though the trend in this case is not as clear.",
"Predicting the performance on low resource languages, for which often standard training and test datasets are not available, can be an important use case where multi-task performance prediction can be helpful.",
"Figure 6 in appendix shows the classwise (Joshi et al., 2020) distribution of languages for the tasks that we consider in our experiments.",
"As one would expect, for most tasks, test data is available for languages belonging to class-4 and class-5.",
"Training performance prediction models without any task to transfer from can therefore, possibly lead to poor generalization on the low resource languages.",
"On the other hand, for the same reason lack of test data, building accurate predictors for low-resource languages is necessary.",
"MAE values for the LLRO evaluation setup are shown in figure 1 for XLMR.",
"Results for mBERT follow similar trends and are reported in the Appendix (figure 7).",
"For both XLMR and mBERT we observe that the three main multi-task models Group Lasso, CMF and MDGPR outperform the single-task models and baselines.",
"Interestingly, for XLMR, the single task models XGBoost and Lasso perform even worse than the Average within Tasks baseline.",
"Overall we see around 18% and 11% drop in MAE for Group Lasso over the best performing single-task model, for XLMR and mBERT respectively.",
"An interesting consequence of zero-shot performance prediction is that the models can be directly used to infer the correlation (and possibly causation) between linguistic relatedness and pretraining conditions and zero-shot transferability.",
"Multi-task learning, in this context, help us make more robust inferences, as the models are less prone to overfitting to a particular task or dataset.",
"Figure 3 shows the SHAP values of the features for the Group Lasso model trained on XLMR's zero-shot performance data.",
"As expected for Group Lasso, we see a block-sparsity behavior among the tasks.",
"Features such as Rare Typological Traits",
"(WMRR(t)), Tokenizer's Fertility (FERT ( t ) ) and Genetic Similarity ( s gen ( p, t ) ) are ignored in all the tasks.",
"In contrast, for the single-task lasso regression (Figure 9 in Appendix), we see different sets of features selected for different tasks, which for the scale at which we operate, might not be indicative of the actual factors that affect the zero-shot performance in these tasks.",
"Subword Overlap.",
"Among the features that get selected for all tasks, we observe that Subword Overlap ( o sw ( p, t ) ) typically gets higher importance in XGB o o s t MAMLDGPRL a ss o AWTAATCMFG r o up L a ss o MDGPR 6 8 10 12 14 A v e r a g e LLROE rr o r 11.86 10.72 10.34 10.16 9.66 9.59 9.30 9.03 8.28 XLMR Figure 1: Leave Low Resource Out (LLRO) results for XLMR 1 2 3 4 5 6 7 8 9 10 Number of Helper Tasks 0.40.50.60.70.80.91.0 N o r m a li z e d LOLOE rr o r XLMR Model Group Lasso CMF MDGPR Figure 2: Number of helper tasks vs. LOLO MAE.",
"retrieval (LAReQA and MewsliX) and sentence classification tasks (PAWS-X, XNLI).",
"Since the retrieval tasks that we consider, as described in Ruder et al. (2021), measure the alignment between the cross lingual representations of semantically similar sentences, having a shared vocabulary between the languages can leak information from one to an-other (Wu and Dredze, 2019) which might improve the retrieval performance.",
"Interestingly, if we compare this with the feature importance scores for the single task lasso model (Figure 9 in Appendix), we do see MewsliX task getting higher importance for the subword overlap, but LAReQA gets virtually zero SHAP value for this feature, showcasing how single-task models can misinterpret two similar tasks as requiring very different features.",
"Our observation reinforce the generally held notion that vocabulary overlap between the pivot and target is beneficial for zero-shot transfer (Wu and Dredze, 2019), especially for retrieval tasks, though some studies have argued otherwise (Pires et al., 2019; K et al., 2020).",
"(XQUAD and TyDiQA) tasks that require making predictions for each token in the input, we see that the tokenizer feature, PCW ( t ) , receive a higher SHAP value.",
"In contrast, for single-task lasso, here too we do not observe high importance of this feature across these related tasks.",
"Rust et al. (2021) note that languages such as Arabic where mBERT's multilingual tokenizer was found to be much worse than it's monolingual counterpart, there was a sharper drop in performance of mBERT compared to the monolingual model for QA, UDPOS and NER tasks than for sentiment classification.",
"We believe that XLMR's surprisingly worse performance than mBERT for Chinese and Japanese on UDPOS might be correlated with it's significantly worse tokenizer for these languages based on the fertility (FERT) and Percentage Continued Words (PCW) feature values (see Appendix A.2 for exact values).",
"The high SHAP values for PCW ( t ) further strengthen our belief 7 .",
"Pre-training Size.",
"Similar to the findings of Lauscher et al. (2020), we observe that pre-training corpus size has low SHAP value, and therefore, lower importance for lower level tasks such as UDPOS and NER, and higher SHAP values for higher level tasks like XNLI.",
"Additionally, we extend their observations to tasks such as XCOPA, Tatoeba, MLQA and LAReQA where pre-training size seem to play a significant role in the performance prediction.",
"Again, compared to single Lasso Regression model, we see a different selection pattern: Pre-training size receives a high SHAP value for UDPOS while for XNLI it is negligible.",
"This neither fully conforms with our observations on the multi-task feature selections, nor with the previous work (Lauscher et al., 2020).",
"Typological Relatedness Features.",
"Out of all the typological relatedness features, we found Geographical Distance ( d geo ( p, t ) ) receiving highest SHAP values for all tasks, implying that geographical proximity between the pivot-target pair is an important factor in determining the zero-shot transferability between them.",
"Lauscher et al. (2020) also observe positive correlations between geographical relatedness and zero-shot performance.",
"The cross-task importance of geographic distance (unlike the other relatedness features) might be attributed to the 100% coverage across languages for the geo-7 Note that Rust et al. (2021) shows the importance of tokenizer metrics for the case where the multilingual models are fine-tuned on the target language, whereas we analyze their importance for zero-shot transfer.",
"graphical vectors in the URIEL database.",
"In contrast, Syntactic and Phonological vectors have missing values for a majority of the languages (Littell et al., 2017).",
"Like Lauscher et al. (2020), we also see some dependence on syntactic ( s syn ( p, t ) ) and phonological ( s pho ( p, t ) ) similarities for XLMR's zero shot performance on XNLI and XQUAD tasks respectively.",
"However, in both cases we found that the tokenizer feature PCW ( t ) receives a much higher SHAP value.",
"Interestingly, genetic similarity ( s gen ( p, t ) ) is not selected for any task, arguably due to the block sparsity in feature selection of Group Lasso.",
"We do see some tasks receiving high SHAP values for s gen ( p, t ) in single-task lasso (Figure 9 in Appendix).",
"However, the number of such tasks as well as the SHAP values are on the lower side, implying that genetic similarity might not provide any additional information for zero-shot transfer over and above the geographical, syntactic and phonological similarities.",
"Similar trends are observed in the case of mBERT as well (Figure 10 in appendix), with some minor differences.",
"For instance, instead of PCW ( t ) , FERT ( t ) receives higher SHAP value; s syn ( p, t ) also receives higher importance, especially for tasks like UDPOS and XNLI, which is consistent with the findings of Lauscher et al. (2020).",
"In this paper, we showed that the zero-shot performance prediction problem can be much more effectively and robustly solved by using multi-task learning approaches.",
"We see significant reduction in errors compared to the baselines and single-task models, specifically for the tasks which have test sets available in a very few languages or when trying to predict the performance for low resource languages.",
"Additionally, this approach allows us to robustly identify factors that influence zero-shot performance.",
"Our findings in this context can be summarized as follows.",
"1. Subword overlap between the pivot and target has a strong positive influence on zero-shot transfer, especially for Retrieval tasks.",
"2. Quality of the target tokenizer , defined in terms of how often or how aggressively it splits the target tokens negatively influences zero-shot performance for word-level tasks such as POS tagging and Span extraction.",
"3. Pre-training size of the target positively 5461 influences zero-shot performance in many tasks, including XCOPA, Tatoeba, MLQA and LAReQA.",
"4. Geographical proximity between pivot and target is found to be uniformly important across all the tasks, unlike syntactic and phonological similarities, which are important for only some tasks.",
"This last finding is especially interesting.",
"As described earlier, geographical proximity is a more clear, noise-free and complete feature compared to the other relatedness metrics.",
"However, one could also argue that since neighboring languages tend to have high vocabulary and typological feature overlap due to contact processes and shared areal features, geographical distance is an extremely informative feature for zero-shot transfer.",
"Two direct implications of these findings are: (1) for effective use of MMLMs, one should develop resources in at least one pivot language per geographic regions, and (2) one should work towards multilingual tokenizers that are effective for most languages.",
"There are a number of directions that can be explored in future related to our work.",
"The prediction models can be extended to a multi-pivot and few-shot settings, as described in Srinivasan et al. (2021).",
"Further probing experiments could be designed to understand the role of sub-word overlap on zero-shot transfer of Retrieval tasks.",
"We would like to thank the LITMUS team at Microsoft for their valuable inputs and feedback over the course of this project."
] | [
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"result",
"result",
"method",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"result",
"result",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other"
] |
[
"We introduce a novel top-down end-to-end formulation of document level discourse parsing in the Rhetorical Structure Theory (RST) framework.",
"In this formulation, we consider discourse parsing as a sequence of splitting decisions at token boundaries and use a seq2seq network to model the splitting decisions.",
"Our framework facilitates discourse parsing from scratch without requiring discourse segmentation as a prerequisite; rather, it yields segmentation as part of the parsing process.",
"Our unified parsing model adopts a beam search to decode the best tree structure by searching through a space of high scoring trees.",
"With extensive experiments on the standard English RST discourse treebank, we demonstrate that our parser outperforms existing methods by a good margin in both end-to-end parsing and parsing with gold segmentation.",
"More importantly, it does so without using any handcrafted features, making it faster and easily adaptable to new languages and domains.",
"In a document, the clauses, sentences and paragraphs are logically connected together to form a coherent discourse.",
"The goal of discourse parsing is to uncover this underlying coherence structure, which has been shown to benefit numerous NLP applications including text classification (Ji and Smith, 2017), summarization (Gerani et al., 2014), sentiment analysis (Bhatia et al., 2015), machine translation evaluation (Joty et al., 2017) and conversational machine reading (Gao et al., 2020).",
"Rhetorical Structure Theory or RST (Mann and Thompson, 1988), one of the most influential theories of discourse, postulates a hierarchical discourse structure called discourse tree (DT).",
"The leaves of a DT are clause-like units, known as elementary discourse units (EDUs).",
"Adjacent EDUs and higher-order spans are connected hierarchically through coherence relations ( e.g., Contrast, Explanation ).",
"Spans connected through a relation are categorized based on their relative importance nucleus being the main part, with satellite being the subordinate one.",
"Fig. 1 exemplifies a DT spanning over two sentences and six EDUs.",
"Finding discourse structure generally requires breaking the text into EDUs (discourse segmentation) and linking the EDUs into a DT (discourse parsing).",
"Discourse parsers can be singled out by whether they apply a bottom-up or top-down procedure.",
"Bottom-up parsers include transition-based models (Feng and Hirst, 2014; Ji and Eisenstein, 2014; Braud et al., 2017; Wang et al., 2017) or globally optimized chart parsing models (Soricut and Marcu, 2003; Joty et al., 2013, 2015).",
"The former constructs a DT by a sequence of shift and reduce decisions, and can parse a text in asymptotic running time that is linear in number of EDUs.",
"However, the transition-based parsers make greedy local decisions at each decoding step, which could propagate errors into future steps.",
"In contrast, chart parsers learn scoring functions for sub-trees and adopt a CKY-like algorithm to search for the highest scoring tree.",
"These methods normally have higher accuracy but suffer from a slow parsing speed with a complexity of O ( n 3 ) for n EDUs.",
"The top-down parsers are relatively new in discourse (Lin et al., 2019; Zhang et al., 2020; Kobayashi et al., 2020).",
"These methods focus on finding splitting points in each iteration to build a DT.",
"However, the local decisions could still affect the performance as most of the methods are still greedy.",
"Like most other fields in NLP, language parsing has also undergone a major paradigm shift from traditional feature-based statistical parsing to end-to-end neural parsing.",
"Being able to parse a document end-to-end from scratch is appealing for several key reasons.",
"First, it makes the overall development procedure easily adaptable to new languages, domains and tasks by surpassing the expensive feature engineering step that often requires more time and domain/language expertise.",
"Second, the lack of an explicit feature extraction phase makes the training and testing (decoding) faster.",
"Because of the task complexity, it is only recently that neural approaches have started to outperform traditional feature-rich methods.",
"However, successful document level neural parsers still rely heavily on handcrafted features (Ji and Eisenstein, 2014; Yu et al., 2018; Zhang et al., 2020; Kobayashi et al., 2020).",
"Therefore, even though these methods adopt a neural framework, they are not end-to-end and do not enjoy the above mentioned benefits of an end-to-end neural parser.",
"Moreover, in existing methods (both traditional and neural), discourse segmentation is detached from parsing and treated as a prerequisite step.",
"Therefore, the errors in segmentation affect the overall parsing performance (Soricut and Marcu, 2003; Joty et al., 2012).",
"In view of the limitations of existing approaches, in this work we propose an end-to-end top-down document level parsing model that: Can generate a discourse tree from scratch without requiring discourse segmentation as a prerequisite step; rather, it generates the EDUs as a by-product of parsing.",
"Crucially, this novel formulation facilitates solving the two tasks in a single neural model.",
"Our formulation is generic and works in the same way when it is provided with the EDU segmentation.",
"Treats discourse parsing as a sequence of splitting decisions at token boundaries and uses a seq2seq pointer network (Vinyals et al., 2015) to model the splitting decisions at each decoding step.",
"Importantly, our seq2seq parsing model can adopt beam search to widen the search space for the highest scoring tree, which to our knowledge is also novel for the parsing problem.",
"Does not rely on any handcrafted features, which makes it faster to train or test, and easily adaptable to other domains and languages.",
"Achieves the state of the art (SoTA) with an F 1 score of 46.6 in the Full (label+structure) metric for end-to-end parsing on the English RST Discourse Treebank, which outperforms many parsers that use gold EDU segmentation.",
"With gold segmentation, our model achieves a SoTA F 1 score of 50.2 (Full), outperforming the best existing system by 2.1 absolute points.",
"More im-porantly, it does so without using any handcrafted features (not even part-of-speech tags).",
"We make our code available at https://ntunlpsg.github.io/project/rst-parser 2 Model Assuming that a document has already been segmented into EDUs, following the traditional approach, the corresponding discourse tree (DT) can be represented as a set of labeled constituents.",
"where m = | C | is the number of internal nodes in the tree and r t is the relation label between the discourse unit containing EDUs i t through k t and the one containing EDUs k t + 1 through j t .",
"Traditionally, in RST parsing, discourse segmentation is performed first to obtain the sequence of EDUs, which is followed by the parsing process to assemble the EDUs into a labeled tree.",
"In other words, traditionally discourse segmentation and parsing have been considered as two distinct tasks that are solved by two different models.",
"On the contrary, in this work we take a radically different approach that directly starts with parsing the (unsegmented) document in a top-down manner and treats discourse segmentation as a special case of parsing that we get as a by-product.",
"Importantly, this novel formulation of the problem allows us to solve the two problems in a single neural model.",
"Our parsing model is generic and also works in the same way when it is fed with an EDU-segmented text.",
"Before presenting the model architecture, we first formulate the problem as a splitting decision problem at the token level.",
"We reformulate the discourse parsing problem from Eq.",
"(1) as a sequence of splitting decisions at token boundaries (instead of EDUs).",
"Specifically, the input text is first prepended and appended with the special start ( < sod > ) and end ( < eod > ) tokens, respectively.",
"We define the token-boundary as the indexed position between two consecutive tokens.",
"For example, the constituent spanning But he added : in Fig. 2 is defined as (0 , 4) .",
"Following the standard practice, we convert the discourse tree by transforming each multi-nuclear constituent into a hierarchical right-branching binary sub-tree.",
"Every internal node in the resulting binary tree will have a left and a right constituent, allowing us to represent it by its split into the left and right children.",
"Based on this, we define the Boundary-based splitting representation when EDUs are provided S edu = { (0 , 44) (cid:41) 4 , (4 , 44) (cid:41) 25 , (4 , 25) (cid:41) 17 , (25 , 44) (cid:41) 37 , (25 , 37) (cid:41) 33 } Boundary-based splitting representation for end-to-end parsing S = { (0 , 44) (cid:41) 4 , ( 0 , 4 ) (cid:41) 4 , (4 , 44) (cid:41) 25 , (4 , 25) (cid:41) 17 , ( 4 , 17 ) (cid:41) 17 , ( 17 , 25 ) (cid:41) 25 , (25 , 44) (cid:41) 37 , (25 , 37) (cid:41) 33 , ( 25 , 33 ) (cid:41) 33 , ( 33 , 37 ) (cid:41) 37 , ( 37 , 44 ) (cid:41) 44 } Figure 1: A discourse tree for two sentences in the RST discourse treebank.",
"Proposition 1 Given a binarized discourse tree for a document containing n tokens, the tree can be converted into a set of token-boundary splitting decisions S = { ( i, j ) (cid:41) k | i < k j } such that the parent constituent ( i, j ) either gets split into two child constituents ( i, k ) and ( k, j ) for k < j , or forms a terminal EDU unit for k = j , i.e., the span will not be split further ( i.e., marks segmentation).",
"Notice that S is a generalized formulation of RST parsing, which also includes the decoding of EDUs as a special case ( k = j ).",
"It is quite straightforward to change this formulation to the parsing scenario, where discourse segmentation (sequence of EDUs) is provided.",
"Formally, in that case, the tree can be converted into a set of splitting decisions S edu = { ( i, j ) (cid:41) k | i < k < j } such that the constituent ( i, j ) gets split into two constituents ( i, k ) and ( k, j ) for k < j , i.e., we simply omit the special case of k = j as the EDUs are given.",
"In other words, in our generalized formulation, discourse segmentation is just one extra step of parsing, and can be done top-down end-to-end.",
"An example of our formalism of the parsing problem is shown in Fig. 1 for a discourse tree spanning over two sentences (44 tokens); for simplicity, we do not show the relation labels corresponding to the splitting decisions (marked by (cid:41) ).",
"Since each splitting decision corresponds to one and only one internal node in the tree, it guarantees that the transformation from the tree to S (and S edu ) has a one-to-one mapping.",
"Therefore, predicting the sequence of such splitting decisions is equivalent to predicting the discourse tree (DT).",
"Seq2Seq Parsing Model.",
"In this work, we adopt a structure-then-label framework.",
"Specifically, we factorize the probability of a DT into the probability of the tree structure and the probability of the relations ( i.e., the node labels) as follows: P ( DT | x ) = P ( S , L | x ) = P ( L | S , x ) P ( S | x ) (2) where x is the input document, and S and L respectively denote the structure and labels of the DT.",
"This formulation allows us to first infer the best tree structure ( e.g., using beam search), and then find the corresponding labels.",
"As discussed, we consider the structure prediction problem as a sequence of splitting decisions to generate the tree in a top-down manner.",
"We use a seq2seq pointer network (Vinyals et al., 2015) to model the sequence of splitting decisions (Fig. 3).",
"We adopt a depth-first order of the decision sequence, which showed more consistent Figure 3: Our discourse parser along with a few decoding steps for a given document.",
"performance in our preliminary experiments than other alternatives, such as breath-first order.",
"First, we encode the tokens in a document x = ( x 0 , . . . , x n ) with a document encoder and get the token-boundary representations ( h 0 , . . . , h n ).",
"Then, at each decoding step t , the model takes as input an internal node ( i t , j t ) , and produces an output y t (by pointing to the token boundaries) that represents the splitting decision ( i t , j t ) (cid:41) k t to split it into two child constituents ( i t , k t ) and ( k t , j t ) .",
"For example, the initial span (0 , 44) in Fig. 1 is split at boundary position 4 , yielding two child spans (0 , 4) and (4 , 44) .",
"If the span (0 , 4) is given as an EDU ( i.e., segmentation given), the splitting stops at (0 , 4) , thus omitted in S edu (Fig. 1).",
"Otherwise, an extra decision (0 , 4) (cid:41) 4 S needs to be made to mark the EDUs for end-to-end parsing.",
"With this, the probability of S can be expressed as: P ( S | x ) = (cid:89) y t SP ( y t | y <t , x ) = | S | (cid:89) t =1 P (cid:16) ( i t , j t ) (cid:41) k t | (( i, j ) (cid:41) k ) <t , x (cid:17) This end-to-end conditional splitting formulation is the main novelty of our method and is in contrast to previous approaches which rely on offline-inferred EDUs from a separate discourse segmenter.",
"Our formalism streamlines the overall parsing process, unifies the neural components seamlessly and smoothens the training process.",
"In the following, we describe the components of our parsing model: the document encoder, the",
"Document Encoder.",
"Given an input document of n words x = ( x 1 , . . . , x n ) , we first add < sod > and < eod > markers to the sequence.",
"After that, each token x i in the sequence is mapped into its dense vector representation e i as: e i = [ e char i , e word i ] , where e char i , and e word i are respectively the character and word embeddings of token x i .",
"For word embedding, we experiment with ( i ) randomly initialized, ( ii ) pretrained static embeddings , e.g., GloVe (Pennington et al., 2014)).",
"To represent the character embedding of a token, we apply a character bidirectional LSTM i.e., Bi-LSTM (Hochreiter and Schmidhuber, 1997) or pretrained contextualized embeddings, e.g., XLNet (Yang et al., 2019).",
"The token representations are then passed to a sequence encoder of a three-layer Bi-LSTM to obtain their forward f i and backward b i contextual representations.",
"Token-boundary Span Representations.",
"To represent each token-boundary position k between token positions k and k + 1 , we use the fencepost representation (Cross and Huang, 2016): h k = [ f k ; b k +1 ] (3) where f k and b k +1 are the forward and backward LSTM hidden vectors of positions k and k + 1 respectively, and [ ; ] is the concatenation operation.",
"Then, to represent the token-boundary span ( i, j ) , we use the linear combination of the two endpoints i and j as: h i,j = W 1 h i + W 2 h j (4) Figure 4: Illustration of token-boundary span encoder.",
"where W 1 and W 2 are trainable weights.",
"These span representations will be used as input to the decoder or the label classifier.",
"Fig. 4 illustrates an example boundary span representation.",
"The Decoder.",
"Our model uses a unidirectional LSTM as the decoder.",
"At each decoding step t , the decoder takes as input the corresponding span ( i, j ) ( i.e., h i,j ) and its previous LSTM state d t 1 to generate the current state d t and then the biaffine function (Dozat and Manning, 2017) is applied between d t and all the encoded token-boundary representations ( h 0 , h 1 , . . . , h n ) as follows: d (cid:48) t = MLP d ( d t ) h (cid:48) i = MLP h ( h i ) (5) s it = d (cid:48) tT W dh h (cid:48) i + h (cid:48) iT w h (6) a it = exp( s it ) (cid:80) ni =0 exp( s it ) for i = 0 , . . . , n (7) where each MLP operation comprises a linear transformation with LeakyReLU activation (Maas et al., 2013) to transform d i and h i into equal-sized vectors d (cid:48) t , h (cid:48) i IR d , and W dh IR d d and w h IR d are respectively the weight matrix and weight vector for the biaffine function.",
"The resulting biaffine scores s i t are then fed into a softmax layer to acquire the pointing distribution a it [0 , 1] n +1 for the splitting decision.",
"During inference, when decoding the tree at step t , we only examine the valid splitting points between i and j , and we look for k such that i < k j .",
"Label Classifier.",
"We perform label assignment after decoding the entire tree structure.",
"Each assignment takes into account the splitting decision that generated it since the label represents the relation between the child spans.",
"Specifically, for a constituent ( i, j ) that was split into two child constituents ( i, k ) and ( k, j ) , we determine the coherence relation between them as follows: h lik = MLP l ([ h i ; h k ]); h rkj = MLP r ([ h k ; h j ]) (8) P ( l | ( i, k ) , ( k, j )) = softmax (cid:16) ( h lik ) TW lr h rkj +( h lik ) TW l + ( h rkj ) TW r + b (cid:17) (9) l ( i,k ) , ( k,j ) = arg max l LP ( l | ( i, k ) , ( k, j )) (10) where L is the total number of labels ( i.e., coherence relations with nuclearity attached); each of MLP l and MLP r includes a linear transformation with LeakyReLU activation to transform the left and right spans into equal-sized vectors h lik , h rkj IR d ; W lr IR d L d , W l IR d L , W r IR d L are the weights and b is a bias vector.",
"L ( e , d , l ) = L s ( e , d ) + L l ( e , l )",
"where structure L s and label L l losses are cross-entropy losses computed for the splitting and labeling tasks respectively, and e , d and l denote the encoder, decoder and labeling parameters.",
"Having presented the generic framework, we now describe how it can be easily adapted to the two parsing scenarios: ( i ) end-to-end parsing and ( ii ) parsing with EDUs.",
"We also describe the incorporation of beam search for inference.",
"End-to-End Parsing.",
"As mentioned, previous work for end-to-end parsing assumes a separate segmenter that provides EDU-segmented texts to the parser.",
"Our method, however, is an end-to-end framework that produces both the EDUs as well as the parse tree in the same inference process.",
"To guide the search better, we incorporate an inductive bias into our inference based on the finding that most sentences have a well-formed subtree in the document-level tree (Soricut and Marcu, 2003), i.e., discourse structure tends to align with the text structure (sentence boundary in this case); for example, Fisher and Roark (2007); Joty et al. (2013) found that more than 95% of the sentences have a well-formed subtree in the RST discourse treebank.",
"Our goal is to ensure that each sentence corresponds to an internal node in the tree.",
"This can be achieved by a simple adjustment in our inference.",
"When decoding at time step t with the span ( i t , j t ) as input, if the span contains M > 0 sentence boundaries within it, we pick the one that Algorithm 1 Discourse Tree Inference (end-to-end) Input: Document length n ; boundary encoder states: ( h 0 , h 1 ,..., h n ) ; sentence boundary set SB ; label scores: P ( l | ( i,k ) , ( k,j )) , 0 i < k j n,l L , initial decoder state st .",
"has the highest pointing score (Eq. 7) among the M alternatives as the split point k t .",
"If there is no sentence boundary within the input span ( M = 0 ), we find the next split point as usual.",
"In other words, sentence boundaries in a document get the chance to be split before the token boundaries inside a sentence.",
"This constraint is indeed similar to the 1S-1S (1 subtree for 1 sentence) constraint of Joty et al. (2013)'s bottom-up parsing, and is also consistent with the property that EDUs are always within the sentence boundary.",
"Algorithm 1 illustrate the end-to-end inference algorithm.",
"Parsing with EDUs.",
"When segmentation information is provided, we can have a better encoding of the EDUs to construct the tree.",
"Specifically, rather than simply taking the token-boundary representation corresponding to the EDU boundary as the EDU representation, we adopt a hierarchical approach, where we add another Bi-LSTM layer (called Boundary LSTM) that connects EDU boundaries (a figure of this framework is in the Appendix).",
"In other words, the input sequence to this LSTM layer is ( h 0 , . . . , h m ) , where h 0 = h 0 , h m = h n and h j { h 1 , . . . , h n 1 } such that h j is an EDU boundary.",
"For instance, for the example in Fig. 1, the input to the Boundary LSTM layer is ( h 0 , h 4 , h 17 , h 25 , h 33 , h 37 , h 44 ) .",
"This hierarchical representation facilitates better modeling of relations between EDUs and higher order spans, and can capture long-range dependencies better, especially for long documents.",
"Incorporating Beam Search.",
"Previous work (Lin et al., 2019; Zhang et al., 2020) which also uses a seq2seq architecture, computes the pointing scores over the token or span representations only within the input span.",
"For example, for an input span ( i, j ) , the pointing scores are computed considering only ( h i , . . . , h j ) as opposed to ( h 1 , . . . , h n ) in our Eq.",
"7.",
"This makes the scales of the scores uneven across different input spans as the lengths of the spans vary.",
"Thus, such scores cannot be objectively compared across sub-trees globally at the full-tree level.",
"In addition, since efficient global search methods like beam search cannot be applied properly with non-uniform scores, these previous methods had to remain greedy at each decoding step.",
"In contrast, our decoder points to all the encoded token-boundary representations in every step (Eq. 7).",
"This ensures that the pointing scores are evenly scaled, allowing fair comparisons between the scores of all candidate sub-trees.",
"Therefore, our method enables the effective use of beam search through highly probable candidate trees.",
"Algorithm 2 illustrates the beam search inference when EDUs are given.",
"We conduct our experiments on discourse parsing with and without gold segmentation.",
"We use the standard English RST Discourse Treebank or RST-DT (Lynn et al., 2002) for training and evaluation.",
"It consists of 385 annotated Wall Street Journal news articles: 347 for training and 38 for testing.",
"We randomly select 10% of the training set as our development set for hyper-parameter tuning.",
"Following prior work, we adopted the same 18 courser relations defined in (Carlson and Marcu, 2001).",
"For evaluation, we report the standard metrics Span, Nuclearity, Relation and Full F1 scores, computed using the standard Parseval (Morey et al., 2017, 2018) and RST-Parseval (Marcu, 2000) metrics.",
"Settings.",
"Discourse parsing with gold EDUs has been the standard practice in many previous studies.",
"We compare our model with ten different baselines as shown in Table 1.",
"We report most results from Morey et al. (2018); Zhang et al. (2020); Kobayashi et al. (2020), while we reproduce Yu et al. (2018) using their provided source code.",
"For our model setup, we use the encoder-decoder framework with a 3-layer Bi-LSTM encoder and 3-layer unidirectional LSTM decoder.",
"The LSTM hidden size is 400, the word embedding size is 100 for random initialization, while the character embedding size is 50.",
"The hidden dimension in MLP modules and biaffine function for structure prediction is 500.",
"The beam width B is 20.",
"Our model is trained by Adam optimizer (Kingma and Ba, 2015) with a batch size of 10000 tokens.",
"Our learning rate is initialized at 0 .",
"002 and scheduled to decay at an exponential rate of 0 .",
"75 for every 5000 steps.",
"Model selection for testing is performed based on the Full F1 score on the development set.",
"When using pretrained word embeddings, we use the 100D vectors from GloVe (Pennington et al., 2014).",
"For pretrained model, we use the XLNet-base-cased version (Yang et al., 2019).",
"1 The pretrained mod-els/embeddings are kept frozen during training.",
"Results.",
"From the results in Table 1, we see that our model with GloVe (static) embeddings achieves a Full F1 score of 46.8, the highest among all the parsers that do not use pretrained models (or contextual embeddings).",
"This suggests that a BiLSTM-based parser can be competitive with effective modeling.",
"The model also outperforms the one proposed by Zhang et al. (2020), which is closest to ours in terms of modelling, by 3.9%, 4.1%, 2.4% and 2.5% absolute in Span, Nuclearity, Relation 1 Our initial attempt with BERT did not offer significant gain as BERT is not explicitly designed to process long documents and has a limit of maximum 512 tokens.",
"and Full, respectively.",
"More importantly, our system achieves such results without relying on external data or features, in contrast to previous approaches.",
"In addition, by using XLNet-base pretrained model, our system surpasses all existing methods (with or without pretraining) in all four metrics, achieving the state of the art with 2.9%, 4.0%, 2.4% and 2.1% absolute improvements.",
"It also reduces the gap between system performance and human agreement.",
"When evaluated with the RST-Parseval (Marcu, 2000) metric, our model outperforms the baselines by 0.6%, 1.4% and 1.8% in Span, Nuclearity and Relation, respectively.",
"For end-to-end parsing, we compare our method with the model proposed by Zhang et al. (2020).",
"Their parsing model uses the EDU segmentation from Li et al. (2018).",
"Our method, in contrast, predicts the EDUs along with the discourse tree in a unified process (2.3).",
"In terms of model setup, we use a setup identical to the experiments with gold segmentation (3.1).",
"Table 2 reports the performance for document-level end-to-end parsing.",
"Compared to Zhang et al. (2020), our model with GloVe embeddings yields 1.5%, 2.9%, 2.4% and 2.5% absolute gains in Span, Nuclearity, Relation and Full F1 scores, respectively.",
"Furthermore, the model with XLNet Model Span Nuc Rel Full Zhang et al. (2020) 62.3 50.1 40.7 39.6 Our model with GloVe 63.8 53.0 43.1 42.1 with XLNet 68.4 59.1 47.8 46.6 Table 2: End-to-end parsing performance.",
"achieves even better performance and outperforms many models that use gold segmentation (Table 1).",
"EDU Segmentation Results.",
"Our end-to-end parsing method gets an F1 score of 96.30 for the resulting EDUs.",
"Our result rivals existing SoTA segmentation methods 92.20 F1 of Li et al. (2018) and 95.55 F1 of Lin et al. (2019).",
"This shows the efficacy of our unified framework for not only discourse parsing but also segmentation.",
"2 3.3 Ablation Study To further understand the contributions from the different components of our unified parsing framework, we perform an ablation study by removing selected components from a network trained with the best set of parameters.",
"With Gold Segmentation.",
"Table 3 shows two ablations for parsing with gold EDUs.",
"We see that both beam search and boundary LSTM (hierarchi-cal encoding as shown in Fig. 7) are important to the model.",
"The former can find better tree structure by searching a larger searching space.",
"The latter, meanwhile, connects the EDU-boundary representations, which enhances the model's ability to capture long-range dependencies between EDUs.",
"2 We could not compare our segmentation results with the DISRPT 2019 Shared Task (Zeldes et al., 2019) participants.",
"We found few inconsistencies in the settings.",
"First, in their gold sentence dataset, instead of using the gold sentence, they pre-process the text with an automatic tokenizer and sentence segmenter.",
"Second, in the evaluation, under the same settings, they do not exclude the trivial BeginSegment label at the beginning of each sentence which we exclude in evaluating our segmentation result (following the RST standard).",
"End-to-end Parsing.",
"For end-to-end parsing, Table 4 shows that the sentence boundary constraint (2.3) is indeed quite important to guide the model as it decodes long texts.",
"Since sentence segmentation models are quite accurate, they can be employed if ground truth sentence segmentation is not available.",
"We also notice that pretraining (GloVe) leads to improved performance.",
"Error Analysis.",
"We show our best parser's (with gold EDUs) confusion matrix for the 10 most frequent relation labels in Fig. 5.",
"The complete matrix with the 18 relations is shown in Appendix (Fig. 8).",
"The imbalanced relation distribution in RST-DT affects our model's performance to some extent.",
"Also semantic similar relations tend to be confused with each other.",
"Fig. 6 shows an example where our model mistakenly labels Summary as Elaboration.",
"However, one could argue that the relation Elaboration is also valid here because the parenthesized text brings additional information (the equivalent amount of money).",
"We show more error examples in the Appendix (Fig. 9 11), where our parser la-Figure 6: An error example where our system incorrectly labels a Summary as Elaboration.",
"bels a Condition as Background, Temporal as Joint and Explanation as Elaboration.",
"As we can see, all these relations are semantically close and arguably interchangeable.",
"Table 5 compares the parsing speed of our models with a representative non-neural (Feng and Hirst, 2014) and neural model (Yu et al., 2018).",
"We measure speed empirically using the wall time for parsing the test set.",
"We ran the baselines and our models under the same settings (CPU: Intel Xeon W-2133 and GPU: Nvidia GTX 1080 Ti).",
"With gold-segmentation, our model with GloVe embeddings can parse the test set in 19 seconds, which is up to 11 times faster than (Feng and Hirst, 2014), and this is when their features are precomputed.",
"The speed gain can be attributed to ( i ) to the efficient GPU implementation of neural modules to process the decoding steps, and ( ii ) the fact that our model does not need to compute any handcrafted features.",
"With pretrained models, our parser with gold segmentation is about 2.4 times faster than (Yu et al., 2018).",
"Our end-to-end parser that also performs segmentation is faster than the baselines that are provided with the EDUs.",
"Nonetheless, we believe there is still room for speed improvement by choosing a better network, like the Longformer (Beltagy et al., 2020) which has an O (1) parallel time complexity in encoding a text, compared to the O ( n ) complexity of the recurrent encoder.",
"Discourse analysis has been a long-established problem in NLP.",
"Prior to the neural tsunami in NLP, discourse parsing methods commonly employed statistical models with handcrafted features (Soricut and Marcu, 2003; Hernault et al., 2010; Feng and Hirst, 2014; Joty et al., 2015).",
"Even within the neural paradigm, most previous studies still rely on external features to achieve their best performances (Ji and Eisenstein, 2014; Wang et al., 2017; Braud et al., 2016, 2017; Yu et al., 2018).",
"These parsers adopt a bottom-up approach, either transition-based or chart-based parsing.",
"Recently, top-down parsing has attracted more attention due to its ability to maintain an overall view of the input text.",
"Inspired by the Stack-Pointer network (Ma et al., 2018) for dependency parsing, Lin et al. (2019) first propose a seq2seq model for sentence-level parsing.",
"Zhang et al. (2020) extend this to the document level.",
"Kobayashi et al. (2020) adopt a greedy splitting mechanism for discourse parsing inspired by Stern et al. (2017)'s work in constituency parsing.",
"By using pretrained mod-els/embeddings and extra features ( e.g., syntactic, text organizational features), these models achieve competitive results.",
"However, their decoder infers a tree greedily.",
"Our approach differs from previous work in that it can perform end-to-end discourse parsing in a single neural framework without needing segmentation as a prerequisite.",
"Our model can parse a document from scratch without relying on any external features.",
"Moreover, it can apply efficient beam search decoding to search for the best tree.",
"We have presented a novel top-down end-to-end method for discourse parsing based on a seq2seq model.",
"Our model casts discourse parsing as a series of splitting decisions at token boundaries, which can solve discourse parsing and segmentation in a single model.",
"In both end-to-end parsing and parsing with gold segmentation, our parser achieves state-of-the-art, surpassing existing methods by a good margin, without relying on handcrafted features.",
"Our parser is not only more effective but also more efficient than the existing ones.",
"This work leads us to several future directions.",
"Our short-term goal is to improve the model with better architecture and training mechanisms.",
"For example, joint training on discourse and syntactic parsing tasks could be a good future direction since both tasks are related and can be modeled within our unified conditional splitting framework.",
"We also plan to extend our parser to other languages."
] | [
"objective",
"method",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"objective",
"objective",
"objective",
"result",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"objective",
"objective",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"objective"
] |
[
"The prevalent way to estimate the similarity of two documents based on word embeddings is to apply the cosine similarity measure to the two centroids obtained from the embedding vectors associated with the words in each document.",
"Motivated by an industrial application from the domain of youth marketing, where this approach produced only mediocre results, we propose an alternative way of combining the word vectors using matrix norms.",
"The evaluation shows superior results for most of the investigated matrix norms in comparison to both the classical cosine measure and several other document similarity estimates.",
"Estimating semantic document similarity is of utmost importance in a lot of different areas, like plagiarism detection, information retrieval, or text summarization.",
"We will focus here on an NLP application that has been less researched, i.e., the assignment of people to the best matching target group to allow for running precise and customer-oriented marketing campaigns.",
"Until recently, similarity estimates were predominantly based either on deep semantic approaches or on typical information retrieval techniques like Latent Semantic Analysis.",
"In the last couple of years, however, so-called word and sentence embeddings became state-of-the-art.",
"The prevalent approach to document similarity estimation based on word embeddings consists in measuring the similarity between the vector representations of the two documents derived as follows: 1. The word embeddings (often weighted by the tf-idf coefficients of the associated words (Brokos et al., 2016)) are looked up in a hashtable for all the words in the two documents to compare.",
"These embeddings are determined beforehand on a very large corpus typically using either the skip gram or the continuous bag of words variant of the Word2Vec model (Mikolov et al., 2013).",
"2. The centroid over all word embeddings belonging to the same document is calculated to obtain its vector representation.",
"If vector representations of the two documents to compare were successfully established, a similarity estimate can be obtained by applying the cosine measure to the two vectors.",
"Let x 1 , . . . , x m and y 1 , . . . , y n be the word vectors of two documents.",
"The cosine similarity value between the two document centroids C 1 und C 2 is given by: cos( ( 1 m m (cid:88) i =1 x i , 1 n n (cid:88) i =1 y i )) = (cid:80) mi =1 (cid:80) nj =1 (cid:104) x i , y j (cid:105) mn (cid:107) C 1 (cid:107)(cid:107) C 2 (cid:107) (1) Hence, potentially small values of (cid:104) x i , y j (cid:105) can have in aggregate a considerable influence on the total similarity estimate, which makes this estimate vulnerable to noise in the data.",
"We propose an alternative approach that is based on matrix norms and which proved to be more noise-robust by focusing primarily on high word similarities.",
"Finally, we conducted an evaluation where we achieved with our method superior accuracy in target group assignments than several traditional word embedding based methods.",
"The most popular method to come up with word vectors is Word2Vec, which is based on a 3 layer neural network architecture in which the word vectors are obtained as the weights of the hidden layer.",
"Alternatives to Word2Vec are GloVe (Pennington et al., 2014), which is based on aggregated global word co-occurrence statistics and the Explicit Semantic Analysis (or shortly ESA) (Gabrilovic and Markovitch, 2009), in which each word is represented by the column vector in the tf-idf matrix over Wikipedia.",
"The idea of Word2Vec can be transferred to the level of sentences as well.",
"In particular, the so-called Skip Thought Vector model (STV) (Kiros et al., 2015) derives a vector representation of the current sentence by predicting the surrounding sentences.",
"(Song and Roth, 2015) propose an alternative approach to applying the cosine measure to the two word vector centroids for ESA word embeddings.",
"In particular, they establish a bipartite graph consisting of the best matching vector components by solving a linear optimization problem.",
"The similarity estimate for the documents is then given by the global optimum of the objective function.",
"However, this method is only useful for sparse vector representations.",
"In case of dense vectors, (Mijangos et al., 2017) suggested to apply the Frobenius kernel to the embedding matrices, which contain the embedding vectors for all document components (usually either sentences or words) (cf.",
"also (Hong et al., 2015)).",
"However, crucial limitations are that the Frobenius kernel is only applicable if the number of words (sen-tences respectively) in the compared documents coincide and that a word from the first document is only compared with its counterpart from the second document.",
"Thus, an optimal matching has to be established already beforehand.",
"In contrast, the matrix norm approach as presented here applies to arbitrary embedding matrices.",
"Since it conducts a pairwise comparison of all words contained in the two documents, there is also no need for any matching method.",
"Another simlarity estimate that employs the entire embedding matrix is the word movers distance (Kusner et al., 2015), which is a special case of the earth movers distance, a well studied transportation problem.",
"Basically, this approach determines the minimum effort (with respect to embedding vector changes) to transform the words of one text into the words of another text.",
"The word movers distance requires a linear optimization problem to be solved.",
"Linear optimization is usually tackled by the simplex method, which has in the worst case, which rarely occurs however, ex-Name Definition Frob.",
"A drawback of conventional similarity estimates as described above is that slightly related word pairs can have in aggregate a considerable influence on their values, i.e., these estimates are sensitive to noise in the data.",
"In contrast, several of our matrix norm based similarity estimates focus primarily on strongly related word pairs and are therefore less vulnerable to noise.",
"Before going more into detail, we want to review some concepts that are crucial for the remainder of this paper.",
"According to (Belanche and Orozco, 2011), a similarity measure on some set X is an upper bounded, exhaustive and total function s : X X I R with | I | > 1 (therefore I is upper bounded and sup I exists).",
"Additionally, it should fulfill the properties of reflexivity (the supremum is reached if an item is compared to itself) and symmetry.",
"We call such a measure normalized if the supremum equals 1 (Attig and Perner, 2011).",
"Note that an asymmetric similarity measure can easily be converted into a symmetric by taking the geometric or arithmetic mean of the asymmetric measure applied twice to the same arguments in switched order.",
"A norm is a function f : V R over some vector space V that is absolutely homogeneous, positive definite and fulfills the triangle inequality.",
"It is called matrix norm if its domain is a set of matrices and if it is sub-multiplicative, i.e., (cid:107) AB (cid:107) (cid:107) A (cid:107)(cid:107) B (cid:107) .",
"Several popular matrix norms are given in Table 1. Note that the Frobenius norm can also be represented by (cid:107) A (cid:107) F = (cid:112) tr( AA (cid:62) ) .",
"For an arbitrary document t we define the embeddings matrix E ( t ) as follows: E ( t ) ij is the i th component of the normalized embeddings vector belonging to the j -th word of the document t .",
"Let t, u be two arbitrary documents, then the entry ( i, j ) of a product E ( t ) (cid:62) E ( u ) specifies the result of the cosine measure estimating the semantic similarity between word i of document t and word j of document u .",
"The value of a matrix norm (cid:107) E ( t ) (cid:62) E ( u ) (cid:107) is then a measure for the similarity of the two documents.",
"Since the vector components obtained by Word2Vec can be negative, the cosine measure between two word vectors can also assume negative values (rather rarely in practice though).",
"Negative cosine values indicate negatively correlated words and should be handled akin to the uncorrelated case.",
"Because a matrix norm usually treats negative and positive matrix entries alike, we replace all negative values in the matrix by zeros.",
"Finally, since our measure should be restricted to values from zero to one, we have to normalize it.",
"Formally, we define our similarity measure sn ( t, u ) as follows : (cid:107) K ( E ( t ) (cid:62) E ( u )) (cid:107) (cid:112) (cid:107) K ( E ( t ) (cid:62) E ( t )) (cid:107) (cid:107) K ( E ( u ) (cid:62) E ( u )) (cid:107) where E ( t ) is the embeddings matrix belonging to document t , where all embedding column vectors are normalized.",
"K ( M ) is the matrix, where all negative entries are replaced by zero, i.e. K ( M ) ij = max { 0 , M ij } .",
"Proposition 1. If the cosine similarity values between all embedding vectors of words occurring in any of the documents are non-negative, i.e., if K ( E ( t ) (cid:62) E ( u )) = E ( t ) (cid:62) E ( u ) for all document pairs ( t, u ) , then sn is a normalized similarity measure for the 2-norm, the Frobenius norm and the L 1 , 1 -norm.",
"Proof.",
"We give the proof for the 2-norm here and for the other two norms in the appendix.",
"for arbitrary matrices Z , since with this property we have",
"Let M and N be arbitrary matrices such that MN and NM are both defined and quadratic, then (see (Chatelin, 1993))",
"Proof.",
"The following property needs to be veri-fied: (cid:107) A (cid:62) B (cid:107) 2 (cid:112) (cid:107) A (cid:62) A (cid:107) 2 (cid:107) B (cid:62) B (cid:107) 2 1 (5) In the proof, we exploit the fact that for every positive-semidefinite matrix X , the following equation holds ( X 2 ) = ( X ) 2 (6) We observe that for the denominator (cid:107) A (cid:62) A (cid:107) 2 (cid:107) B (cid:62) B (cid:107) 2 = (cid:112) (( A (cid:62) A ) (cid:62) A (cid:62) A ) (cid:112) (( B (cid:62) B ) (cid:62) B (cid:62) B ) = (cid:112) (( A (cid:62) A ) (cid:62) ( A (cid:62) A ) (cid:62) ) (cid:112) (( B (cid:62) B ) (cid:62) ( B (cid:62) B ) (cid:62) ) = (cid:112) ([( A (cid:62) A ) (cid:62) ] 2 ) (cid:112) ([( B (cid:62) B ) (cid:62) ] 2 ) (6) = (cid:112) (( A (cid:62) A ) (cid:62) ) 2 (cid:112) (( B (cid:62) B ) (cid:62) ) 2 = (( A (cid:62) A ) (cid:62) ) (( B (cid:62) B ) (cid:62) ) (4) = (cid:107) A (cid:107) 22 (cid:107) B (cid:107) 22 (7) Putting things together we finally obtain (cid:107) A (cid:62) B (cid:107) 2 (cid:112) (cid:107) A (cid:62) A (cid:107) 2 (cid:107) B (cid:62) B (cid:107) 2 sub-mult.",
"However, proposition 1 is not sufficient in all cases, since negative cosine similarity values can occur in practice.",
"Therefore, we also prove a stronger claim stated in the following proposition.",
"Proposition 2. If the cosine measure values between embedding vectors belonging to words of the same document are all non-negative, then sn is a normalized similarity measure for the Frobenius and the L 1 , 1 -norm.",
"Proof.",
"The proof of symmetry and reflexivity is analogous to proposition 1. So we only prove boundedness of sn .",
"Since the cosine measure for two embedding vectors emb belonging to words of the same document cannot be negative, we have (cid:104) emb ( w i ) , emb ( w k ) (cid:105) 0 for i , k with 1 i k | t | and therefore K ( E ( t ) (cid:62) E ( t )) = E ( t ) (cid:62) E ( t ) .",
"We furthermore have (cid:107) K ( E ( t ) (cid:62) E ( u )) (cid:107) (cid:107) E ( t ) (cid:62) E ( u ) (cid:107) for the Frobenius and L 1 , 1 -norm, since replacing a zero entry with another value can never decrease the value of the norm.",
"Thus, (cid:107) K ( E ( t ) (cid:62) E ( u )) (cid:107) (cid:112) (cid:107) K ( E ( t ) (cid:62) E ( t )) (cid:107) (cid:107) K ( E ( u ) (cid:62) E ( u )) (cid:107) (cid:107) E ( t ) (cid:62) E ( u ) (cid:107) (cid:112) (cid:107) E ( t ) (cid:62) E ( t ) (cid:107) (cid:107) E ( u ) (cid:62) E ( u ) (cid:107) 1 .",
"However, the proposed normalization factor (cid:112) (cid:107) K ( E ( t ) (cid:62) E ( t )) (cid:107) (cid:107) K ( E ( u ) (cid:62) E ( u )) (cid:107) is not eligible for all types of matrix norms, which is an immediate consequence of the following proposition.",
"sn 1 := (cid:107) K ( E ( t 0 ) (cid:62) E ( u 0 )) (cid:107) 1 m m := mean ( (cid:107) K ( E ( t 0 ) (cid:62) E ( t 0 )) (cid:107) 1 , (cid:107) K ( E ( u 0 ) (cid:62) E ( u 0 )) (cid:107) 1 ) (10)",
"Proof.",
"We give a counter-example for the maximum mean, for which we show that sn 1 exceeds the value of 1: E ( t 0 ) = 0 .",
"Since the maximum mean mean max ( a, b ) = max { a, b } is greater or equal to all other means (including the geometric mean), we have that:",
"(cid:107) K ( E ( t 0 ) (cid:62) E ( u 0 )) (cid:107) 1 mean ( (cid:107) K ( E ( t 0 ) (cid:62) E ( t 0 )) (cid:107) 1 , (cid:107) K ( E ( u 0 ) (cid:62) E ( u 0 )) (cid:107) 1 ) (cid:107) K ( E ( t 0 ) (cid:62) E ( u 0 )) (cid:107) 1 max {(cid:107) K ( E ( t 0 ) (cid:62) E ( t 0 )) (cid:107) 1 , (cid:107) K ( E ( u 0 ) (cid:62) E ( u 0 )) (cid:107) 1 } = 1 .",
"0284 > 1 (11) for arbitrary type of means mean .",
"Note that the matrices used in the counterexample can be extended to any number of embedding dimensions by adding additional zeros.",
"A further issue is, whether the similarity measure is invariant to word permutations.",
"Actually, this is the case for our matrix norm similarity estimates, which is stated in the following proposition.",
"Proposition 4. The obtained similarity estimate for all of the considered matrix norms is indepen-dent of the word sequence of the input texts.",
"This property is quite beneficial in our scenario since one of the texts to compare constitutes of an unordered key word list (see more details in the next section).",
"Proof.",
"We focus in this proof on the 2-norm, for which this property is not directly obvious like for the other regarded norms.",
"For simplicity, we first concentrate on the special case that all cosine values between word embeddings are non-negative.",
"This proof can easily be extended to the general case, too.",
"In particular, we show that the similarity estimate does not change, if two columns of the first matrix are exchanged, which can be expressed by postmultiplying this matrix with a permutation matrix P .",
"By employing symmetry and induction this proof can be applied to arbitrary sequence permutations and to the second argument matrix as well.",
"With this, the similarity estimate is given as: sn 2 ( t, u ) = (cid:107) (( AP ) (cid:62) B ) (cid:107) 2 = (cid:112) ((( AP ) (cid:62) B ) (cid:62) (( AP ) (cid:62) B )) = (cid:112) ( B (cid:62) APP (cid:62) A (cid:62) B ) ( P is an orthogonal matrix) = (cid:112) ( B (cid:62) AIA (cid:62) B ) = (cid:112) ( A (cid:62) B ) (cid:62) ( A (cid:62) B ) = (cid:107) A (cid:62) B (cid:107) 2 (12) By exploiting that K ( MP ) = K ( M ) P for ar-bitary matrices M , this proof can be generalized to negative cosine measure values as well.",
"The question remains, how the similarity measure value induced by matrix norms performs in comparison to the usual centroid method.",
"Let us first focus on L 11 and the Frobenius norm.",
"Actually, both are special cases of a norm that raises the absolute values of the matrix components to a certain power e .",
"If this exponent e becomes large, then: sn L e, 1 ( t, u ) = (cid:107) E ( u ) (cid:62) E ( t ) (cid:107) L e, 1 (cid:113) (cid:107) E ( t ) (cid:62) E ( t ) (cid:107) L e, 1 (cid:107) E ( u ) (cid:62) E ( u ) (cid:107) L e, 1 (# p ) 1 /e (cid:118)(cid:117)(cid:117)(cid:117)(cid:117)(cid:116)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) 1 0 . . . ... 0 . . . 1 (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L e, 1 (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) 1 0 . . . ... 0 . . . 1 (cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13)(cid:13) L e, 1 (mainly the diagonal elements of the matrices in the denominator assume 1) (cid:18) # p nm (cid:19) 1 /e (13) where # p denotes the number of perfect matches (similarity value of 1.0) between words of the two documents and n (m) is the number of words in text t (u).",
"Thus, with an increasing exponent, sn L e, 1 tends to focus on very good matches and disregards the others.",
"This property is quite beneficial in our scenario, where often only one or two words of the contest answers (cf. next section) indicate the right target group.",
"General statements about the 2-norm based similarity measure are difficult, but we can draw some conclusions, if we restrict to the case, where A (cid:62) B is a square diagonal matrix.",
"Hereby, one word of the first text is very similar to exactly one word of the second text and very dissimilar to all remaining words.",
"The similarity estimate is then given by the largest eigenvalue (also called the spectral radius) of A (cid:62) B , which equals the largest cosine measure value.",
"Thus, the 2-norm based similarity estimate is able to filter out noise (low word similarity values) akin to the Frobenius norm.",
"Let us now take a look at the similarity measure sn 1 , which is induced by the 1-norm.",
"sn 1 assumes high values, if there is one word of the second document that matches very well with all words of the first document.",
"All other less matching words of the second document do not contribute to the assumed similarity estimate at all.",
"Market segmentation is one of the key tasks of a marketer.",
"Usually, it is accomplished by clustering over demographic variables, geographic variables, psychographic variables and behaviors (Lynn, 2011).",
"In this paper, we will describe an alternative approach based on unsupervised natural language processing.",
"In particular, our business partner operates a commercial youth platform for the Swiss market, where registered members get access to third-party offers such as discounts and special events like concerts or castings.",
"Actually, several hundred online contests per year are launched over this platform sponsored by other firms, an increasing number of them require the members to write short free-text snippets, e.g. to elaborate on a perfect holiday at a destination of their choice in case of a contest sponsored by a travel agency.",
"Based on the results of a broad survey, the platform provider's marketers assume five different target groups (called milieus ) being present among the platform members: progressive postmodern youth (people primarily interested in culture and arts), young performers (people striving for a high salary with a strong affinity to luxury goods), freestyle action sportsmen , hedonists (rather poorly educated people who enjoy partying and disco music) and conservative youth (tra-ditional people with a strong concern for security).",
"A sixth milieu called special groups comprises all those who cannot be assigned to one of the upper five milieus.",
"For each milieu (with the exception of special groups ) a keyword list was manually created to describe its main characteristics.",
"For triggering marketing campaigns, an algorithm shall be developed that automatically assigns each contest answer to the most likely target group: we propose the youth milieu as best match for a contest answer, for which the estimated semantic similarity between the associated keyword list and user answer is maximal.",
"In case the highest similarity estimate falls below the 10 percent quantile for the distribution of highest estimates, the special groups milieu is selected.",
"Since the keyword list typically consists of nouns (in the German language capitalized) and the user contest answers might contain a lot of adjectives and verbs as well, which do not match very well to nouns in the Word2Vec vector representation, we actually conduct two comparisons for our Word2Vec based measures, one with the unchanged user contest answers and one by capitalizing every word beforehand.",
"The final similarity estimate is then given as the maximum value of both individual estimates.",
"For evaluation, we selected three online contests (language: German), where people elaborated on their favorite travel destination for an example, speculated about potential experiences with a pair of fancy sneakers (contest 2) and explained why they emotionally prefer a certain product out of",
"four available candidates.",
"We experimented with different keyword list sizes but obtained the best results with rather few and therefore precise keywords.",
"In particular, we used the following number of keywords for the individual milieus: Action Sportsman: 3 Young Performer: 4 Hedonist: 7 Conservative Youth: 4 Progressive Postmodern Youth: 6 In order to provide a gold standard, three professional marketers from different youth marketing companies annotated independently the best matching youth milieus for every contest answer.",
"We determined for each annotator individually his/her average inter-annotator agreement with the others (Cohen's kappa).",
"The minimum and maximum of these average agreement values are given in Table 4. Since for contest 2 and contest 3, some of the annotators annotated only the first 50 entries (last 50 entries respectively), we specified min/max average kappa values for both parts.",
"We further compared the youth milieus proposed by our unsupervised matching algorithm with the majority votes over the human experts' answers (see Table 3) 1 .",
"Moreover, we computed its average inter-annotator agreement with the human annotators (see again Table 4), quasi treating the predictions like additional annotations.",
"The Word2Vec word embeddings were trained on the German Wikipedia (dump originating from 20 February 2017) merged with a Frankfurter Rundschau newspaper Corpus and 34 249 articles of the news journal 20 minutes 2 , where the latter is targeted to the Swiss market and freely available at various Swiss train stations (see Table 2 for a comparison of corpus sizes).",
"By employing articles from 20 minutes , we want to ensure the reliability of word vectors for certain Switzerland specific expressions like Velo or Glace , which are underrepresented in the German Wikipedia and the Frankfurter Rundschau corpus.",
"ESA is usually trained on Wikipedia, since the authors of the original ESA paper suggest that the articles of the training corpus should represent disjoint concepts, which is only guaranteed for encyclopedias.",
"However, Stein and Anerka (Gottron et al., 2011) challenged this hypothesis and demonstrated that promising results can be obtained by applying ESA on other types of corpora like the popular Reuters newspaper corpus as well.",
"Unfortunately, the implementation we use (Wikiprep-ESA 3 ) expects its training data to be a Wikipedia Dump.",
"Furthermore, Wikiprep-ESA only indexes words that are connected by hyperlinks, which are usually lacking in ordinary newspaper articles.",
"So we could train Wikiprep-ESA on Wikipedia only but additionally have developed a version of ESA that can be applied on arbitrary corpora (in the following referred to as ESA2) and which was trained on the full corpus (Wikipedia+Frankfurter Rund-schau+20 minutes).",
"The STVs were also trained on the same corpus as our matrix norms based estimates and Word2Vec embedding centroids.",
"The actual document similarity estimation is accomplished by the usual centroid approach (we did not evaluate matrix norms here).",
"An issue we were faced with is that STVs are not bag of word models but actually take the sequence of the words into account and therefore the obtained similar-1 Note that the geometric mean of the 1and -norm as specified in Table 3 is not a matrix norm itself, since it lacks submultipicativity.",
"ity estimate between milieu keyword list and contest answer would be dependent on the keyword ordering.",
"However, this order could have arbitrarily been chosen by the marketers and might be completely random.",
"A possible solution is to compare the contest answers with all possible permutation of keywords and determine the maximum value over all those comparisons.",
"However, such an approach would be infeasible already for medium keyword list sizes.",
"Therefore, we use a beam search approach instead, which extends the keyword list iteratively and keeps only the n-best performing permutations.",
"Finally, to verify the general applicability of our approach, we conducted a second experiment, where a novel from Edgar Allen Poe (The purloined letter) was independently translated by two translators into German.",
"We aim to match a sentence from the first translation to the associated sentence of the second by looking for the assignment with the highest semantic relatedness disregarding the sentence order.",
"The obtained accuracy values based on the first 200 sentences of both translations are given in Table 5. To guarantee an 1:1 sentence mapping, periods were partly replaced by semicolons.",
"The evaluation showed that the inter-annotator agreement values vary strongly for contest 2 part 2 (minimum average annotator agreement according to Cohen's kappa of 0.03 while the maximum is 0.149, see Table 4).",
"On this contest part, our matrix norm-based matching (2-norm and Frobenius-norm) obtains a considerably higher average agreement than one of the annotators.",
"Regarding baseline systems, the most relevant comparison is naturally the one with Word2Vec cen-0 0.2 0.4 0.6 0 0.2 0.4 0.6",
"troids, since it employs the same type of data.",
"Hereby we reached higher accuracy values for the best performing matrix norms on two of the three contests including the largest contest 1. Note that the elimination of negative values from the embedding matrix product proved to be important.",
"If we omit this step, the obtained accuracy of sn f for instance will drop by around 0.023 determined over all three contests (column: all ).",
"It is quite striking that, although sn 1 lacks two properties of a normalized similarity measure (boundedness by 1 and symmetry), it reaches quite good results on contest 1. As you can see in Figure 1, which shows the distribution of sn 1 in contest 1, the value of 1 is indeed exceeded several times (the maximum value is 1.5), but this occurs rather rarely in our experiment.",
"Actually, 99% of its values fall into the interval [0,1].",
"Thus, the non-boundedness is much less a problem in practice than the theoretical results indicate.",
"Finally, we determined the scatter plots (see Figure 2) showing cosine of Word2Vec embeddings (W2VC) vs several matrix norm based similarity estimates.",
"These scatter plots exhibits that the score distributions of sn f and sn 2 are quite similar and their values often exceed the cosine measure value due to the fact that a few very strong word matches can already result in a high similarity estimate.",
"The scatter plot for sn L 11 reveals that this measure is much closer to W2VC than the other two matrix norm based similarity estimates.",
"Note that a downside of our approach in relation to the usual Word2Vec centroids method is the increased runtime, since it requires the pairwise comparison of all words contained in the input documents.",
"In our scenario with rather short text snippets and keyword lists, this was not much of an issue.",
"However, for large documents, such a comprehensive comparison could become soon infeasible.",
"One possible solution for this performance issue is to apply our proposed estimates to sentence embeddings instead of word embeddings, which on the one hand would reduce the dimensionality of the embedding matrices and on the other hand would take word order into account.",
"We proposed a novel similarity measure to compare word embeddings from different documents, which makes use of matrix norms.",
"This measure was evaluated on the task to assign users to the best matching marketing target groups.",
"We obtained superior results compared to the usual centroid / cosine measure similarity estimation for most of the investigated matrix norm especially for the largest contest 1. Furthermore, we proved elementary properties for our proposed similarity measure regarding its well-definedness and its performance in comparison to the usual centroid-based approach.",
"Hereby we thank the Jaywalker GmbH as well as the Jaywalker Digital AG for their support regarding this publication and especially for annotating the contest data with the best-fitting youth milieus."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"other",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"other"
] |
[
"We propose Differentiable Window, a new neural module and general purpose component for dynamic window selection.",
"While universally applicable, we demonstrate a compelling use case of utilizing Differentiable Window to improve standard attention modules by enabling more focused attentions over the input regions.",
"We propose two variants of Differentiable Window, and integrate them within the Transformer architecture in two novel ways.",
"We evaluate our proposed approach on a myriad of NLP tasks, including machine translation, sentiment analysis, subject-verb agreement and language modeling.",
"Our experimental results demonstrate consistent and sizable improvements across all tasks.",
"Computing relative importance across a series of inputs can be regarded as one of the important advances in modern deep learning research.",
"This paradigm, commonly known as attention (Bah-danau et al., 2015), has demonstrated immense success across a wide spectrum of applications.",
"To this end, learning to compute contextual representations (Vaswani et al., 2017), to point to the relevant part in the input (Vinyals et al., 2015), or to select windows or spans (Wang and Jiang, 2017) from sequences forms the crux of many modern deep neural architectures.",
"Despite aggressive advances in developing neural modules for computing relative relevance (Lu-ong et al., 2015; Chiu and Raffel, 2018), there has been no general purpose solution for learning differentiable attention windows.",
"While span selection-based pointer network models typically predict a start boundary and an end boundary (Wang and Jiang, 2017; Seo et al., 2017), these soft predictions generally reside at the last layer of the net*Equal contributions work and are softly optimized.",
"To the best of our knowledge, there exists no general purpose component for learning differentiable windows within networks.",
"Although the practical advantages of learning differentiable windows are plenty, this paper focuses on improving attentions with differentiable windows.",
"The key idea is to enable more focused attention, leveraging dynamic window selection for limiting (and guiding) the search space for the standard attention modules to work within.",
"This can also be interpreted as performing a form of dynamic local attention.",
"We make several key technical contributions.",
"First, we formulate the dynamic window selection problem as a problem of learning a discrete mask (i.e., binary values representing the window).",
"By learning and composing left and right boundaries, we show that we are able to parameterize the (discrete) masking method.",
"We then propose soft adaptations of the above mentioned, namely trainable soft masking and segment-based soft masking , which are differentiable approximations that can not only be easily optimized in an end-to-end fashion, but also inherit the desirable properties of discrete masking.",
"While these modules are task and model agnostic, we imbue the state-of-the-art Transformer (Vaswani et al., 2017) model with our differentiable window-based attention.",
"To this end, we propose two further variants, i.e., multiplicative window attention and additive window attention for improving the Transformer model.",
"Within the context of sequence transduction and self-attention based encoding, learning dynamic attention windows are beneficial because they can potentially eliminate noisy aggregation and alignment from large input sequences.",
"On the other hand, it is good to note that hard attention (Xu et al., 2015b), which replaces the weight average of soft attention with a stochastic sampling model, tries to achieve similar ends, albeit restricted to token-level selection.",
"Hence, our proposed differentiable windows are more flexible and expressive compared to hard attentions.",
"We evaluate our Transformer model with differentiable window-based attention on a potpourri of NLP tasks, namely machine translation, sentiment analysis, language modeling, and subject-verb agreement .",
"Extensive experimental results on these tasks demonstrate the effectiveness of our proposed method.",
"Notably, on the English-German and English-French WMT'14 translation tasks, our method accomplishes improvements of 0.63 and 0.85 BLEU, respectively.",
"On the Stanford Sentiment Treebank and IMDB sentiment analysis tasks, our approach achieves 2.4% and 3.37% improvements in accuracy, respectively.",
"We further report improvements of 0.92% in accuracy and 2.13 points in perplexity on the subject-verb agreement and language modeling tasks, respectively.",
"We make our code publicly available at https://ntunlpsg.github.io/project/ dynamic-attention/ .",
"The attention mechanism enables dynamic selection of relevant contextual representations with respect to a query representation.",
"It has become a key module in most deep learning models for language and image processing tasks, especially in encoder-decoder models (Bahdanau et al., 2015; Luong et al., 2015; Xu et al., 2015a).",
"The Transformer network (Vaswani et al., 2017) models the encoding and decoding processes using stacked self-attentions and cross-attention (encoder-decoder attentions).",
"Each attention layer uses a scaled multiplicative formulation defined as: score ( Q , K ) = ( QWQ )( KWK ) T d (1) att ( Q , K , V ) = S ( score ( Q , K ))( V WV ) (2) where S ( A ) denotes the softmax operation over each row of matrix A , Q IR n q d is the matrix containing the n q query vectors, and K , V IR n d are the matrices containing the n key and value vectors respectively, with d being the number of vector dimensions; WQ , WK , WV IR d d are the associated weights to perform linear transformations.",
"To encode a source sequence, the encoder applies self-attention , where Q , K and V contain the same vectors coming from the output of the previous layer.",
"1 In the decoder, each layer first applies masked self-attention over previous-layer states.",
"The resulting vectors are then used as queries to compute cross-attentions over the encoder states.",
"For cross-attention, Q comprises the decoder self-attention states while K and V contain the encoder states.",
"The attention mechanism adopted in the Transformer is considered global since the attention context spans the entire sequence.",
"In theory, given enough training data, global attention should be able to model dependencies between the query and the key vectors well.",
"However, in practice we have access to only a limited amount of training data.",
"Several recent studies suggest that incorporating more focused attention over important local regions in the input sequence as an explicit inductive bias could be more beneficial.",
"In particular, Shaw et al. (2018) show that adding relative positional biases to the attention scores (Eq. 1) increases BLEU scores in machine translation.",
"Specifically, for each query q i Q at position i and key k j K at position j , a trainable vector a i,j = w max ( ,min ( j i, )) is added to the key vector before the query-key dot product is performed.",
"The window size is chosen via tuning.",
"Sperber et al. (2018) also consider local information by restricting self-attention to neighboring representations to improve long-sequence acoustic modeling.",
"Although shown to be effective, their methods only apply to self-attention and not to cross-attention where the query vectors come from a different sequence.",
"That said, Luong et al. (2015) are the first to propose a Gaussian-based local attention for cross-attention .",
"At each decoding step t , their model approximates the source-side pivot position p t as a function of the decoding state and the source sequence length.",
"Then, local attention is achieved by multiplying the attention score with a confidence term derived from a N ( p t , 2 ) distribution.",
"The aligned pivot p t and the variance 2 (a hyper-parameter) respectively represent the center and the size of the local window.",
"1 Initially, Q , K , and V contain the token embeddings.",
"Meanwhile, Yang et al. (2018) improve the method of Luong et al. (2015) by assigning a soft window weight (a Gaussian bias) to obtain a flexible window span.",
"Despite effective, the aligned pivot position in the source is determined only by the decoder state, while the encoder states are disregarded these should arguably give more relevant information regarding the attention spans over the source sequence.",
"Besides, the confidence for local attention span may not strictly follow a normal distribution, but rather vary dynamically depending on the relationship between the query and the key.",
"Furthermore, the approach of Luong et al. (2015) is only applicable to cross-attention while the one of Yang et al. (2018) works better only for encoder self-attention as shown in their experiments.",
"Our proposed differentiable window approach to local attention addresses the above limitations of previous methods.",
"Specifically, our methods are dynamic and applicable to encoder and decoder self-attentions as well as cross-attention, without any functional constraints.",
"They incorporate encoder states into the local window derivation.",
"They are also invariant to sequence length, which removes the dependence on global features from the local context extraction process.",
"Our proposed attention method works in two steps: ( i ) derive the attention span for each query vector to attend over, and ( ii ) compute the respective attention vector using the span.",
"In this section, we present our approaches to step ( i ) by proposing trainable soft masking and segment-based soft masking .",
"In the next section, we present our methods to compute the attention vectors.",
"To give the necessary background to understand what can be expected from our method, we first present the discrete masking case.",
"In this context, we seek to dynamically derive a boolean mask vector for each query that will indicate the window in the key-sequence over which the query should attend.",
"In other words, attentions are only activated on the consecutive positions where the mask vector element is 1 , and the positions with 0 are canceled out.",
"Let the query vector and the key-sequence be q IR d and K = ( k 1 , k 2 , . . . , k n ) , respectively.",
"Formally, we define the local attention mask vector m q { 0 , 1 } n for the query q as Tl q Tr q f l q = Tl q L n g r q = Tr q L Tn m q = f l q (cid:12) g r q Figure 1: Example of , f , and g vectors and how the mask vector m q can be derived for l q = 3 and r q = 8 .",
"where l q and r q denote the left and right positional indices that form a discrete window [ l q , r q ] over which the query attends.",
"As such, in the standard global attention, l q = 1 and r q = n for all the query vectors, and in decoder self-attention, l q = 1 and r q = t for the query vector at decoding step t .",
"To facilitate the construction of m q , we first define vectors k , f k , g k and matrix L n with entries as: ik = (cid:40) 1 , if i = k 0 , otherwise ; f ik = (cid:40) 1 , if i k 0 , otherwise g ik = (cid:40) 1 , if i k 0 , otherwise ; L i,jn = (cid:40) 1 , if i j 0 , otherwise (4) where k { 0 , 1 } n denotes the one-hot representation for a boundary position k (from the left or right of a sequence), and f k , g k { 0 , 1 } n are the rightward' mask vector and leftward' mask vector, respectively; L n { 0 , 1 } n n denotes a unit-value (1) upper-triangular matrix with i and j being the row and column indices respectively.",
"Figure 1 visualizes how these entities appear.",
"Specifically, f k has entry values of 1 's for position k and its right positions, while g k has entry values of 1 's for position k and its left positions.",
"As such, f k and g k can be derived from k and L n as follows.",
"Note that f k can be interpreted as the cumulative sum across k , while g k as the inverse cumulative sum across k .",
"Given the above definitions, the mask vector m q for a query q to attend over the window [ l q , r q ] in the key sequence such that 1 l q r q n can be achieved by: m q = f l q (cid:12) g r q = ( Tl q L n ) (cid:12) ( Tr q L Tn ) (6) where (cid:12) denotes element-wise multiplication.",
"As shown in Figure 1, m q represents the intersection between f l q and g r q , and forms a masking span for the attention.",
"The above masking method is non-differentiable as is discrete, which makes it unsuitable in an end-to-end neural architecture.",
"In our trainable soft masking method, we approximate the discrete one-hot vector with a pointing mechanism (Vinyals et al., 2015).",
"2 Specifically, given the query q and the key-sequence K as before, we define confidence vectors l q , r q IR n as follows.",
"(7) (8)",
"where S is the softmax function as defined before, and WQL , WKL , WQR , WKR IR d d are trainable parameters.",
"Eq.",
"7-8 approximate the left and right boundary positions of the mask vector for the query q .",
"However, contrary to the discrete case, they do not enforce absolute cancellation or activation of attention weights on any position in the key-sequence.",
"Instead, they assign a confidence score to each position.",
"This allows the model to gradually correct itself from invalid assignments.",
"Moreover, the softmax operations enable differentiability while maintaining the gradient flow in an end-to-end neural architecture.",
"Note however that the left and right boundary concepts have now become ambiguous since the positions l q = arg max( l q ) and r q = arg max( r q ) are not guaranteed to conform to the constraint l q r q .",
"To understand its implication, lets first consider the discrete case in Eq.",
"6; the element-wise multiplication between f l q and g r q results in a zero vector for m q if l q > r q , canceling out the attention scores entirely.",
"Although not absolute zeros , 2 However, unlike the standard pointer network, in our case there is no direct supervision for learning the pointing function.",
"in the continuous case, m q would potentially contain significantly small values, which renders the attention implausible.",
"To address this, we compute the soft mask vector m q as follows.",
"This formulation has two additive terms; the former constructs the mask vector when l q r q , whereas the latter is activated when l q > r q .",
"This ensures a non-zero result regardless of l q and r q values.",
"It can be shown that the values in m q represent the expected value of the discrete flags in m q , i.e., m q = E ( m q ) ; see Appendix for a proof.",
"We concatenate the mask vectors horizontally for all the query vectors in Q IR m d to get the mask matrix M IR m n .",
"Since the pointing mechanism is invariant to sequence length, the computation of the mask vectors enjoys the same advantages, enabling our models to efficiently perform attentions on any arbitrarily long sequences.",
"In addition, the method is applicable to all attention scenarios from decoder to encoder cross-attention, encoder self-attention, and decoder self-attention.",
"The soft masking introduced above modulates the attention weight on each token separately which may result in unsmooth attention weights on neighbouring tokens.",
"However, words in a sentence are related and they often appear in chunks or phrases, contributing to a shared meaning.",
"Thus, it may be beneficial to assign identical mask values to the tokens within a segment so that they are equally treated in the window selection method.",
"In this section, we propose a novel extension to our soft masking method that enables the mask vector to share the same masking values for the tokens within a segment in a key-sequence.",
"The main idea is to divide the key-sequence K = ( k 1 , k 2 , . . . , k n ) into (cid:100) n/b (cid:101) consecutive segments and to assign the same masking value to the tokens in a segment.",
"The segment size b is considered a hyper-parameter.",
"We compute the segment-based mask vector m (cid:48) q similarly as in Eq.",
"9, but with L n replaced by J n IR n n defined as follows.",
"Eq.",
"10 11 ensure that all the items in a segment share the same masking value, which is the cumulative sum of the confidence scores in l q and r q .",
"For instance, suppose l q = ( a 1 , a 2 , a 3 , . . . , a n ) and segment size b = 2 , then the term Tl q J n evaluates to ( (cid:80) 2 i =1 a i , (cid:80) 2 i =1 a i , (cid:80) 4 i =1 a i , . . . ) , and Tl q J Tn evaluates to ( (cid:80) ni =1 a i , (cid:80) ni =1 a i , (cid:80) ni =3 a i , . . . ) .",
"Similarly, Tr q J Tn and Tr q J n will have segment-level effects on the cumulative sums.",
"Figure 2 visualizes the method with an example for b = 2 .",
"One advantage of this approach is that it allows us to control the masking behavior (by varying b ) without increasing the number of parameters compared to the token-based masking.",
"We also show its effectiveness in our experiments.",
"Having presented our method to compute the mask vector that defines the attention spans, we now present our methods to incorporate the mask vectors into the attention layers.",
"In this approach, the attention weights (Eq. 2) are (element-wise) multiplied by the mask matrix M to confine their attention scope defined by the mask.",
"Formally, the attention scores and outputs are defined as follows.",
"In this approach, the standard global attention weights are suppressed and partially overshadowed by the attention window imposed by M .",
"Thus, it can be interpreted as a local attention method similar to Luong et al. (2015).",
"However, instead of using a static Gaussian bias, we use a dynamic mask to modulate the attention weights.",
"Having a local attention window could be beneficial, but it does not rule out the necessity of global attention, which has been shown effective in many applications (Vaswani et al., 2017; Devlin et al., 2019).",
"Thus, we also propose an additive window attention , which implements a combination of global attention and local attention.",
"The attention output in this method is formally defined as s glb = ( QWQ glb )( QWK glb ) T (14) s loc = ( QWQ loc )( QWK loc ) T (cid:12) M (15) score AW = s glb + s loc d (16) att AW = S ( score AW )( V WV ) (17) where WQ glb , WK glb , WQ loc , and WK loc IR d d are the weight matrices for global and local attentions.",
"Compared to the multiplicative window attention where the mask re-evaluates the global attention weights, additive window attention applies the mask vector to the local attention scores ( s loc ), which is then added to the global attention scores ( s glb ) before passing it through the softmax function.",
"In this way, the mask-defined local window does not suppress the global context but rather complements it with a local context.",
"Moreover, the resulting attention weights add up to one , which avoids attention weights diminishment that could occur in the multiplicative window attention.",
"Additive merger of global and local window components may also facilitate more stable gradient flows.",
"We now describe how the proposed dynamic window attention methods can be integrated into the Transformer.",
"Encoder, Decoder and Cross Attentions.",
"Our proposed methods can be readily applied to the any of the attention layers in the Transformer framework.",
"We could also selectively apply our methods to different layers in the encoder and decoder.",
"In our initial experiments on WMT'14 English-German development set, we observed that the following settings provide more promising performance gains.",
"First, encoder self-attention layers benefit most from additive window attention , while decoder self-attention layers prefer multiplicative attention .",
"This shows that the global attention component is more useful when the key sequence is provided entirely in the encoder, while less useful when only the fragmented key sequence (past keys) is visible in the decoder.",
"Second, the above argument is further reinforced as we found that cross-attention layers also prefer additive window attention , where the entire source sequence is available.",
"Third, cross-attention works better with segment-based masking , which provides smoothness and facilitates phrase (n-gram) based translations.",
"Lower-layer Local Attentions.",
"It has been shown that deep neural models learn simple word features and local syntax in the lower layers, while higher layers learn more complex context-dependent aspects of word semantics.",
"Belinkov et al. (2017) show this on NMT models, while Peters et al. (2018) and Jawahar et al. (2019) show this on representation learning with ELMo and BERT respectively.",
"In other words, local contextual information can still be derived in higher layers with the standard global attention.",
"As such, we propose to apply our dynamic window attention methods only to the first 3 layers of the Transformer network, leaving the top 3 layers intact.",
"Our diverse experiments in the following section support this setup as it offers substantial improvements, whereas using local attention in higher layers does not show gains, but rather increases model parameters.",
"In this section, we present the training settings, experimental results and analysis of our models in comparison with the baselines on machine translation (MT), sentiment analysis, subject verb agreement and language modeling (LM) tasks.",
"We trained our models on the standard WMT'16 English-German (En-De) and WMT'14 English-French (En-Fr) datasets containing about 4.5 and 36 million sentence pairs, respectively.",
"For validation (development) purposes, we used new-stest2013 for En-De and a random split from the training set for En-Fr.",
"All translation tasks were evaluated against their respective newstest2014 test sets, in case-sensitive tokenized BLEU.",
"We used byte-pair encoding (Sennrich et al., 2016) with shared source-target vocabularies of 32,768 and 40,000 sub-words for En-De and En-Fr translation tasks, respectively.",
"We compare our models with three strong baselines: ( i ) Transformer Base (Vaswani et al., 2017), ( ii ) Transformer Base with Relative Position (Shaw et al., 2018), and ( ii ) Transformer Base with Localness Modeling (Yang et al., 2018).",
"To ensure a fair comparison, we trained our models and the baselines with the following training setup.",
"Training Setup.",
"We followed model specifica-tions in (Vaswani et al., 2017) and optimization settings in (Ott et al., 2018), with some minor mod-ifications.",
"Specifically, we used word embeddings of dimension 512, feedforward layers with inner dimension 2048, and multi-headed attentions with 8 heads.",
"We trained our models on a single physical GPU but replicated the 8-GPU setup following the gradient aggregation method proposed by Ott et al. (2018).",
"We trained the models for 200,000 updates for En-De and 150,000 updates for En-Fr translation tasks.",
"Finally, we averaged the last 5 checkpoints to obtain the final models for evaluation.",
"The segment size b in the segment-based masking method was set to 5.",
"3 Translation Results.",
"We report our translation results in Table 1; Enc(AW) indicates the use of additive window (AW) attention in the encoder, Dec(MW) indicates the use of multiplicative window (MW) attention in the decoder, and Cr(AW,Seg) indicates the use of additive window attention with segment-based masking for cross-attention.",
"The attention module that is not specified in our naming convention uses the default token-based global attention in the Transformer.",
"For example, Enc(AW)-Dec(MW) refers to the model that uses AW attention in the encoder, MW attention in the decoder and the default global attention for cross attention.",
"We notice that despite a minor increase in the number of parameters, applying our attentions in the encoder and decoder offers about 0.7 and 1.0 BLEU improvements in En-De and En-Fr translation tasks respectively, compared to the 3 We did not tune b ; tuning b might improve the results further.",
"Transformer base (Vaswani et al., 2017).",
"Our model with the segment-based additive method for cross attention achieves a similar performance.",
"We observe further improvements as we apply our attentions in all the attention modules of the Transformer.",
"Specifically, our model Enc(AW)-Cr(AW,Seg)-Dec(MW) achieves 28.25 and 40.32 BLEU in En-De and En-Fr translation tasks, outperforming Transformer base with localness (Yang et al., 2018) by 0.63 and 0.85 BLEU, respectively.",
"To verify our modeling decisions, we performed an ablation study in the WMT'14 En-De translation task.",
"In particular, we evaluated ( i ) the impact of applying our differentiable window attentions in all layers vs. only in certain lower layers of the Transformer network, ( ii ) which window attention methods (additive or multiplicative) are suitable particularly for the encoder/decoder self-attention and cross-attention, and ( iii ) the impact of segment-based masking in different attention modules.",
"( iv ) training efficiency and performance of our best model with the similar models.",
"Plus, to further interpret our window-based attention, we also provide the local window visualization.",
"Full vs. Partial.",
"Table 2 shows BLEU scores for the Transformer models that employ our window-Model Token-based Segment-based Cr(AW) 27.97 28.13 Enc(AW)-Dec(MW) 28.11 27.91 Table 3: BLEU scores for tokenand segment-based masking in cross attention and encoder self-attention.",
"based attentions in all 6 layers ( Full ) vs. only in the first 3 layers ( Partial ), as well as the methods used in different attention modules (encoder/decoder self-attention, cross-attention).",
"We can see that almost all the models with window-based methods in the first 3 layers outperform those that use them in all 6 layers.",
"This gives the setup significant advantages as it performs not only better in BLEU but also requires less parameters.",
"The results also show that multiplicative window (MW) attention is preferred in decoder self-attention, while additive window (AW) is more suitable for encoder self-attention and for cross-attention.",
"This suggests that the global context, which is maintained in AW, is more useful when it is entirely available like in encoder self-attention and cross attention.",
"In contrast, incomplete and partially-generated context in decoder self-attention may induce more noise than information, where MW attention renders better performance than AW.",
"Tokenvs. Segment-based.",
"Table 3 compares the results for using token-based vs. segment-based masking methods in different attention modules of the network.",
"Note that it is preferred for decoder self-attention to adopt token-based masking since the decoder cannot point to unfinished segments in autoregressive generation, if it had used segment-based masking.",
"We see that segment-based additive window masking outdoes its token-based counterpart (28.13 vs. 27.97 BLEU) for cross-attention.",
"Meanwhile, for encoder self-attention, token-based masking performs better than segment-based masking by 0.2 BLEU.",
"This suggests that segments (or phrases) represent better translation units than tokens, justifying its performance superiority in cross-lingual attention but not in monolingual (self-attention) encoding.",
"in table 4, our training efficiency is competitive to the baselines.",
"That is, the training speed for our model is 1 .",
"04",
"steps/sec which is similar to Yang et al. (2018).",
"Besides, our model outperforms the Transformer with 8 layers, which has more parameters.",
"This suggests that our performance gain may not come from additional parameters, but rather from a better inductive bias through the dynamic window attention.",
"Local Window Visualization.",
"To further interpret our window-based attentions, Figure 3a shows the cross-attention soft masking values ( m q ) on the source tokens for each target token in an En-Fr test sample assigned by our Enc(AW)-Cr(AW,Seg)-Dec(MW) model.",
"The darker the score, the higher the attention is from a target token to a source token.",
"We can see the relevant subwords are captured by the attentions quite well, which promotes ngram-level alignments.",
"For instance, the mask ( m q ) guides the model to evenly distribute attention scores on sub-words Co@@ and en (Fig. 3b), while standard attention is biased towards Co@@ (Fig. 3c).",
"Similar phenomenon can be seen for Bro@@ and thers (towards fr`eres).",
"We evaluate our models on the Stanford Sentiment Treebank (SST) (Socher et al., 2013), IMDB sentiment analysis (Maas et al., 2011) and Subject-Verb Aggreement (SVA) (Linzen et al., 2016) tasks.",
"We compare our attention methods (incorporated into the Transformer encoder) with the encoders of Vaswani et al. ( 2017), Shaw et al. (2018) and Yang et al. (2018).",
"Training Setup.",
"As the datasets are quite small compared to the MT datasets, we used tiny versions of our models as well as the baselines.",
"4 Specifi-cally, the models consist of a 2-layer Transformer encoder with 4 attention heads, 128 hidden dimensions and 512 feedforward inner dimensions.",
"In these experiments, our attention methods are applied only to the first layer of the network.",
"We trained for 3,000, 10,000 and 10,000 updates for SST, IMDB and SVA tasks, respectively on a single GPU machine.",
"Results.",
"Table 5 shows the results.",
"Our multiplicative window approach (Enc (MW)) achieves up to 79.7%, 85.1% and 95.95% accuracy in SST, IMDB and SVA, exceeding Transformer (Vaswani et al., 2017) by 0.4%, 1.35% and 1.47%, respectively.",
"Our additive window attention (Enc (AW)) renders even more improvements.",
"Specifically, it outperforms Transformer with relative position (Shaw et al. 2018) by 2.4% and 3.37%, 0.92% reaching 82.13%, 87.98% and 96.19% accuracy in SST, IMDB and SVA, respectively.",
"In fact, the results demonstrate consistent trends with our earlier MT experiments: additive window attention outdoes its multiplicative counterpart in the encoder, 4 As specified in https://github.com/tensorflow/tensor2tensor.",
"Finally, to demonstrate our proposed methods as effective general purpose NLP components, we evaluate them on the One Billion Word LM Benchmark dataset (Chelba et al., 2013).",
"The dataset contains 768 million words of data compiled from WMT 2011 News Crawl data, with a vocabulary of 32,000 words.",
"We used its held-out data as the test set.",
"Training Setup.",
"As the LM dataset is considerably large, we used the same model settings as adopted in our MT experiments.",
"For these experiments, we only trained the models on virtually 4 GPUs for 100,000 updates using gradient aggregation on a single GPU machine.",
"Note that only the self-attention based autoregressive decoder of the Transformer framework is used in this task.",
"Therefore, the method of Yang et al. (2018) is not applicable to this task.",
"Results.",
"Table 6 shows the perplexity scores.",
"As can be seen, our multiplicative and additive window attention models both surpass Transformer (Vaswani et al., 2017) by 2.37 and 1.42 points respectively, reaching 44.00 and 44.95 perplexity scores respectively.",
"In addition, it is noteworthy that similar to MT experiments, multiplicative attention outperforms the additive one on this task, where the decoder is used.",
"This further reinforces the claim that where the global context is not fully available like in the decoder, the incomplete global context may induce noises into the model.",
"Thus, it is effective to embrace dynamic local window attention to suppress the global context, for which the multiplicative window attention is designed.",
"We have presented a novel Differential Window method for dynamic window selection, and used it",
"to improve the standard attention modules by enabling more focused attentions.",
"Specifically, we proposed Trainable Soft Masking and Segment-based Masking, which can be applied to en-coder/decoder self-attentions and cross attention.",
"We evaluated our models on four NLP tasks including machine translation, sentiment analysis, subject verb agreement and language modeling.",
"Our experiments show that our proposed methods outperform the baselines significantly across all the tasks.",
"All in all, we demonstrate the benefit of incorporating the differentiable window in the attention.",
"In the future, we would like to extend our work to make a syntactically-aware window that can automatically learn tree (or phrase) structures.",
"We would like to express our gratitude to Yi Tay and our anonymous reviewers for their insightful feedback on our paper.",
"Shafiq Joty would like to thank the funding support from his Start-up Grant (M4082038.020)."
] | [
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"objective",
"objective",
"result",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"objective",
"objective",
"objective",
"other",
"other"
] |
[
"The robustness and security of natural language processing (NLP) models are significantly important in real-world applications.",
"In the context of text classification tasks, adversarial examples can be designed by substituting words with synonyms under certain semantic and syntactic constraints, such that a well-trained model will give a wrong prediction.",
"Therefore, it is crucial to develop techniques to provide a rigorous and provable robustness guarantee against such attacks.",
"In this paper, we propose WordDP to achieve certified robustness against word substitution attacks in text classification via differential privacy (DP).",
"We establish the connection between DP and adversarial robustness for the first time in the text domain and propose a conceptual exponential mechanism-based algorithm to formally achieve the robustness.",
"We further present a practical simulated exponential mechanism that has efficient inference with certified robustness.",
"We not only provide a rigorous analytic derivation of the certified condition but also experimentally compare the utility of WordDP with existing defense algorithms.",
"The results show that WordDP achieves higher accuracy and more than 30 efficiency improvement over the state-of-the-art certified robustness mechanism in typical text classification tasks.",
"Deep neural networks (DNNs) have achieved state-of-the-art performance in many natural language processing (NLP) tasks, such as text classification (Zhang et al., 2015), sentiment analysis (Bakshi et al., 2016), and machine translation (Bahdanau et al., 2014), making the robustness and security of NLP models significantly important.",
"Recent studies have shown that DNNs can be easily fooled by adversarial examples, which are carefully crafted J. Lou is the corresponding author.",
"by adding imperceptible perturbations to input examples during inference time (Szegedy et al., 2013).",
"In the context of text classification tasks, adversarial examples can be designed by manipulating the word or characters under certain semantic and syntactic constraints (Ren et al., 2019; Jin et al., 2019; Zang et al., 2020; Gao et al., 2018).",
"Among all the attack strategies, word substitution attacks, in which attackers attempt to alter the model output by replacing input words with their synonyms, can maximally maintain the naturalness and semantic similarity of the input.",
"Therefore, in this paper, we consider such word substitution attacks and focus on defending against such attacks.",
"Figure 1 shows an example of the word substitution attack where the clean input text is changed into adversarial text by substituting input words from a synonym list.",
"Various mechanisms have been developed to Figure 1: Word Substitution Attack and Certified Robustness via WordDP .",
"defend against adversarial examples in text classification models.",
"Miyato et al .",
"(2016) applied adversarial training to the text domain that involves adversarial examples in the training stage.",
"Data augmentation in the training phase is another defense approach to improve model robustness.",
"For example, Synonyms Encoding Method (SEM) proposed by Wang et al .",
"(2019), Dirichlet Neighborhood Ensemble (DNE) proposed by Zhou et al .",
"(2020), and Robust Encodings (RobEn) proposed by Jones et al .",
"(2020) are different data augmentation methods on either embedding space or word space.",
"However, all the above-mentioned works are only evaluated empirically and have no theoretical analysis or guarantee on the robustness of the methods in that they may be broken by other adaptive attacks.",
"Therefore, it is important to provide rigorous and provable certified defense.",
"There are several attempts to achieve certified robustness for word substitution attacks.",
"Jia et al .",
"(2019) and Huang et al .",
"(2019) utilize Interval Bound Propagation (IBP) to compute an upper bound on the model's loss in the forward pass and minimize this bound via backpropagation.",
"Although IBP gives a theoretical bound, it does not provide any certification condition.",
"Another limitation is that it is not applicable to character-level DNNs, because IBP is limited to continuous space so that model input should be the word-level embedding.",
"SAFER (Ye et al., 2020) achieves certified robustness with a new randomized smoothing technique.",
"However, its computation of synonym set intersection greatly reduces the computation speed in the inference stage.",
"Besides, SAFER only provides a theoretical certified accuracy and its empirical effectiveness on adversarial examples has not been evaluated.",
"In this paper, we propose a novel approach WordDP to certified robustness against word substitution attacks in text classification via differential privacy (DP) (Dwork, 2008).",
"Figure 1 is a high-level illustration.",
"In the inference phase, the input go through a randomized mechanism WordDP.",
"If a clean input satisfies the certification condition of WordDP , its adversarial counterpart is guaranteed to predict the same output label.",
"DP is a privacy framework that protects the information of individual record in the database by randomized computations, such that the change of the computation output is bounded when small perturbation is applied on the database.",
"This stable output guarantee is in parallel with the definition of robustness: ensuring that small changes in the input will not result in dramatic shift of its output.",
"The idea of providing robustness certification via DP was originally introduced in PixelDP (Lecuyer et al., 2019) which is specifically designed for norm-bounded adversarial examples in the continuous domain for applications like image classification.",
"However, it is challenging to directly apply such an idea against word substitution attack, due to the discrete nature of the text input space.",
"Therefore, in this work, we develop WordDP to achieve the DP and robustness connection in the discrete text space by exploring novel application of the exponential mechanism (McSherry and Talwar, 2007), conventionally utilized to realize DP for answering discrete queries.",
"To achieve this, we present a conceptual certified robustness algorithm that randomly samples word-substituted sentences according to the probability distribution designated by the exponential mechanism and aggregates their inference result as the final classification for the input.",
"A fundamental barrier limiting the conceptual algorithm from being applied in practice is that the sampling distribution of the exponential mechanism requires an exhaustive enumeration-based sub-step, which needs to repeat the model inference for every neighboring sentences with word substitutions from the input sentence.",
"To overcome this computational difficulty, we develop a practical simulated exponential mechanism via uniform sampling and re-weighted averaging, which not only lowers the computational overhead but also ensures uncompromising level of certified robustness.",
"Our contribution can be summarized as follows: 1) We propose WordDP to establish the connection between DP and certified robustness for the first time in text classification domain (Sec.4.1).",
"2) We leverage conceptual exponential mechanism to achieve WordDP and formally prove an L -word bounded certified condition for robustness against word substitution attacks (Sec.4.2).",
"3) We develop a simulated exponential mechanism via uniform sampling and weighted averaging to overcome the computation bottleneck of the conceptual exponential mechanism without compromising the certified robustness guarantee (Sec.4.3).",
"4) Extensive experiments validate that WordDP outperforms existing defense methods and achieves over 30 efficiency improvement in the inference stage than the state-of-the-art certified robustness mechanism (Sec.5).",
"Word Substitution Attacks.",
"Various attacks have been developed to fool DNNs in text classification, including substituting a word with its synonyms (Ren et al., 2019; Jin et al., 2019; Zang et al., 2020; Alzantot et al., 2018), manipulating the characters (Gao et al., 2018; Ebrahimi et al., 2018), and perturbation on the embedding space (Papernot et al., 2016; Liang et al., 2018; Sato et al., 2018; Cheng et al., 2019).",
"words in a sentence with their synonyms according to a synonym table, including PWWS (Ren et al., 2019), TEXTFOOLER (Jin et al., 2019), among others (Zang et al., 2020).",
"In particular, PWWS is the most widely used attack algorithm to evaluate defense mechanisms (Zhou et al., 2020; Jia et al., 2019; Ye et al., 2020).",
"PWWS uses WordNet to build synonym set and only replaces named entities (NEs) with similar NEs in order to flip the prediction.",
"It incorporates word saliency to determine the replacement order and selects the synonym that can cause the greatest prediction probability change.",
"Empirical Defenses to Word Substitution Attacks.",
"Several existing empirical defenses are effective for adversarial word substitution.",
"Miyato et al .",
"(2016) applied adversarial training to the text domain.",
"Wang et al .",
"(2019) proposed Synonyms Encoding Method (SEM), which finds a mapping between the words and their synonyms before the input layer.",
"Jones et al .",
"(2020) proposed robust encodings (RobEn) that involves an encoding function to map sentences to a smaller, discrete space.",
"Dirichlet Neighborhood Ensemble (DNE) (Zhou et al., 2020) creates virtual sentences by mixing the embedding of the original word with its syn-onyms' embedding via Dirichlet sampling, which is randomized smoothing based data augmentation.",
"Certified Robustness.",
"Certified robustness has been first studied in image domain, which certifies that a model is robust to adversarial examples when its prediction result is stable when applying small perturbations to the input (Lecuyer et al., 2019; Cohen et al., 2019; Lee et al., 2019).",
"In text domain, Jia et al .",
"(2019) and Huang et al .",
"(2019) both applied Interval Bound Propagation (IBP) for certification.",
"The intuition is to compute an upper bound on the model's loss through the network in a standard forward pass and minimize this upper bound via backpropagation.",
"One major limitation of IBP certification is that it is not applicable to character-level DNNs, because IBP is limited to continuous space (word-level embedding).",
"SAFER (Ye et al., 2020) is a certified robust method based on randomized smoothing.",
"The certification is based on the intersection of synonym sets between perturbed examples and clean examples.",
"However, its computation of synonym set intersection greatly reduces the inference efficiency.",
"Besides, it lacks thorough evaluation of empirical effectiveness on adversarial examples.",
"Adversarial Word Substitution.",
"Consider a sentence of ! words X = ( x 1 , x 2 , ..., x i , ..., x ! ) , where each word x i belongs to a synonym set of ( i ) number of synonyms S ( x i ) = { x 1 i , x 2 i , ..., x ( i ) i } .",
"Following common practice (Ye et al., 2020), we also assume the synonymous relation is symmetric, such that x i is in the synonym set of all its synonyms x 2 i , ..., x ( i ) i and S ( x ji ) = S ( x ki ) for all j, k 2 [ ( i )] .",
"The synonym set S ( x i ) can be built by following GLOVE (Pen-nington et al., 2014b).",
"Definition 3.1.",
"( L -Adversarial Word Substitution Attack)",
"For an input sentence X , an L adversarial word substitution attack perturbs the sentence by selecting at most L ( L ! ) words x 1 , ..., x L and substitutes each selected word x i with one of its synonyms x 0 i 2 S ( x i ) .",
"We denote an attacked sentence by X 0 and the set of all possible attacked sentences by S ( L ) .",
"Certified Robustness.",
"In general, we say a model is robust to adversarial examples when its prediction result is stable when applying small perturbations to the input.",
"Definition 3.2.",
"(Certified Robustness to Word Substitution Attack)",
"Denote a multiclass classification model by f ( X ) : X 7! c 2 C , where c is a label in the possible label set C = { 1 , ..., C } .",
"In general, f ( X ) outputs a vector of scores f y ( X ) = ( f y 1 , ..., f y C ) 2 Y , where Y = { y : P Ci =1 f y i = 1 , f y i 2 [0 , 1] } , and c = arg max i 2 C f y i .",
"A predictive model f ( X ) is robust to L -adversarial word substitution attack on input X , if for all X 0 2 S ( L ) , it has f ( X ) = f ( X 0 ) , which is equivalent to y c ( X 0 ) > max i 2 C : i 6 = c y i ( X 0 ) .",
"Differential Privacy.",
"The concept of DP is to prevent the information leakage of an individual record in the database by introducing randomness into the computation.",
"More specifically, DP guarantees the output of a function over two neighbouring databases are indistinguishable.",
"Definition 3.3.",
"(Differential Privacy (Dwork et al., 2006)) A randomized mechanism A is differentially private if, for all neighboring datasets D D 0 that differ in one record or are bounded by certain distance and for all events O in the output space O of A , we have P [ A ( D ) 2 O ] e P [ A ( D 0 ) 2 O ] .",
"(2) Exponential Mechanism.",
"The exponential mechanism is a commonly utilized DP mechanism in the discrete domain, which consists of the utility score function, sensitivity, and sampling probability distribution as its key ingredients.",
"Definition 3.4.",
"( Exponential Mechanism (Mc-Sherry and Talwar, 2007) ) Denote the score function u ( D , r ) : D R 7!",
"R , which maps each pair of input dataset D D and candidate result r 2 R to a real valued score.",
"Denote the sensitivity by \u0000 u := max r 2 R max D D 0 | u ( D , r ) \u0000 u ( D 0 , r ) | .",
"The exponential mechanism ME ( D , u, R ) selects and outputs an element r 2 R with probability proportional to e u ( D , r ) 2 \u0000 u .",
"The exponential mechanism is -differentially private.",
"WordDP .",
"We expand the intuition that DP can be applied to provide certified robustness against textual adversarial examples like word substitution attack by regarding the sentence as a database and each word as a record.",
"If the randomized predictive model satisfies -DP during inference, then the output of a potentially adversarial input X 0 2 S ( L ) and the output of the original input X should be indistinguishable.",
"Thus, our proposed approach is to transform a multiclass classification model's prediction score into a randomized WordDP score, which is formally defined below.",
"Definition 4.1.",
"(Word Differential Privacy)",
"Consider any input sentence X and its L -word substitution sentence set S ( L ) .",
"For a randomized function f A ( X ) , let its prediction score vector be y 2 Y .",
"f A ( X ) satisfies -word differential privacy ( WordDP ), if it satisfies -differential privacy for any pair of neighboring sentences X 1 , X 2 2 S ( L ) and the output space y 2 Y .",
"Remark 1. We stress that WordDP does not seek DP protection for the training dataset as in the conventional privacy area.",
"Instead, it leverages the DP randomness for certified robustness during inference with respect to a testing input.",
"In practice, for a base model f , a DP mechanism A will be introduced to randomize it to f A .",
"For an WordDP model f A , its expected prediction E [ f A ( X )] is certified robust.",
"Denote the prediction score vector of E [ f A ( X )] by E [ f y A ( X )] = ( E [ f y 1 A ( X )] , ..., E [ f y CA ( X )]) 2 Y .",
"Lemma 4.2 shows E [ f y A ( X )] satisfies the certified robustness condition in",
"eq.(1), based on Lemma 4.1 that shows each expected prediction score E [ f y i A ( X )] is stable.",
"Lemma 4.1.",
"For an -WordDP model f A , its prediction score satisfies the relation, 8 i 2 [ C ] , E [ f y i A ( X 1 )] e E [ f y i A ( X 2 )] , 8 X 1 , X 2 2 L .",
"From the above property, we can derive the certified robustness condition to adversarial examples.",
"Lemma 4.2.",
"For an -WordDP model f A and an input sentence X , if there exists a label c such that: E ( f y c A ( X )) > e 2 max i 6 = c E ( f y i A ( X )) , (4) then the multiclass classification model f A based on the expected label prediction score vector E [ f y A ( )] is certified robust to L -adversary word substitution attack on X .",
"The proofs of the above two lemmas can be adapted from the pixelDP to WordDP context based on Lemma 1 and Proposition 1 in Lecuyer et al .",
"(2019).",
"We relegate the proofs to Appendix A. Our focus is how to design the DP mechanism A to achieve WordDP (Subsection 4.2), and how to implement it for efficient inference that still ensures certified robustness (Subsection 4.3).",
"4.2 WordDP with Exponential Mechanism In this subsection, we present the conceptual exponential mechanism-based algorithm to achieve WordDP and the certification procedure.",
"Exponential Mechanism for WordDP .",
"To obtain the DP classifier f A given the base model f , we introduce the exponential mechanism ME as the randomization mechanism A and define f A := f ( ME ) .",
"Given an input example, the mechanism selects and outputs L -substitution sentences with a probability based on exponential mechanism.",
"It then aggregates the inferences of these samples by an average as the estimated prediction of the input.",
"Figure 2 illustrates the algorithm.",
"Definition 4.2.",
"(Exponential Mechanism for WordDP and L -Certified Robustness)",
"Given the base model f , for any input sentence X and potential L -substitution sentence set S ( L ) , we define the utility score function as: u ( S ( L ) , X 0 ) = e \u0000k f y ( X 0 ) \u0000 f y ( X ) k 1 , (5) Figure 2: WordDP with Exponential Mechanism.",
"which associates a utility score to a candidate output X 0 2 S ( L ) .",
"The sensitivity of the utility score is \u0000 u = 1 \u0000 e \u0000 1 .",
"Then, the exponential mechanism selects and outputs X 0 with probability PX 0 PX 0 = 1 exp( u ( S ( L ) , X 0 ) 2 \u0000 u ) , (6) where = P | S ( X ,L ) | i =1 exp( u ( S ( L ) , X 0 i ) 2 \u0000 u ) is the normalization factor.",
"Proposition 4.1.",
"The exponential mechanism M ( E ) satisfies -DP.",
"The composition model function f ME ( X ) := f ( ME ( X )) is -DP and its prediction score vector E [ f y ME ( X )] -based classification is certified robust to L -adversary word substitution attack on X .",
"Proof.",
"To show ME is -DP, we prove the sensitivity of the utility score (maximum difference between the utility scores given any two neighboring input) \u0000 u is indeed 1 \u0000 e \u0000 1 and the remaining follows the definition of the exponential mechanism (c.f.Definition 3.4).",
"Since k f y ( X 0 i ) \u0000 f y ( X ) k 1 is the prediction probability change which is in [0 , 1] , we have u ( S ( L ) , X 0 i ) 2 [ e \u0000 1 , 1] , which leads to \u0000 u = 1 \u0000 e \u0000 1 .",
"Next, since ME ( X ) is -DP, by the post-processing property (i.e., any computation on the output of the DP mechanism remains DP, Proposition 2.1 in (Dwork et al.,",
"2014).), f ME ( X ) is also -DP.",
"Subsequently, by Lemma 4.2, E [ f ME ( X )] is L -certified robust on X .",
"Remark 2. 1) The design of the utility function has the intuition that we wish to assign higher probability to sentences that have minimal impact on the prediction score function.",
"2) The privacy budget influences whether the sampling probability distribution is flat (lower ) or peaky (greater ).",
"Too small of an value will clearly affect the prediction accuracy.",
"For certification purpose, according to the certified condition Lemma 4.2, too large of an value will result in none certified, so can only be searched within a limited range.",
"Certification Condition.",
"It is a common practice in certified robustness literature to estimate E [ f y ME ( X )] via Monte Carlo estimation (Lecuyer et al., 2019; Cohen et al., 2019) in the form of b E [ f y ME ( X )] .",
"That is, we repeat the exponential mechanism-based inference to draw n samples of f y ME ( X 0 ) , for 2 [ n ] and let b E [ f y ME ( X )] = 1 n P n =1 f y ME ( X 0 ) .",
"The estimation error between b E [ f y ME ( X )] and E [ f y ME ( X )] can be bounded based on Hoeffd-ing's inequality with probability , which guarantees that b E [ f y ME ( X )] 2 [ E [ f y ME ( X )] \u0000 q 12 n ln ( 2 C 1 \u0000 ) , E [ f y ME ( X )] + q 12 n ln ( 2 C 1 \u0000 )] := [ b E lb [ f y ME ( X )] , b E ub [ f y ME ( X )]] .",
"The next proposition shows that the inference based on the estimated b E [ f y ME ( X )] (as versus E [ f y ME ( X )] ) can still ensure certified robustness.",
"Proposition 4.2.",
"Under the same condition with Proposition 4.1, if there exists a label c such that b E lb [ f y c ME ( X )] > e 2 max i 6 = c b E ub [ f y i ME ( X )] , (7) the prediction score vector b E [ f y ME ( X )] -based classification is certified robust with probability to L -adversary word substitution attack on X .",
"Simulated Exponential Mechanism.",
"The conceptual exponential mechanism in Definition 4.2 is computationally impractical.",
"The bottleneck is the need to enumerate the entire S ( L ) in order to calculate the probability distribution of PX 0 for each X 0 2 S ( L ) and the normalization factor , which essentially requires us to perform inference for S ( L ) \u0000 n times ( n is the number of samples) for certifying a single input sentence X .",
"In the following, we show that we can significantly reduce the computation cost by sampling via a simulated exponential mechanism, which suffices to sample n candidate L \u0000 substitution sentences and calculate only n times, i.e., the same repetitions as the Monte Carlo estimation.",
"The key insight is based on the different purpose of applying the exponential mechanism between the conventional scenario for achieving DP and our certified robustness scenario.",
"For the former, in order to ensure DP of the final output f ME ( X 0 ) , the intermediate X 0 is forced to satisfy DP, i.e., drawn from the exact probability distribution designated by the exponential mechanism.",
"For the latter, while the derivation of the certified robustness relied on the randomness of DP and the exponential mechanism, we do not actually require the DP of the intermediate X 0 .",
"As a result, it allows us to sample X 0 from other simpler distributions without calculating the probability distribution of the exponential mechanism, as long as the alternative approach can obtain the equivalent b E [ f y A ( X )] for robustness certification.",
"We develop a simulated exponential mechanism via uniform sampling and re-weighted average prediction score calculation .",
"Figure 2 shows the simulated mechanism in contrast to the conceptual mechanism.",
"In detail, we sample from S ( L ) with uniform probability, which can be efficiently implemented without generating S ( L ) .",
"Denoting a sample by X 0 , we calculate its scaled exponential mechanism probability by PX 0 = exp( u ( S ( L ) , X 0 ) 2 \u0000 u ) , (8) which can be obtained via a single inference on X 0 and the inference on X due to the omission of the normalization factor that requires the entire S ( L ) .",
"Finally, we use the following re-weighted average prediction score (weighted by the scaled exponential mechanism probability) for certified robust prediction, E [ f y ME ( X )] = n X =1 PX 0 f y ME ( X 0 ) .",
"(9) The following theorem shows that E [ f y ME ( X )] based prediction guarantees certified robustnessand the conceptual exponential mechanism-based inference in Proposition 4.2 is certified robust provided E [ f y ME ( X )] is so.",
"The inference on X only needs to be computed once and shared by all n Monte Carlo repetitions.",
"Such uniform sampling and scaled probability calculation is repeated for n times, which requires only n + 1 inferences.",
"Theorem 4.1.",
"For any input X , let E [ f y ME ( X )] be calculated by",
"eq.(9).",
"Denote E lb [ f y ME ( X )] and E ub [ f y ME ( X )] be -confidence lower and upper bounds, respectively, i.e., E lb [ f y ME ( X )] = E [ f y ME ( X )] \u0000 q 12 n ln ( 2 C 1 \u0000 ) and E ub [ f y ME ( X )] = E [ f y ME ( X )] + q 12 n ln ( 2 C 1 \u0000 ) .",
"If there exists a label c such that E lb [ f y c ME ( X )] > e 2 max i 6 = c E ub [ f y i ME ( X )] , (10) the prediction score vector E [ f y ME ( X )] -based classification is certified robust with probability to L -adversary word substitution attack on X .",
"The proof of Theorem 4.1 requires the following lemma, which is adapted from Lemma 4.1 from the accurate expectation of E [ f y ME ( X )] to the simulated expectation E [ f y ME ( X )] .",
"We stress that during both proofs, we do not use the DP property of E [ f y ME ( )] , but only its equivalent relation to b E [ f y i ME ( )] .",
"Lemma 4.3.",
"For any label i 2 [ C ] and any X 1 , X 2 2 S ( L ) , let E [ f y ME ( X )] be computed by",
"eq.(9).",
"Then, we have E [ f y i ME ( X 1 )] e E [ f y i ME ( X 2 )] .",
"Proof.",
"First, we notice that for any X 0 2 S ( L ) , it has E [ f y i ME ( X 0 )] = | S ( L ) | b E [ f y i ME ( X 0 )] by P [ X 0 ] = P [ X 0 ] and the uniform sampling probability 1 | S ( L ) | .",
"Second, since b E [ f y i ME ( X 0 )] is WordDP , we can show that it satisfies Lemma 4.1 by switching E [ f y i ME ( )] there to b E [ f y i ME ( )] here.",
"It follows that: E [ f y i ME ( X 1 )] = b E [ f y i ME ( X 1 )] ( | S ( L ) | ) e b E [ f y i ME ( X 2 )] ( | S ( L ) | ) = e E [ f y i ME ( X 2 )] , which proves the lemma.",
"Proof.",
"(Proof of Theorem 4.1)",
"For any X 0 2 S ( L ) , by",
"eq.(11), we have e E [ f y c ME ( X 0 )] \u0000 E [ f y c ME ( X )] > E [ f y c ME ( X )] \u0000 r 1 2 nln ( 2 C 1 \u0000 ) = E lb [ f y c ME ( X )]; as well as E [ f y i ME ( X 0 )] e E [ f y i ME ( X )] e max i 6 = c E [ f y i ME ( X )] e max i 6 = c ( E [ f y i ME ( X )] + r 1 2 nln ( 2 C 1 \u0000 )) = e max i 6 = c E ub [ f y i ME ( X )] .",
"Equipped with the above two relations, we can prove the claim in Theorem 4.1.",
"We show that E [ f y i ME ( X )] is certified robust for any X 0 2 S ( L ) , as follows, E [ f y c ME ( X 0 )] > E lb [ f y c ME ( X )] /e > e max i 6 = c E ub [ f y i ME ( X )] > e max i 6 = c E [ f y i ME ( X 0 )] .",
"which is E [ f y c ME ( X 0 )] > e 2 max i 6 = c E [ f y i ME ( X )] .",
"For completeness, we can also show that the certified robustness of E [ f y A ( X )] implies the certified robustness of b E [ f y A ( X )] : b E [ f y c ME ( X 0 )] = ( | S ( L ) | ) E [ f y c ME ( X 0 )] > ( | S ( L ) | ) E lb [ f y c ME ( X )] /e > ( | S ( L ) | ) e max i 6 = c E ub [ f y i ME ( X )] > ( | S ( L ) | ) max i 6 = c E [ f y i ME ( X 0 )] = max i 6 = c b E [ f y i ME ( X 0 )] , which proves b E [ f y c ME ( X 0 )] > max i 6 = c b E [ f y i ME ( X 0 )] .",
"Training procedure.",
"To achieve a better certification result, we involve randomness in the training stage, which is also adopted by almost all certified robustness approaches.",
"To do so, we use the data augmentation strategy that utilizes the perturbed sentences for training, i.e., X 0 2 S ( L ) \\ X given the original training sample X .",
"In practice, we first train the model without data augmentation for several epochs to achieve a reasonable performance, followed by training with perturbed X 0 .",
"For each training data point, we randomly draw one neighbour sentence during training (as opposed to multiple draws during certified inference).",
"5 Experiments We evaluate WordDP on two classification datasets: Internet Movie Database (IMDB) (Maas et al., 2011) and AG News corpus (AGNews) (Zhang et al., 2015).",
"IMDB is a binary sentiment classification dataset containing 50000 movie reviews.",
"AGNews includes 30,000 news articles categorized into four classes.",
"The target model architecture we select is a single-layer LSTM model with size of 128.",
"We use Global Vectors for Word Representation (GloVe) (Pennington et al., 2014a) for word embedding.",
"The LSTM model achieves 88.4% and 91.8% clean accuracy on IMDB and AGNews, respectively.",
"We use PWWS (Ren et al., 2019) to generate adversarial examples on the test dataset.",
"PWWS is a state-of-the-art attack method which uses WordNet to build synonym set and incorporates word saliency to replace selected named entities (NEs) with their synonyms in order to flip the prediction.",
"The details about the datasets, model training and attack algorithm are in Appendix C. 5.1 Evaluation Metrics and Baselines We use four metrics to evaluate the effectiveness of WordDP : certified ratio, certified accuracy, conditional accuracy, and conventional accuracy.",
"Certified Ratio represents the fraction of testing set that the prediction satisfies the certification criteria: P Tt =1 certifiedCheck ( X t ,L, ) T , where certifiedCheck returns 1 if Theorem 4.1 is satis-fied and T is the size of the test dataset.",
"Certified accuracy (CertAcc) denotes the fraction of the clean testing set on which the predictions are both correct and satisfy the certification criteria.",
"This is a standard metric to evaluate certified robust model (Lecuyer et al., 2019).",
"Formally, it is defined as: P Tt =1 certifiedCheck ( X t ,L, )& corrClass ( X t ,L, ) T , where corrClass returns 1 if the classification output is correct.",
"When the accuracy of a model is close to 100%, certified accuracy largely reflects certified ratio.",
"Conventional accuracy (Con-vAcc) is defined as the fraction of testing set that is correctly classified, P Tt =1 corrClass ( X t ,L, ) T , which is a standard metric to evaluate any deep learning systems.",
"Note that the input X t can be both adversarial or clean inputs.",
"We use this metric to evaluate how WordDP empirically works on adversarial examples.",
"Besides the above standard metrics, we introduce a new accuracy metric called Conditional accuracy (CondAcc) to evaluate the following: when a clean input X t is certified within bound L , whether its corresponding L -word substitution adversarial example X advt is indeed correctly classified.",
"The CondAcc can be formulated as: P Tt =1 certifiedCheck ( X t ,L, )& corrClass ( X advt ,L, ) P Tt =1 certifiedCheck ( X t ,L, ) .",
"While certified accuracy is typically evaluated on clean inputs in the literature to show the certified robustness property, conditional accuracy is evaluated on adversarial inputs and provides an informative measure of the classification result of adversarial examples when its counterpart clean input can be certified.",
"This metric is aligned with the definition and purpose of certified robustness.",
"Ideally, if a clean example is successfully certified, adversarial examples created from this clean example should have the same prediction.",
"Therefore, the accuracy of adversarial examples is influenced by the ConvAcc of clean examples.",
"Comparison Methods.",
"We compare WordDP with the state-of-the-art certified robust method SAFER for text classification.",
"We note that SAFER only reports certified accuracy, without accuracy on adversarial examples.",
"To conduct a fair comparison with WordDP , we rerun SAFER on the adversarial examples and report the comparison",
"in CertAcc and CondAcc.",
"Besides SAFER, we also compare the ConvAcc on adversarial examples with two state-of-the-art defense methods, i.e., IBP (Jia et al., 2019) and DNE (Zhou et al., 2020), which do not provide certified robustness guarantee.",
"Thus, their defense may be broken by more powerful word substitution attacks in the future.",
"Certified Accuracy.",
"Figure 3 presents the CertAcc, CondAcc and ConvAcc under different and L , respectively.",
"Each line in the figures represents a certified bound L , which allows L number of words to be substituted.",
"The first row is the results on IMDB, and the second row is on AGNews.",
"Figures",
"3(a) and",
"3(d) show the certified accuracy on the two datasets.",
"Since the conventional accuracy on the clean examples of our mechanisms is close to 100% (as shown in Figures",
"3(c) and",
"3(f)), the certified accuracy mainly reflects the certified ratio (which we skip in the results).",
"As shown, higher can result in lower CertAcc.",
"This is intuitive as the condition in Theorem 4.1 is more difficult to satisfy when given higher epsilon, i.e. weaker requirement of indistinguishability of the output, hence results in lower certified ratio.",
"As illustrated in",
"3(a), when is around 1 .",
"5 , the mechanism will approach 0 certified ratio.",
"This indicates that can only be searched within a limited range.",
"Comparing each line in",
"3(a) and",
"3(d), we note that greater L results in higher CertAcc in most cases for the AGNews dataset.",
"This can be ex-ADV IBP DNE SAFER WordDP IMDB 0.172 0.722 0.823 0.727 0.972 AGNews 0.194 0.823 0.909 0.647 0.719 Table 1: Empirical comparison on accuracy",
"certified ratio.",
"Accuracy on Adversarial Examples.",
"Figures",
"3(b),",
"3(e),",
"3(c) and",
"3(f) present CondAcc and ConvAcc of the two datasets on adversarial examples, respectively.",
"Note that we only test the adversarial examples that are within the L bound.",
"We also show the CondAcc and ConvAcc for both clean and adversarial examples without any defense mechanisms as a reference.",
"In addition, we show ConvAcc of WordDP with varying parameters on clean examples to show the impact of the mechanism on clean examples.",
"As shown in the figures, WordDP achieves significantly higher accuracy on adversarial examples compared to no defense while maintaining the close to 100% accuracy on clean examples.",
"Conditional",
"(a) Fixed attack power 40",
"(b) Fixed defense power 40 Figure 5: The trend on accuracy under different defense and attack power accuracy is higher than conventional accuracy as expected, since it is computed only on those adversarial examples with a certified counterpart clean example.",
"Besides, we can observe that with higher , higher CondAcc on adversarial examples can be achieved.",
"This is because less randomness is introduced in the inference.",
"In addition, by comparing different L bound under the same , larger L can yield more accuracy improvement on adversarial examples but less on clean examples.",
"Intuitively, using the aggregated prediction of more distant neighbouring sentences (higher L ) can benefit adversarial examples more than clean examples.",
"Trader-off between Certified Ratio and CondAcc.",
"We can see that has an opposite impact on certified accuracy (certified ratio) and CondAcc, we present the trade-off between the certified ratio and CondAcc of WordDP in Figure 4 in comparison with the baseline method SAFER.",
"Ideally, we want both high certified ratio and high condAcc to contribute to overall high accuracy.",
"The black dot represents the baseline SAFER, since the neighbouring sentence generating method of SAFER does not depend on L or .",
"As illustrated on these two datasets, with L = 20 and L = 40 , WordDP can dominate SAFER and achieve a much better performance in both certified ratio and condAcc.",
"Relation between certified bound L and adversarial attack power L adv .",
"Figure 5 presents the three accuracy metrics under different attack power and defense power.",
"In Figure",
"5(a), we fix the attack power L adv to 40, which means allowing less than 40 word substitutions, and adjust the WordDP defense power by using different certified bound L .",
"As discussed in Section 4, certified bound L determines the size of neighbouring set.",
"Greater L leads to higher randomness and thus can benefit the CondAcc and ConvAcc on adversarial examples.",
"On the other hand, greater L also makes the certified condition more difficult to be satisfied, which result in lower CertAcc.",
"In Figure",
"5(b), we fix the certified bound L to 40, which means using the same power of WordDP to defend against adversarial examples generated by varying attack power L adv .",
"As shown in the figure, the performance increases with higher attack power.",
"This is because the adversarial examples with more word changes (higher L adv ) are more difficult to generate but easier to defend (due to the nature of PWWS attack algorithm).",
"Comparison with Empirical Defense.",
"Besides certified robust method SAFER, we also compare CondAcc of WordDP with baseline empirical defense methods, IBP (Jia et al., 2019) and DNE (Zhou et al., 2020).",
"Table 1 compares the highest CondAcc achieved by WordDP with the conventional accuracy reported by the baselines (ADV corresponds to no defense).",
"WordDP achieves a much higher accuracy on IMDB dataset compared to IBP, DNE and SAFER.",
"For AGNews, the accuracy of WordDP outperforms SAFER, but is lower than the two empirical defenses.",
"We stress, however, the empirical defense methods do not provide any rigorous certified robustness guarantees and the performance can be significantly dependent on datasets and specific attacks.",
"Efficiency Comparison.",
"We also compare the efficiency of WordDP with SAFER by computing the average time cost for certifying one input and producing the Monte Carlo sampling-based output.",
"It takes WordDP 6.25s and 3.21s on IMDB and AGNews, respectively.",
"As a comparison, it costs SAFER 230.35s and 96.68s.",
"Thus, WordDP achieves more than 30 efficiency improvement.",
"We proposed WordDP , a certified robustness method to adversarial word substitution attacks with the exponential mechanism-based algorithm.",
"Compared with previous work, WordDP achieves notable accuracy improvement and 30 efficiency improvement.",
"In the future, it would be interesting to expand WordDP to other kinds of textual adversarial examples, such as character-level attacks.",
"It is also worthwhile to study other certified approaches such as random smoothing.",
"We sincerely thank all anonymous reviewers for their constructive comments.",
"This work is partially supported by the National Science Foundation (NSF) CNS-1952192, IIS-1838200, and National Institutes of Health (NIH) CTSA Award UL1TR002378."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"objective",
"result",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Implicit knowledge, such as common sense, is key to fluid human conversations.",
"Current neural response generation (RG) models are trained to generate responses directly, omitting unstated implicit knowledge.",
"In this paper, we present Think-Before-Speaking (TBS), a generative approach to first externalize implicit commonsense knowledge ( think ) and use this knowledge to generate responses ( speak ).",
"We expect that externalizing implicit knowledge allows more efficient learning, produces more informative responses, and enables more explainable models.",
"We analyze different choices to collect knowledge-aligned dialogues, represent implicit knowledge, and transition between knowledge and dialogues.",
"Empirical results show TBS models outperform end-to-end and knowledge-augmented RG baselines on most automatic metrics and generate more informative, specific, and commonsense-following responses, as evaluated by human annotators.",
"TBS also generates knowledge that makes sense and is relevant to the dialogue around 85% of the time 1 .",
"Human communication strives to achieve common ground , consisting of mutual beliefs and common knowledge (Stalnaker, 1978; Clark and Schaefer, 1989).",
"Such common ground depends not only on utterances, but also implicit knowledge.",
"For example, in Figure 1, this common ground includes the relevant implicit background knowledge rose is a type of flower .",
"Integrating such common ground in utterances is an implicit process often referred to as knowledge grounding (Clark and Brennan, 1991).",
"Recent state-of-the-art neural response generation (RG) models based on pre-trained language models (LM) mostly produce responses in an end-to-end manner (Vaswani et al., 2017; Zhang et al., Work done while Pei Zhou was an intern at Amazon Alexa AI 1 Code and data will be released after approval.",
"2020a; Lewis et al., 2020), i.e. , models are trained to take history and produce a response.",
"Since implicit knowledge is unstated in dialogue history, RG models do not explicitly learn knowledge grounding and may generate uninformative and hallucinated responses (Serban et al., 2017; Welleck et al., 2019; Roller et al., 2021).",
"Knowledge-grounded RG (Ghazvininejad et al., 2018; Dinan et al., 2019; Gopalakrishnan et al., 2019) addresses this issue, however, most approaches require a knowledge base (KB) to retrieve knowledge for RG (Zhou et al., 2018; Zhao et al., 2020; Eric et al., 2021), which may suffer from the limited knowledge coverage of the used KBs.",
"Some work also casts knowledge as a latent factor in generation (Tuan et al., 2020; Xu et al., 2021), which makes it hard to examine the quality of knowledge generation and how exactly RG uses the implicit knowledge, posing interpretability concerns.",
"We propose Think-Before-Speaking (TBS), an RG framework that trains the RG model to explicitly generate the implicit knowledge and use this knowledge to generate a response, inspired by inquiry-based discovery learning (Bruner, 1961).",
"We argue that this decomposition brings three ma-jor benefits: 1) compared with end-to-end RG, generated knowledge augments and/or constrains RG to produce more informative responses; 2) compared with knowledge-retrieval models, explicitly generating intermediate groundings can potentially generalize to knowledge not included in KBs and synergize with the RG process; 3) explicitly generated implicit knowledge used in RG provides a faithful explanation of the response intent.",
"This new RG paradigm poses three main challenges: (1) how to identify implicit commonsense knowledge associated with dialogue turns for training the knowledge generation module; (2) how to represent structured knowledge in natural language (NL) for neural generative models; and (3) how to integrate knowledge and dialogues while distinguishing implicit and explicit parts in responses.",
"To collect knowledge associated with each dialogue instance for training the TBS generative model, we propose weak supervision procedures to automatically align knowledge with each dialogue turn, rather than manually collecting human-annotations, which is expensive and unscalable.",
"This is achieved by using ConceptNet (Speer et al., 2017) as our knowledge base and different matching approaches to identify the implicit knowledge.",
"We explore several ways to format knowledge originally represented as structured triples into natural language so that RG models can adapt to the knowl-edge+response generation task easily.",
"We experiment with structured triples, triples converted to natural language, and a more colloquial question answering format.",
"To ensure a smooth transition between knowledge and dialogues, we consider using special symbols or prompts as separators.",
"To evaluate the TBS framework, we introduce new evaluation protocols to cover different aspects of the system, including response quality, knowledge quality, and how TBS models leverage generated knowledge.",
"We conduct extensive human evaluations for different variants of our training procedure.",
"Our experimental results show that our models produce more informative, specific, and responses that make more common sense compared to end-to-end RG models and other knowledge-augmented models such as knowledge-selection.",
"Knowledge quality analysis shows that at least 85% of generated knowledge makes sense and is relevant, and the generated novel knowledge (not in ConceptNet) also has high quality.",
"Furthermore, our TBS model even outperforms an RG model that takes in knowledge obtained using ground-truth responses, showing that explicitly generating implicit knowledge is a promising direction for response generation in open domain dialogue systems.",
"Our TBS RG paradigm extends the traditional RG setting by incorporating an additional component of implicit knowledge in the generation process to externalize the knowledge grounding step in RG.",
"We follow the common dialogue response generation setup (Weizenbaum, 1966; Ritter et al., 2011; Sordoni et al., 2015): given a dialogue history H (a sequence of dialogue utterances), generate an appropriate response R .",
"Current neural RG models often frame this task as a conditional language modeling problem.",
"Specifically, given a history ( H ) consisting of a sequence of n dialogue turns: X 1 , X 2 , ..., X n (each turn refers to an utterance containing a sequence of t i tokens: x i, 1 , x i, 2 , ..., x i,t i ) and a response ( R ) sentence Y comprised of a sequence of m tokens y 1 , y 2 , ..., y m , RG models aim to learn the conditional probability distribution by training on human dialogues: P ( R | H ) = m (cid:89) i =1 P ( y i | y <i , X 1 , ..., X n ) .",
"To make the implicit knowledge grounding step explicit, we introduce a new component to RG implicit knowledge that is conditioned on the dialogue history H .",
"We use I to denote the implicit knowledge for brevity, which contains multiple natural language (NL) statements I = Z 1 , Z 2 , ... (each containing a sequence of tokens: z i, 1 , z i, 2 , ... ) expressing commonsense knowledge.",
"For example, in Figure 1, rose is a type of flower and rose is a symbol of love are two NL statements expressing the implicit commonsense knowledge.",
"To emulate realistic conversation scenario, we also fuse dialogue history H in traditional RG with implicit knowledge I for each turn and denote it with H .",
"i.e. H = X 1 , I 1 , X 2 , I 2 ..., X n , where I i indicates the implicit knowledge statements for the i-th turn in the dialogue history.",
"based learning (Bruner, 1961; Shwartz et al., 2020a), our TBS RG paradigm requires models to first generate implicit knowledge I conditioned on H , i.e. P ( I n | H = X 1 , I 1 , X 2 , I 2 ..., X n ) .",
"This section introduces our proposed TBS method to train a generative model that can both talk with itself to explicitly generate background commonsense knowledge ( P ( I | H ) ) and then generate response afterwards, P ( R | H , I ) .",
"Figure 2 illustrates the process to train the TBS models.",
"To pair each dialogue with appropriate implicit knowledge, we first define a matching process and use ConceptNet (Speer et al., 2017) as the implicit knowledge source (Section 3.1).",
"Then, to construct training instances, we face two key method design choices: how to represent knowledge (3.2) and how to connect the knowledge with the dialogue (3.3).",
"Finally, we train TBS RG models to learn P ( I | H ) and P ( R | H , I ) with the same parameters .",
"The following sections explain these components in details.",
"To train TBS models we need dialogue datasets consisting of a dialogue history, a response, and the knowledge statement connecting them.",
"We focus on two methods that create weakly-supervised knowledge labels for dialogues as they are more scalable and cost less than human annotations.",
"Hard-Matching The hard-matching process first lemmatizes all the non-stop words in each utterance, then it identifies knowledge triples whose two concepts appear in an utterance and the next turn respectively.",
"This is the same as the filtering process in Zhou et al. (2021a) and is closely related to distant supervision methods for relation extraction (Craven et al., 1999; Mintz et al., 2009).",
"For more details, refer to Appendix A.1.",
"Soft-Matching Using Embedding Similarity Hard-matching only captures the surface form and neglects many important semantic relations between words.",
"We thus develop a soft-matching procedure using embedding similarity from SentenceBERT (Reimers and Gurevych, 2019) to measure semantic relations between dialogue turns and triples in ConceptNet.",
"Specifically, we first extract candidate triples from ConceptNet with one concept appearing in the i th turn.",
"Next, we form a query by concatenating the i th turn and the next ( i + 1) th turn response.",
"Finally, we encode the query and all triple candidates using SentenceBERT and use cosine similarity to find the semantically closest triples as matched knowledge.",
"More details are presented in Appendix A.1.",
"Implicit commonsense knowledge I stored in ConceptNet is in the form of (subject s , relation r , object o ) triples, such as (rose, TypeOf, flower) , which is not compatible with RG models, which operate on NL sentences and may not include relation tokens in their trained vocabulary.",
"Here we design two alternatives to represent the grounded knowledge and use the implicit knowledge in Figure 1 as a running example.",
"Map Relations to Natural Language (NL) To convert ConceptNet triples into NL, we follow a common practice and map every relation r in the triple to its NL template, and fill in s and o in the template (Levy et al., 2017).",
"We use the same mapping as that used in COMET (Bosselut et al., 2019), covering all standard types of relations in ConceptNet.",
"For example, rose is a type of flower; rose is a symbol of love .",
"Information-Seeking Question-Answer Pairs Another format to convert triples to NL sentences is through asking and answering information-seeking questions.",
"Shwartz et al. (2020b) designed templates of information-seeking questions and answers to provide background knowledge for LMs.",
"We adopt a similar strategy and design a template for each relation in ConceptNet.",
"For example, What is a type of flower?",
"Rose is a type of flower.",
"Rose is a symbol of what?",
"Rose is a symbol of love .",
"The mappings we use for these two types of representations are shown in Appendix A.2.",
"To help our RG models learn the TBS paradigm and generate outputs structured similarly, i.e., implicit knowledge first and then responses, we need to properly connect knowledge and dialogues in our data.",
"Here we consider two alternatives for creating such a transition.",
"Special symbols .",
"Following the common practice of separating sequences in neural LMs (Rad-ford et al., 2018; Devlin et al., 2019), we use a 1239 (cid:46)(cid:81)(cid:82)(cid:90)(cid:79)(cid:72)(cid:71)(cid:74)(cid:72)(cid:16)(cid:36)(cid:79)(cid:76)(cid:74)(cid:81)(cid:72)(cid:71)(cid:3)(cid:39)(cid:76)(cid:68)(cid:79)(cid:82)(cid:74)(cid:88)(cid:72)(cid:86)(cid:3)(cid:11)(cid:43)(cid:15)(cid:3)(cid:44)(cid:15)(cid:3)(cid:53)(cid:12) (cid:53)(cid:72)(cid:86)(cid:83)(cid:82)(cid:81)(cid:86)(cid:72)(cid:3)(cid:11)(cid:53)(cid:12) (cid:39)(cid:76)(cid:68)(cid:79)(cid:82)(cid:74)(cid:88)(cid:72)(cid:3)(cid:43)(cid:76)(cid:86)(cid:87)(cid:82)(cid:85)(cid:92)(cid:3)(cid:11)(cid:43)(cid:12)(cid:3) (cid:44)(cid:80)(cid:83)(cid:79)(cid:76)(cid:70)(cid:76)(cid:87)(cid:3)(cid:46)(cid:81)(cid:82)(cid:90)(cid:79)(cid:72)(cid:71)(cid:74)(cid:72)(cid:3)(cid:11)(cid:44)(cid:12)(cid:3) (cid:94)(cid:195)(cid:348)(cid:224)(cid:262)(cid:265)(cid:294)(cid:256)(cid:728)(cid:132)(cid:334)(cid:300)(cid:224)(cid:234)(cid:340)(cid:340) (cid:44)(cid:3)(cid:81)(cid:72)(cid:72)(cid:71)(cid:3)(cid:87)(cid:82)(cid:3)(cid:69)(cid:88)(cid:92)(cid:3)(cid:86)(cid:82)(cid:80)(cid:72)(cid:3)(cid:288)(cid:82)(cid:90)(cid:72)(cid:85)(cid:86)(cid:3)(cid:73)(cid:82)(cid:85)(cid:3)(cid:80)(cid:92)(cid:3)(cid:90)(cid:76)(cid:73)(cid:72)(cid:17) (cid:55)(cid:85)(cid:76)(cid:83)(cid:79)(cid:72)(cid:86)(cid:3)(cid:70)(cid:82)(cid:81)(cid:89)(cid:72)(cid:85)(cid:87)(cid:72)(cid:71)(cid:3)(cid:87)(cid:82)(cid:3)(cid:49)(cid:47) (cid:53)(cid:82)(cid:86)(cid:72)(cid:3)(cid:76)(cid:86)(cid:3)(cid:68)(cid:3)(cid:87)(cid:92)(cid:83)(cid:72)(cid:3)(cid:82)(cid:73)(cid:3)(cid:263)(cid:82)(cid:90)(cid:72)(cid:85)(cid:30)(cid:53)(cid:82)(cid:86)(cid:72)(cid:3)(cid:76)(cid:86)(cid:3)(cid:68)(cid:3)(cid:86)(cid:92)(cid:80)(cid:69)(cid:82)(cid:79)(cid:3)(cid:82)(cid:73)(cid:3)(cid:79)(cid:82)(cid:89)(cid:72)(cid:30)(cid:17)(cid:17)(cid:17) (cid:51)(cid:72)(cid:85)(cid:75)(cid:68)(cid:83)(cid:86)(cid:3)(cid:92)(cid:82)(cid:88)(cid:322)(cid:71)(cid:3)(cid:69)(cid:72)(cid:3)(cid:76)(cid:81)(cid:87)(cid:72)(cid:85)(cid:72)(cid:86)(cid:87)(cid:72)(cid:71)(cid:3)(cid:76)(cid:81)(cid:3)(cid:85)(cid:72)(cid:71)(cid:3)(cid:85)(cid:82)(cid:86)(cid:72)(cid:86)(cid:17) (cid:52)(cid:88)(cid:72)(cid:86)(cid:87)(cid:76)(cid:82)(cid:81)(cid:3)(cid:36)(cid:81)(cid:86)(cid:90)(cid:72)(cid:85)(cid:76)(cid:81)(cid:74) (cid:58)(cid:75)(cid:68)(cid:87)(cid:3)(cid:76)(cid:86)(cid:3)(cid:68)(cid:3)(cid:87)(cid:92)(cid:83)(cid:72)(cid:3)(cid:82)(cid:73)(cid:3)(cid:263)(cid:82)(cid:90)(cid:72)(cid:85)(cid:34)(cid:3)(cid:53)(cid:82)(cid:86)(cid:72)(cid:3)(cid:76)(cid:86)(cid:3)(cid:68)(cid:3)(cid:87)(cid:92)(cid:83)(cid:72)(cid:3)(cid:82)(cid:73)(cid:3)(cid:263)(cid:82)(cid:90)(cid:72)(cid:85)(cid:30)(cid:17)(cid:17)(cid:17) (cid:39)(cid:76)(cid:68)(cid:79)(cid:82)(cid:74)(cid:88)(cid:72)(cid:39)(cid:68)(cid:87)(cid:68)(cid:86)(cid:72)(cid:87)(cid:86) (cid:46)(cid:81)(cid:82)(cid:90)(cid:79)(cid:72)(cid:71)(cid:74)(cid:72)(cid:3)(cid:41)(cid:82)(cid:85)(cid:80)(cid:68)(cid:87)(cid:86) (cid:38)(cid:82)(cid:81)(cid:81)(cid:72)(cid:70)(cid:87)(cid:76)(cid:81)(cid:74)(cid:3)(cid:43)(cid:15)(cid:3)(cid:44)(cid:15)(cid:3)(cid:68)(cid:81)(cid:71)(cid:3)(cid:53)(cid:3) (cid:16) (cid:54)(cid:83)(cid:72)(cid:70)(cid:76)(cid:68)(cid:79)(cid:3)(cid:54)(cid:92)(cid:80)(cid:69)(cid:82)(cid:79)(cid:86) (cid:16) (cid:49)(cid:47)(cid:3)(cid:51)(cid:85)(cid:82)(cid:80)(cid:83)(cid:87)(cid:86) 
(cid:41)(cid:76)(cid:81)(cid:68)(cid:79)(cid:3)(cid:87)(cid:85)(cid:68)(cid:76)(cid:81)(cid:76)(cid:81)(cid:74)(cid:3)(cid:76)(cid:81)(cid:86)(cid:87)(cid:68)(cid:81)(cid:70)(cid:72)(cid:29) (cid:323)(cid:44)(cid:3)(cid:81)(cid:72)(cid:72)(cid:71)(cid:3)(cid:87)(cid:82)(cid:3)(cid:69)(cid:88)(cid:92)(cid:3)(cid:86)(cid:82)(cid:80)(cid:72)(cid:3)(cid:288)(cid:82)(cid:90)(cid:72)(cid:85)(cid:86)(cid:3)(cid:73)(cid:82)(cid:85)(cid:3)(cid:80)(cid:92)(cid:3)(cid:90)(cid:76)(cid:73)(cid:72)(cid:17)(cid:3)(cid:31)(cid:76)(cid:80)(cid:83)(cid:79)(cid:76)(cid:70)(cid:76)(cid:87)(cid:33)(cid:3) (cid:85)(cid:82)(cid:86)(cid:72)(cid:3)(cid:76)(cid:86)(cid:3)(cid:68)(cid:3)(cid:87)(cid:92)(cid:83)(cid:72)(cid:3)(cid:82)(cid:73)(cid:3) (cid:263)(cid:82)(cid:90)(cid:72)(cid:85)(cid:30)(cid:3)(cid:85)(cid:82)(cid:86)(cid:72)(cid:3)(cid:76)(cid:86)(cid:3)(cid:68)(cid:3)(cid:86)(cid:92)(cid:80)(cid:69)(cid:82)(cid:79)(cid:3)(cid:82)(cid:73)(cid:3)(cid:79)(cid:82)(cid:89)(cid:72)(cid:30) (cid:3)(cid:31)(cid:18)(cid:76)(cid:80)(cid:83)(cid:79)(cid:76)(cid:70)(cid:76)(cid:87)(cid:33)(cid:3)(cid:51)(cid:72)(cid:85)(cid:75)(cid:68)(cid:83)(cid:86)(cid:3) (cid:92)(cid:82)(cid:88)(cid:322)(cid:71)(cid:3)(cid:69)(cid:72)(cid:3)(cid:76)(cid:81)(cid:87)(cid:72)(cid:85)(cid:72)(cid:86)(cid:87)(cid:72)(cid:71)(cid:3)(cid:76)(cid:81)(cid:3)(cid:85)(cid:72)(cid:71)(cid:3)(cid:85)(cid:82)(cid:86)(cid:72)(cid:86)(cid:17)(cid:324) (cid:55)(cid:85)(cid:68)(cid:76)(cid:81)(cid:3)(cid:53)(cid:42)(cid:3)(cid:80)(cid:82)(cid:71)(cid:72)(cid:79)(cid:86)(cid:3)(cid:87)(cid:82)(cid:3)(cid:74)(cid:72)(cid:81)(cid:72)(cid:85)(cid:68)(cid:87)(cid:72)(cid:3)(cid:88)(cid:81)(cid:71)(cid:72)(cid:85)(cid:79)(cid:76)(cid:81)(cid:72)(cid:71)(cid:3)(cid:44)(cid:14)(cid:53)(cid:3)(cid:83)(cid:68)(cid:85)(cid:87) Figure 2: Method illustration.",
"special symbol to serve as the separator.",
"We enclose the implicit knowledge I with special symbols < implicit > and < /implicit > and add it between H and R , for example, < speaker1 > I need to buy some flowers for my wife. < implicit > rose is a type of flower < /implicit > < speaker2 > Perhaps you'd be interested in red roses.",
"Natural language prompts .",
"More recent work has found that NL prompts help LMs to perform better on various downstream tasks, including natural language generation (NLG) (Brown et al., 2020; Liu et al., 2021; Zheng and Huang, 2021).",
"Here we use the NL prompts to prompt RG models to generate implicit knowledge and responses.",
"We use The following background knowledge is helpful for generating the response: to elicit knowledge and Grounded on the background knowledge, what does the speaker probably say in the next response? to elicit response.",
"After constructing knowledge-aligned dialogues, each of our data instances is a sequence of tokens with three components: a dialogue history H fused with potential implicit knowledge after each turn, implicit knowledge (empty or nonempty) I , and a response R .",
"We split each instance d ( H , R, I ) D to first train the model to generate just the knowledge I based on H , P ( I | H ) , and then train it to generate R based on both I and H , P ( R | H , I ) .",
"Formally, we follow standard way of modeling P in auto-regressive neural RG models and use Maximum Likelihood Estimation (MLE) to train our model to maximize P ( I | H ) (knowledge generation KG) by minimizing the conditional negative log-likelihood loss (NLL): LKG = m (cid:88) i =1 log P ( Z i | Z <i , X 1 , ..., X n ) , where Z i is the i-th statement in I .",
"And to model P ( R | H , I ) we minimize: LRG = m (cid:88) i =1 log P ( y i | y <i , X 1 , I 1 ..., X n ) .",
"We train one generative model on these losses in one-pass with splitted instances for KG and RG instead of multiple training phases.",
"During inference, we only provide dialogue history as input and the model has to generate knowledge and responses.",
"We consider dialogues from four datasets: Dai-lyDialog (Li et al., 2017), EmpatheticDia-logues (Rashkin et al., 2019), MuTual (Cui et al., 2020), and SocialIQA-prompted Commonsense-Dialogues (Zhou et al., 2021a).",
"For training, we use the filtered version of the four datasets from Zhou et al. (2021a), which ensures each dialogue contains at least one commonsense knowledge triple from ConceptNet.",
"In total, the training data contains 31k dialogues with 159k utterances.",
"We reserve 10% of data as a development set for evaluating model training and selecting hyper-parameters.",
"Table 1 shows the number of instances resulted from applying our hardand soft-matching procedures to our training data in order to construct knowledge-aligned dialogues.",
"the response, we use the test data from the original data distribution of the 4 datasets mentioned above.",
"The testing data consists of around 3k dialogues.",
"We use DialoGPT-medium (Zhang et al., 2020a) as our base model, which is a commonly-used end-to-end RG model.",
"We fine-tune DialoGPT using all of the 159K dialogue instances.",
"We also use DialoGPT to serve as the backbone model and consider three variables in our TBS model configuration introduced from Sections 3.1 to 3.3: hard matching or soft -matching, special symbol as separator or NL prompt , and triple-convertedNL to represent knowledge or information seeking QA pairs.",
"To justify our choice of using one model to do both KG and RG, we also compare with TBS-Two Model where we train separate models for knowledge generation (KG) and RG using the same training data.",
"Our default model configuration is hard-symbol-NL .",
"We also compare several knowledge-grounded RG baselines that retrieve external knowledge or generate knowledge with another model.",
"For retrieval, we follow most common approaches in knowledge-selection (Zhao et al., 2017; Wolf et al., 2020; Eric et al., 2021) and train RoBERTa (Liu et al., 2019) to classify triples using our knowledge-aligned data (matched or not matched), and use it to label candidate triples during testing ( KS-RoBERTa ).",
"For the generative model, we use COMET (Bosselut et al., 2019) as a commonsense knowledge generator ( KG-COMET ).",
"Furthermore, we consider RG models that take the hard-matched or soft-matched knowledge obtained from the ground-truth response ( Hard-GT and Soft-GT ).",
"Note that though there is noise in hard-matching or soft-matching procedure, this setting uses the next turn response and is likely to provide relevant knowledge.",
"Implementation details for all the models are shown in Appendix B.1.",
"2005), ROUGE (Lin, 2004), CIDEr (Vedantam et al., 2015) and SkipThoughts (Kiros et al., 2015).",
"We also use GRADE (Huang et al., 2020), a reference-free metric shown to have consistent correlation with human judgements (Yeh et al., 2021) to ensure the validity of experimental results.",
"Human Evaluation We conduct extensive human evaluation using 300 randomly sampled instances from unseen test dialogues described above.",
"For response quality , we conduct pairwise comparison where we present a dialogue history and two responses made by two different models and ask them to choose one or select not sure based on different criteria (Zhou et al., 2018; Zhang et al., 2020b) 2 .",
"We evaluate on six dimensions: which response is more grammatical , coherent , engaging , informative , specific , and makes common sense (Zhang et al., 2020b; Roller et al., 2021).",
"More details of the instructions for annotators on each dimension with examples are included in Appendix B.2.",
"For knowledge quality , we evaluate the generated knowledge in isolation ( does this knowledge make sense ) and in conjunction with the context for relevance.",
"We perform majority voting per instance using three annotators from Amazon Mechnical Turk (AMT).",
"We use Fleiss' Kappa ( ) (Fleiss, 1971) to measure agreement among the annotators.",
"By evaluating our TBS model variants with other baselines, we aim to address the following questions: 1) do TBS models produce better responses than standard end-to-end RG models?",
"2) compared with other approaches to retrieve or generate additional knowledge, is TBS more helpful for RG?",
"3) do TBS RG models generate knowledge that makes sense and is relevant to the dialogue context?",
"4) do TBS models faithfully leverage the generated knowledge?",
"Model variant analysis To find the best-performing configuration of our TBS method, we consider alternatives as discussed in Sections 3.1 to 3.3, and conduct 4 pairwise comparisons: soft vs.",
"2 We choose to conduct pairwise comparison since multiple previous work has shown that it produces a more reliable evaluation than directly asking humans to score the response, which is a highly subjective task (Amidei et al., 2019; CallisonBurch et al., 2007; Celikyilmaz et al., 2020)",
"hard , prompt vs. symbol , and QA vs. relation-converted NL format .",
"From Table 2, we find that using soft-matching to create knowledge-aligned dialogue dataset produces more grammatical responses and responses that make more common sense, with =0.64-0.73, indicating substantial agreement according to one interpretation from Landis and Koch (1977).",
"Using QA to represent knowledge makes the responses more grammatical, coherent, commonsensical, and also achieves the best performance on average on six dimensions.",
"We also compare results that combine these alternatives, e.g., soft-symbol-QA (due to space constraints, results are shown in Appendix C.1), however, we do not observe significant improvements after combining these alternatives and our best configuration in terms of average improvement is still hard-symbol-QA .",
"We thus use hard-symbol-QA as our final configuration and refer to it as TBS throughout this section.",
"Does TBS produce better responses vs. end-to-end RG?",
"By comparing TBS and end-to-end DialoGPT-ft model in Table 3 and Figure 3, we find that TBS models produce better-quality responses using both automatic and human evaluations.",
"Specifically, even though hard-matching only annotates about 33% of the training instances, TBS outperforms end-to-end RG model significantly on most automatic metrics.",
"From human evaluation ( =0.62-0.69), we find our TBS model performs on par with DialoGPT trained on more data in grammar, coherence, and engagingness, and achieves statistically-significant (p < 0.05) improvement on informativeness, specificity, and the common sense aspects of generated responses 3 .",
"We argue that by providing weakly-supervised knowledge labels and TBS training, RG models require less data and can generate quality responses with improvement in the informativeness, specificity, and common sense aspects of the responses.",
"Is TBS knowledge generation better than other knowledge-augmented RG?",
"We compare TBS models with other knowledge-augmented baselines that retrieve knowledge from ConceptNet using embedding scores (KS-SBERT) or a trained selector (KS-RoBERTa), or generate from another model (KG-COMET).",
"From Table 3, we find that these models perform similarly to the end-to-end DialoGPT model and are outperformed by TBS models on most automatic metrics.",
"Figure 3 shows that while TBS methods have significant improvements on all dimensions against knowledge-selection baselines, COMET as a knowledge generator has smaller gaps on informativeness, specificity, and common sense, but is outperformed significantly on grammar, coherence, and engagingness.",
"Next we compare against the setup where we feed the model the knowledge that is derived using the ground-truth response (Hard/Soft-GT), i.e. , the provided knowledge is obtained using concepts appearing in the ground-truth response.",
"From Table 3, we surprisingly find that even though our 3 We also conducted direct scoring in human evaluations and observed significant improvement (on average 7.3 out of 10 for TBS vs. 5.9 for DialoGPT-ft), but since it results in lower agreement ( =0.49), we focus on comparative evaluation.",
"proposed TBS model has no access to response-leaking knowledge labels and is trained on much less data, the TBS RG model still achieves statistically significant improvement on GRADE and BLEU-4.",
"And from human evaluation results in Figure 4, TBS model significantly improves the specificity and common sense aspect of responses while stays on par on other evaluation dimensions compared with the hard-GT model and improves even more compared with soft-GT.",
"We find that one potential explanation is that only around 55% of Hard-GT knowledge is labeled as used in response whereas it is 77% in our TBS model (see Section 5.3).",
"This is also related to how the RG model leverages the knowledge in training.",
"Further analysis is needed to understand the effect of knowledge and the relationship between knowledge and responses.",
"We then examine how well TBS RG models learn to generate knowledge on unseen dialogues.",
"We use human evaluation and focus on three dimensions: does the model generate novel knowledge that does not appear in ConceptNet?",
"does the gen-Model Novel Makes Sense Relevant KS-SBERT 0% 91.7%* 85.0% KS-RoBERTa 0% 77.7%* 76.3% KG-COMET 63.3% 68.3%/63.2% 67.5%/68.9% TBS-two-model 46.3% 89.0%/85.6% 90.7%/90.2% TBS-one-model 44% 86.3%/85.9% 85.7%/86.5% Table 4: Human evaluation on knowledge quality .",
"erated knowledge statement make sense as a standalone fact?",
"and is the generated knowledge relevant to the dialogue context?",
"For the first question we directly query from ConceptNet and show percentages.",
"For the latter two we follow Section 4.3 and show the percentages that MTurkers think the knowledge makes sense and is relevant from the 300 sampled test instances (the same used in response quality).",
"We test our TBS model, the two-model variant, and other knowledge-augmented baselines introduced in Section 4.2.",
"makes sense and is relevant Table 4 shows that TBS models can generate implicit knowledge that makes sense and is relevant to the context for around 85% of the time as judged by human annotators ( =0.73-0.80).",
"Compared with knowledge-selection models that retrieve knowledge from ConceptNet, TBS generates knowledge that is similar in terms of common sense and has better relevance to the dialogue history.",
"Compared with COMET that also generates knowledge, we find TBS models generate more knowledge that follows common sense and is relevant to the dialogue.",
"Comparing two-model and one-model TBS, we find that two-model generates more knowledge that makes sense and is relevant, although its response quality is poorer (Table 3 and Figure 3).",
"This might be due 1243 Grammatical Coherent Engaging Informative Specific Common Sense Evaluation Dimensions 30 35 40 45 50 55 60 P r e f e r e n c e P e r c e n t a g e s 50.0* 55.0* 53.3* 43.7 44.7 53.0* 38.0* 38.3* 40.3* 39.6 46.0 43.3* Effects on Noisy Knowledge Input Models TBSTBS-Noisy Knowledge Figure 5: Effects of noisy knowledge on response quality.",
"to model synergies when learning both knowledge generation and response generation.",
"Model generates novel knowledge We find a significant portion of novel knowledge generated from the COMET and TBS models that is not present in the training data.",
"Furthermore, the quality of the generated novel knowledge is similar to that of knowledge existing in ConceptNet.",
"COMET generates more new knowledge but the quality (both common sense and relevance) is significantly lower than TBS models.",
"We include some examples of novel knowledge generated in Appendix C. In general we find that the new knowledge is complimentary to ConceptNet, not just a paraphrased version of existing triples (since in those cases the model will directly generate the ConceptNet triple).",
"This shows a promising sign that TBS RG models can potentially generate good-quality novel knowledge labels for unseen dialogues.",
"Most responses are knowledge grounded To examine how TBS methods leverage knowledge for RG, we also present annotators a history, generated knowledge, and generated response, and ask them whether the knowledge is used in response .",
"We find that around 77% of generated knowledge is used in the generated response, i.e. , the response is grounded in the knowledge generated from TBS.",
"Noisy knowledge heavily impacts quality To better showcase the connection between knowledge and response, we examine how knowledge quality generated from TBS methods can affect response quality.",
"During inference, we randomly sample noisy knowledge from another dialogue, feed it to the model to generate a response conditioned on irrelevant knowledge, and compare the response quality with response generated from TBS knowledge.",
"Fig 5 shows that there is a statistically significant (p 0.05) drop in response quality in four dimensions.",
"This indicates that the quality of knowledge input heavily influences response quality and that TBS models generate better responses because of its decent knowledge quality.",
"Qualitative examples and limitations We show several qualitative examples from different models and human responses in Table",
"5. We find that TBS generates relevant knowledge and responses grounded properly in that knowledge, whereas KS/KG models retrieve noisy knowledge and Hard-GT generates response not grounded in knowledge.",
"Here we present a summary of error patterns of TBS models and discuss potential directions to improve.",
"More examples can be found in Table",
"6. First, our matching procedures do not concern multi-hop triples that might be needed for complex reasoning chains.",
"Second, ConceptNet mostly contains taxonomic and lexical knowledge ( RelatedTo, IsA, etc ), limiting the diversity of generated knowledge from TBS models.",
"We plan to explore other knowledge resources such as ATOMIC2020 (Hwang et al., 2021) in the future.",
"Third, currently the model always generates implicit knowledge.",
"In future work, we are interested in training RG models that understand when implicit knowledge is needed based on the dialogue context.",
"Open-Domain Dialogue Generation Recent work focused on fine-tuning large pre-trained transformer models (Radford et al., 2019; Zhang et al., 2020a; Roller et al., 2021) on massive dialogue data.",
"Knowledge-augmented RG has been studied extensively to alleviate the issue of generic or hallucinated responses (Serban et al., 2017; Welleck et al., 2019; Roller et al., 2021).",
"Most work retrieves relevant knowledge from knowledge candidates (wikipedia or KBs) and generates responses after incorporating additional knowledge in dialogue context (Ghazvininejad et al., 2018; Zhou et al., 2018; Wu et al., 2020).",
"More recent work also explored other ways of constructing knowledge, such as by considering knowledge as a latent variable (Tuan et al., 2020; Li et al., 2020) and generating it implicitly.",
"Our TBS framework differs from these two lines of work in that it explicitly generates knowledge in text and uses one generative model for both knowledge generation and RG.",
"Understanding (NLU) Although explicit knowledge generation (KG) for RG has not been explored, similar methods have been proposed for NLU tasks such as question answering (Shwartz et al., 2020b).",
"Previous work has also explicitly generated rationales that can be seen as helpful additional knowledge (Rajani et al., 2019).",
"TBS differs from such work in that we consider a generative task and use the same generative model to do both KG and RG.",
"Inspired by how humans contribute to the common ground during communication, We propose to train RG models that explicitly generate implicit knowledge and then respond (TBS).",
"This brings us three main benefits compared with prior end-to-end RG models: 1) more informative and coherent responses by augmenting with knowledge; 2) generated knowledge provides faithful explanations of RG model's inner-workings; 3) models do not rely on external knowledge bases in response generation time.",
"We first identify implicit knowledge in dialogues, explore different knowledge representation and transition choices, and demonstrate promising results compared with end-to-end and knowledge-grounded RG models from extensive evaluations.",
"We find strong and promising results for TBS RG model compared with end-to-end RG.",
"In particular, TBS can produce good quality and novel knowledge, outperform end-to-end RG models despite training on less data, and even produce better responses than RG models that take ground-truth knowledge.",
"We hope our findings encourage more future studies on making RG models better emulate human communication process and produce better-quality responses.",
"Our work aims to train RG models that explicitly generate implicit knowledge before responding.",
"Sheng et al. (2021) have found biases in DialoGPT (our base model) responses and Mehrabi et al. (2021) have found representational harms in common sense resources.",
"We acknowledge that the 1245 generated responses from our models might contain biases.",
"All of the dialogue datasets and models are in English, which benefits English speakers more.",
"We have conducted human evaluation using Amazon Mechanical Turks.",
"We pay turkers around $15 per hour, well above the highest state minimum wage and engage in constructive discussions if they have concerns about the process.",
"We also give each annotation instance enough time so that we do not pressure annotators."
] | [
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"other",
"abstain",
"objective",
"result",
"objective",
"method",
"method",
"objective",
"method",
"objective",
"abstain",
"result",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain"
] |
[
"Back-translation is a critical component of Unsupervised Neural Machine Translation (UNMT), which generates pseudo parallel data from target monolingual data.",
"A UNMT model is trained on the pseudo parallel data with translated source , and translates natural source sentences in inference.",
"The source discrepancy between training and inference hinders the translation performance of UNMT models.",
"By carefully designing experiments, we identify two representative characteristics of the data gap in source: (1) style gap (i.e., translated vs. natural text style) that leads to poor generalization capability; (2) content gap that induces the model to produce hallucination content biased towards the target language.",
"To narrow the data gap, we propose an online self-training approach, which simultaneously uses the pseudo parallel data { natural source, translated target } to mimic the inference scenario.",
"Experimental results on several widely-used language pairs show that our approach outperforms two strong baselines (XLM and MASS) by remedying the style and content gaps.",
"1 1 Introduction In recent years, there has been a growing interest in unsupervised neural machine translation (UNMT), which requires only monolingual corpora to accomplish the translation task (Lample et al., 2018a,b; Artetxe et al., 2018b; Yang et al., 2018; Ren et al., 2019).",
"The key idea of UNMT is to use back-translation (BT) (Sennrich et al., 2016) to construct Work was done when Zhiwei He was interning at Tencent AI Lab.",
"the pseudo parallel data for translation modeling.",
"Typically, UNMT back-translates the natural target sentence into the synthetic source sentence (trans-lated source) to form the training data.",
"A BT loss is calculated on the pseudo parallel data { translated source, natural target } to update the parameters of UNMT models.",
"In Supervised Neural Machine Translation (SNMT), Edunov et al. (2020) found that BT suffers from the translationese problem (Zhang and Toral, 2019; Graham et al., 2020) in which BT improves BLEU score on the target-original test set with limited gains on the source-original test set.",
"Unlike authentic parallel data available in the SNMT training data, the UNMT training data entirely comes from pseudo parallel data generated by the back-translation.",
"Therefore in this work, we first revisit the problem in the UNMT setting and start our research from an observation (2): with comparable translation performance on the full test set, the BT based UNMT models achieve better translation performance than the SNMT model on the target-original (i.e. translationese) test set, while achieves worse performance on the source-6611 original ones.",
"In addition, the pseudo parallel data { translated source, natural target } generated by BT poses great challenges for UNMT, as shown in Table 1.",
"First, there exists the input discrepancy between the translated source (translated style) in UNMT training data and the natural source (natural style) in inference data.",
"We find that the poor generalization capability caused by the style gap (i.e., translated style v.s natural style) limited the UNMT translation performance (3.1).",
"Second, the translated pseudo parallel data suffers from the language coverage bias problem (Wang et al., 2021), in which the content of UNMT training data biases towards the target language while the content of the inference data biases towards the source language.",
"The content gap results in hallucinated translations (Lee et al., 2018; Wang and Sennrich, 2020) biased towards the target language (3.2).",
"To alleviate the data gap between the training and inference, we propose an online self-training (ST) approach to improve the UNMT performance.",
"Specifically, besides the BT loss, the proposed approach also synchronously calculates the ST loss on the pseudo parallel data { natural source, translated target } generated by self-training to update the parameters of UNMT models.",
"The pseudo parallel data { natural source, translated target } is used to mimic the inference scenario with { natural source, translated target } to bridge the data gap for UNMT.",
"It is worth noting that the proposed approach does not cost extra computation to generate the pseudo parallel data { natural source, translated target } 2 , which makes the proposed method efficient and easy to implement.",
"We conduct experiments on the XLM (Lample and Conneau, 2019) and MASS (Song et al., 2019) UNMT models on multiple language pairs with varying corpus sizes (WMT14 En-Fr / WMT16 EnDe / WMT16 En-Ro / WMT20 En-De / WMT21 En-De).",
"Experimental results show that the proposed approach achieves consistent improvement over the baseline models.",
"Moreover, we conduct extensive analyses to understand the proposed approach better, and the quantitative evidence reveals that the proposed approach narrows the style and content gaps to achieve the improvements.",
"2 The vanilla UNMT model adopts the dual structure to train both translation directions together, and the pseudo parallel data { natural source, translated target } has already been generated and is used to update the parameters of UNMT model in the reverse direction.",
"",
"We empirically analyze the data gap between training and inference for UNMT and identify two critical factors: style gap and content gap.",
"Our empirical study demonstrates that the back-translation based UNMT framework suffers from the translationese problem, causing the inaccurate evaluation of UNMT models on standard benchmarks.",
"We propose a simple and effective approach for incorporating the self-training method into the UNMT framework to remedy the data gap between the training and inference.",
"Notations.",
"Let X and Y denote the language pair, and let X = { x i } Mi =1 and Y = { y j } Nj =1 represent the collection of monolingual sentences of the corresponding language, where M, N are the size of the corresponding set.",
"Generally, UNMT method that based on BT adopts dual structure to train a bidirectional translation model (Artetxe et al., 2018b, 2019; Lample et al., 2018a,b).",
"For the sake of simplicity, we only consider translation direction X Y unless otherwise stated.",
"Online BT.",
"Current mainstream of UNMT methods turn the unsupervised task into the synthetic supervised task through BT, which is the most critical component in UNMT training.",
"Given the translation task X Y where target corpus Y is available, for each batch, the target sentence y Y is used to generate its synthetic source sentence by the backward model MTY X : x = arg max x PY X ( x | y ; ) , (1) where is a fixed copy of the current parameters indicating that the gradient is not propagated through .",
"In this way, the synthetic parallel sentence pair { x , y } is obtained and used to train the forward model MTX Y in a supervised manner by minimizing: LB = E y Y [ log PX Y ( y | x ; )] .",
"It is worth noting that the synthetic sentence pair generated by the BT is the only supervision signal of UNMT training.",
"Objective function.",
"In addition to BT, denoising auto-encoding (DAE) is an additional loss term of UNMT training, which is denoted by LD and is not the main topic discussed in this work.",
"In all, the final objective function of UNMT is: L = LB + DLD , (3) where D is the hyper-parameter weighting DAE loss term.",
"Generally, D starts from one and decreases as the training procedure continues 3 .",
"To verify whether the UNMT model suffers from the input gap between training and inference and thus is biased towards translated input while against natural input, we conduct comparative experiments between SNMT and UNMT models.",
"Setup We evaluate the UNMT and SNMT models on WMT14 En-Fr, WMT16 En-De and WMT16 En-Ro test sets, following Lample and Conneau (2019) and Song et al. (2019).",
"We first train the UNMT models on the above language pairs with model parameters initialized by XLM and MASS models.",
"Then, we train the corresponding SNMT models whose performance on the full test sets is controlled to be approximated to UNMT by undersampling training data.",
"Finally, we evaluate the UNMT and SNMT models on the target-original and source-original test sets, whose inputs are translated and natural respectively.",
"Unless otherwise stated, we follow previous work (Lample and Conneau, 2019; Song et al., 2019) to use case-sensitive BLEU score (Papineni et al., 2002) with the multi-bleu.perl 4 script as the evaluation metric.",
"Please refer to Appendix B for the results of SacreBLEU, and refer to Appendix A for the training details of SNMT and UNMT models.",
"Results We present the translation performance in terms of the BLEU score in Table 2 and our observations are: UNMT models perform close to the SNMT models on the full test sets with 0.3 BLEU difference at most on average (33.5/33.9 vs. 33.6).",
"UNMT models outperform SNMT models on target-original test sets (translated input) with 3 Verified from open-source XLM Github implementation.",
"4.4 BLEU points (36.8/37.6 vs. 33.2).",
"UNMT models underperform the SNMT models on source-original test sets (natural input) with an average performance degradation of 4.4 and 4.2 BLUE points (28.7/28.9 vs. 33.1).",
"The above observations are invariant concerning the pre-trained model and translation direction.",
"In particular, the unsatisfactory performance of UNMT under natural input indicates that UNMT is overestimated on the previous benchmark.",
"We attribute the phenomenon to the data gap between training and inference for UNMT: there is a mismatch between natural inputs of source-original test data and the back-translated inputs that UNMT employed for training.",
"This work focuses on the experiments on the source-original test sets (i.e., the input of an NMT translation system is generally natural), which is closer to the practical scenario.",
"5 3 Data Gap between Training and Inference In this section, we identity two representative data gaps between training and inference data for 5 From WMT19, the WMT community proposes to use the source-original test with natural input sets to evaluate the translation performance.",
"UNMT: style gap and content data.",
"We divide the test sets into two portions: the natural input portion with source sentences originally written in the source language and the translated input portion with source sentences translated from the target language.",
"Due to the limited space, we conduct the experiments with pre-trained XLM initialization and perform analysis with different kinds of inputs (i.e., natural and translated inputs) on De En newstest2013-2018 unless otherwise stated.",
"To perform the quantitative analysis of the style gap, we adopt KenLM 6 to train a 4-gram language model on the UNMT translated source sentences 7 and use the language model to calculate the perplexity (PPL) of natural and translated input sentences in the test sets.",
"The experimental results are shown in Table",
"3. The lower perplexity value (219 < 242) indicates that compared with the natural inputs, the UNMT translated training inputs have a more similar style with translated inputs in the test sets .",
"In order to further reveal the influence of the style gap on UNMT, we manually eliminated it and re-evaluated the models on the natural input portion of WMT16 De En.",
"Concretely, We first take the third-party Google Translator to translate 6 https://github.com/kpu/kenlm 7 To alleviate the content bias problem, we generate the training data 50% from En De translation and 50% from round trip translation De En De.",
"the target English sentences of the test sets into the source German language to eliminate the style gap.",
"And then we conduct translation experiments on the natural input portion and its Google translated portion to evaluate the impact of the style gap on the translation performance.",
"We list the experimental results in Table",
"4. We can find that by converting from the natural inputs (natural De) to the translated inputs (translated De ), the UNMT model achieves more improvement than the SNMT model (-2.8 > -6.3), demonstrating that the style gap inhibits the UNMT translation output quality.",
"In this section, we show the existence of the content gap by (1) showing the most high-frequency name entities, (2) calculating content similarity using term frequency-inverse document frequency (TF-IDF) for the training and inference data.",
"We use spaCy 8 to recognize German named entities for the UNMT translated source sentences, natural inputs and translated inputs in test sets, and show the ten most frequent name entities in Table",
"5. From the table, we can observe that the UNMT translated source sentences have few named entities biased towards source language German (words in red color), while having more named entities biased towards target language English, e.g., USA, Obama.",
"It indicates that the content of the UNMT translated source sentences is biased towards the target language English.",
"Meanwhile, the natural input portion of the inference data has more named entities biased towards source language German (words in red color), demonstrating that the content gap exists between the natural input portion of the inference data and the UNMT translated training data.",
"Next, we remove the stop words and use the term frequency-inverse document frequency (TF-IDF) approach to calculate the content similarity between the training and inference data.",
"Similarity scores are presented in Table",
"6. We can observe that the UNMT translated source data has a more significant similarity score with translated inputs which are generated from the target English sentences.",
"This result indicates that the content of UNMT translated source data is more biased towards the target language , which is consistent with the findings in Table",
"5. As it is difficult to measure the name entities 8 https://github.com/explosion/spaCy 6614 Data Most Frequent Name Entities Natural Infer.",
"translation accuracy in terms of BLEU evaluation metric, we provide a translation example in Table 7 to show the effect of the content gap in the UNMT translations (more examples in Appendix C). We observe that the UNMT model outputs the hallucinated translation U.S., which is biased towards the target language English.",
"We present a quantitative analysis to show the impact of the content gap on UNMT translation performance in Section 6.2.",
"To bridge the data gap between training and inference of UNMT, we propose a simple and effective method through self-training.",
"For the translation task X Y , we generate the source-original training samples from the source corpus X to improve the model's translation performance on natural inputs.",
"For each batch, we apply the forward model MTX Y on the natural source sentence x to generate its translation: y = arg max y PX Y ( y | x ; ) .",
"where S is the hyper-parameter weighting the self-training loss term.",
"It is worth noting that the generation step of",
"Eq.(4) has been done by the BT step of Y X training.",
"Thus, the proposed method will not increase the training cost significantly but make the most of the data generated by BT (Table 9).",
"Data We follow the common practices to conduct experiments on several UNMT benchmarks: WMT14 En-Fr, WMT16 En-De, WMT16 En-Ro.",
"The details of monolingual training data are delineated in Appendix A.2.",
"We adopt En-Fr newsdev2014, En-De newsdev2016, En-Ro news-dev2016 as the validation (development) sets, and En-Fr newstest2014, En-De newstest2016, En-Ro newstest2016 as the test sets.",
"In addition to the full test set, we split the test set into two parts: target-original and source-original, and evaluate the model's performance on the three kinds of test sets.",
"We use the released XLM BPE codes and vocabulary for all language pairs.",
"Model We evaluate the UNMT model fine-tuned on XLM 9 and MASS 10 pre-trained model (Lample and Conneau, 2019; Song et al., 2019).",
"For XLM models, we adopt the pre-trained models released by Lample and Conneau (2019) for all language pairs.",
"For MASS models, we adopt the pre-trained 9 https://github.com/facebookresearch/XLM 10 https://github.com/microsoft/MASS 6615 Testset Model Approach En-Fr En-De En-Ro Avg.",
"models released by Song et al. (2019) for En-Fr and En-Ro and continue pre-training the MASS model of En-De for better reproducing the results.",
"More details are delineated in Appendix A.2.",
"Table 8 shows the translation performance of XLM and MASS baselines and our proposed models.",
"We have the following observations: Our re-implemented baseline models achieve comparable or even better performance as reported in previous works.",
"The reproduced XLM+UNMT model has an average improvement of 1.4 BLEU points compared to the original report in Lample and Conneau (2019) and MASS+UNMT model is only 0.1 BLEU lower on average than Song et al. (2019).",
"Our approach with online self-training significantly improves overall translation performance (+0.8 BLEU on average).",
"This demonstrates the universality of the proposed approach on both large-scale (En-Fr, En-De) and data imbalanced corpus (En-Ro).",
"In the translated input scenario, our approach achieves comparable performance to baselines.",
"It demonstrates that although the sample of self-training is source-original style, our approach does not sacrifice the performance on the target-original side.",
"In the natural input scenario, we find that our proposed approach achieves more significant improvements, with +1.1 and +1.3 average BLEU on both baselines.",
"The reason is that the source-original style sample introduced by self-training alleviates model bias between natural and translated input.",
"We compare online self-training with the following two related methods, which also incorporate natural inputs in training:",
"Offline Self-training model distilled from the forward and backward translated data generated by the trained UNMT model.",
"CBD (Nguyen et al., 2021) model distilled from the data generated by two trained UNMT models through cross-translation, which embraces data diversity.",
"Dataset Previous studies have recommended restricting test sets to natural input sentences, a methodology adopted by the 2019-2020 edition of the WMT news translation shared task (Edunov et al., 2020).",
"In order to further verify the effectiveness of the proposed approach, we also conduct the evaluation on WMT19 and WMT20 En-De test sets.",
"Both test sets contain only natural input samples.",
"Results Experimental results are presented in Table",
"9. We also show the training costs of these methods.",
"We find that Unexpectedly, the offline self-training has no significant improvement over baseline UNMT.",
"Sun et al. (2021) have demonstrated the effectiveness of offline self-training in UNMT under low-resource and data imbalanced scenarios.",
"However, in our data-sufficient scenarios, offline self-training may suffer from the data diversity problem while online self-training can alleviate the problem through the dynamic model parameters during the training process.",
"We leave the complete analysis to future work.",
"CBD achieves a significant improvement compared to baseline UNMT, but the training cost is about six times that of online self-training.",
"Since the self-training samples are translated sentences on the target side, there is concern that the improvement achieved by self-training only comes from making the model outputs better match the translated references, rather than enhancing the model's ability on natural inputs.",
"To dispel the concern, we conducted the following experiments: (1) evaluate the fluency of model outputs in terms of language model PPL and (2) evaluate the translation performance on Google Paraphrased WMT19 En De test sets (Freitag et al., 2020).",
"Output fluency We exploit the monolingual corpora of target languages to train the 4-gram language models.",
"Table 10 shows the language mod-els' PPL on model outputs of test sets mentioned in 5.2.",
"We find that online self-training has only a slight impact on the fluency of model outputs, with the average PPL of XLM and MASS models only increasing by +3 and +6, respectively.",
"We ascribe this phenomenon to the translated target of self-training samples, which is model generated and thus less fluent then natural sentences.",
"However, since the target of BT data is natural and the BT loss term is the primary training objective, the output fluency does not decrease significantly.",
"Translation performance on paraphrased references Freitag et al. (2020) collected additional human translations for newstest2019 with the ultimate aim of generating a natural-to-natural test set.",
"We adopt the HQ(R) and HQ(all 4), which have higher human adequacy rating scores, to re-6617 Approach En-Fr En-De En-Ro Avg.",
"We present the experimental results in Table",
"11. Our proposed method outperforms baselines on both kinds of test sets.",
"Therefore, we demonstrate that our proposed method improves the UNMT model performance on natural input with limited translationese outputs.",
"Style Gap From Table 8, our proposed approach achieves significant improvements on the natural input portion while not gaining on the translated input portion over the baselines.",
"It indicates our approach has better generalization capability on the natural input portion of test sets than the baselines.",
"Content Gap To verify that our proposed approach bridges the content gap between training and inference, we calculate the accuracy of NER translation by different models.",
"Specifically, we adopt spaCy to recognize the name entities in reference and translation outputs and treat the name entities in reference as the ground truth to calculate the accuracy of NER translation.",
"We show the results in Table",
"12. Our proposed method achieves a significant improvement in the translation accuracy of NER compared to the baseline.",
"The result demonstrates that online self-training can help the model pay more attention to the input content rather than being affected by the content of the target language training corpus.",
"Next, we investigate the impact of target quality on ST. We use the SNMT model from 2.2 to generate ST data rather than the current model itself and keep the process of BT unchanged.",
"As shown in Table 2, the SNMT models perform well on source-original test set and thus yield higher quality target in ST data.",
"We denote this variant as knowledge distillation (KD) and report the performance on WMT19/20 E De in Table",
"13. When target quality gets better, model performance improves significantly, as expected.",
"Therefore, reducing the noise on the target side of the ST data may further improve the performance.",
"Implementing in an unsupervised manner is left to future work.",
"Unsupervised Neural Machine Translation Before attempts to build NMT model using monolingual corpora only, unsupervised cross-lingual embedding mappings had been well studied by Zhang et al. (2017); Artetxe et al. (2017, 2018a); Conneau et al. (2018).",
"These methods try to align the word embedding spaces of two languages without parallel data and thus can be exploited for unsupervised word-by-word translation.",
"Initialized by the cross-lingual word embeddings, Artetxe et al. (2018b) and Lample et al. (2018a) concurrently proposed UNMT, which achieved remarkable performance for the first time using monolingual corpora only.",
"Both of them rely on online back-translation and denoising auto-encoding.",
"After that, Lample et al. (2018b) proposed joint BPE for related languages and combined the neural and phrase-based methods.",
"Artetxe et al. (2019) warmed up the UNMT model by an improved statistical machine translation model.",
"Lample and Conneau (2019) proposed cross-lingual language model pretraining, which obtained large improvements over previous works.",
"Song et al. (2019) extended the pretraining framework to sequence-to-sequence.",
"Tran et al. (2020) induced data diversification in UNMT via cross-model back-translated distillation.",
"Data Augmentation Back-translation (Sennrich et al., 2016; Edunov et al., 2018; Marie et al., 2020) and self-training (Zhang and Zong, 2016; He et al., 2020; Jiao et al., 2021) have been well studied in the supervised NMT.",
"In the unsupervised scenario, Tran et al. (2020) have shown that multilingual pre-trained language models can be used to retrieve the pseudo parallel data from the large monolingual data.",
"Han et al. (2021) use generative pre-training language models, e.g., GPT-3, to perform zero-shot translations and use the translations as few-shot prompts to sample a larger synthetic translations dataset.",
"The most related work to ours is that offline self-training technology used to enhance low-resource UNMT (Sun et al., 2021).",
"In this paper, the proposed online self-training method for UNMT can be applied to both high-resource and low-resource scenarios without extra computation to generate the pseudo parallel data.",
"Translationese Problem Translationese problem has been investigated in machine translation evaluation (Lembersky et al., 2012; Zhang and Toral, 2019; Edunov et al., 2020; Graham et al., 2020).",
"These works aim to analyze the effect of translationese in bidirectional test sets.",
"In this work, we revisit the translationese problem in UNMT and find it causes the inaccuracy evaluation of UNMT performance since the training data entirely comes from the translated pseudo-parallel data.",
"Pseudo parallel corpus generated by back-translation is the foundation of UNMT.",
"However, it also causes the problem of translationese and results in inaccuracy evaluation on UNMT performance.",
"We attribute the problem to the data gap between training and inference and identify two data gaps, i.e., style gap and content gap.",
"We conduct the experiments to evaluate the impact of the data gap on translation performance and propose the online self-training method to alleviate the data gap problems.",
"Our experimental results on multiple language pairs show that the proposed method achieves consistent and significant improvement over the strong baseline XLM and MASS models on the test sets with natural input.",
"Zhiwei He and Rui Wang are with MT-Lab, Department of Computer Science and Engineering, School of Electronic Information and Electrical Engineering, and also with the MoE Key Lab of Arti-ficial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai 200204, China.",
"Rui is supported by General Program of National Natural Science Foundation of China (6217020129), Shanghai Pujiang Program (21PJ1406800), and Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102).",
"Zhiwei is supported by CCF-Tencent Open Fund (RAGR20210119)."
] | [
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"objective",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"other",
"other",
"other"
] |
[
"Natural language spatial video grounding aims to detect the relevant objects in video frames with descriptive sentences as the query.",
"In spite of the great advances, most existing methods rely on dense video frame annotations, which require a tremendous amount of human effort.",
"To achieve effective grounding under a limited annotation budget, we investigate one-shot video grounding, and learn to ground natural language in all video frames with solely one frame labeled, in an end-to-end manner.",
"One major challenge of end-to-end one-shot video grounding is the existence of videos frames that are either irrelevant to the language query or the labeled frames.",
"Another challenge relates to the limited supervision, which might result in ineffective representation learning.",
"To address these challenges, we designed an end-to-end model via Information Tree for One-Shot video grounding (IT-OS).",
"Its key module, the information tree, can eliminate the interference of irrelevant frames based on branch search and branch cropping techniques.",
"In addition, several self-supervised tasks are proposed based on the information tree to improve the representation learning under insufficient labeling.",
"Experiments on the benchmark dataset demonstrate the effectiveness of our model.",
"Natural language spatial video grounding is a vital task for video-text understanding (Luo and Shakhnarovich, 2017; Zhou et al., 2019; Hu et al., 2019; Zhang et al., 2020b; Li et al., 2021), which aims to detect the objects described by the natural",
"First author.",
"mengzeli@zju.edu.cn Corresponding author.",
"zhaozhou@zju.edu.cnjiaxu.miao@yahoowenqiaozhang@zju.edu.cnwufei@zju.edu.cn Figure 1: An example of spatially grounding natural language in video frames.",
"language query from each video frame, as shown in Figure",
"1. There is a substantial and rapidly-growing research literature studying this problem with dense annotations (Li et al., 2017; Yamaguchi et al., 2017; Sadhu et al., 2020), where each frame that contains objects relevant to the language query will be manually labeled with bounding boxes.",
"Obviously, such annotations require tremendous human effort and can hardly be satisfied in real-world scenarios.",
"Recently, some works have investigated weakly-supervised video grounding with solely the video-text correspondence rather than object-text annotations (Huang et al., 2018; Chen et al., 2019a; Shi et al., 2019; Chen et al., 2019b; Zhou et al., 2018).",
"However, the performance is less satisfied with such weak supervision.",
"In practice, we are more likely to have a limited annotation budget rather than full annotation or no annotation.",
"In addition, as humans, after experiencing the language query and one frame object paired together for the first time, we have the ability to generalize this finding and identify objects from more frames.",
"Towards this end, we investigate another practical problem setting, i .",
"e",
"., one-shot spatial video grounding, with solely one relevant frame in the video labeled with bounding boxes per video.",
"Existing methods that are devised for supervised video grounding are not directly applicable to this novel setting.",
"We summarize several critical challenges: On the one hand, most of them incorporate a multi-stage training process, i .",
"localization module in the second stage.",
"However, in one-shot spatial video grounding, there are no temporal annotations, which indicate the start/end time of the relevant clip, to train the clip localization module.",
"Moreover, many of them extract video features in a pre-processed manner using feature extractor or object detector pretrained on large-scale datasets.",
"However, independent modeling limits the cooperation of different modules, especially when the labels are few.",
"Therefore, it is in urgent need to derive an end-to-end training framework for one-shot spatial video grounding.",
"On the other hand, there are video frames that are either irrelevant to the natural language query or the labeled frames.",
"These irrelevant frames might increase the computation complexity of end-to-end training, and bring confounding between the frame label and (irrelevant) visual features.",
"Lastly, with fewer supervision signals, deep representation learning might become error-prone or easily under-fitting, especially for end-to-end training.",
"To address these challenges, we devise an end-to-end model via the Information Tree for the One Shot natural language spatial video grounding (IT-OS).",
"Different from previous works, we design a novel tree structure to shield off the one-shot learning from frames that are irrelevant to either the language query or the labeled frame.",
"We devise several self-supervised tasks based on the tree structure to strengthen the representation learning under limited supervision signals.",
"Specifically, the calculation processes of the key module, information tree, contains four steps: (1) To construct the information tree, we view video frame features as nodes, and then compress the adjacent nodes to non-leaf nodes based on the visual similarity of themselves and the semantic similarity with the language query; (2) We search the information tree and select branch paths that are consistently relevant to the language query both in the abstractive non-leaf node level and in the fine-grained leaf node level; (3) We drop I) the leaf nodes that do not belong the same semantic unit with the labeled node; and II) the non-leaf nodes on the low relevance branch paths.",
"We also down-weight the importance of the leaf nodes that belong to the same semantic unit with the labeled node but are on the low relevance paths; (4) Finally, we input the extracted and weighted information to the transformer, and conduct training with the one-shot label and self-supervised tasks, including masked feature prediction and video-text matching.",
"We note that both the information tree and the transformer are jointly trained in an end-to-end manner.",
"We conduct experiments on two benchmark datasets, which demonstrate the effectiveness of IT-OS over state-of-the-arts.",
"Extensive analysis including ablation studies and case studies jointly demonstrate the merits of IT-OS on one-shot video grounding.",
"Our contributions can be summarized as follows: To the best of our knowledge, we take the initiative to investigate one-shot natural language spatial video grounding.",
"We design an end-to-end model named IT-OS via information tree to address the challenges brought by limited labels.",
"By leveraging the language query, several novel modules on the information tree, such as tree construction, branch search, and branch cropping, are proposed.",
"Moreover, to strengthen the deep representation learning under limited supervision signals, we introduce several self-supervised tasks based on the information tree.",
"We experiment with our IT-OS model on two benchmark datasets.",
"Comparisons with the state-of-the-art and extensive model analysis jointly demonstrate the effectiveness of IT-OS.",
"Natural Language Video Grounding.",
"Among numerous multimedia understanding applications (Zhang et al., 2020a,c, 2021d,c, 2020d; Kai et al., 2021; Zhang et al., 2020e), natural language video grounding has attracted the attention of more and more researchers recently.",
"There are mainly three branches, temporal grounding[(Ross et al., 2018; Lu et al., 2019; Zhang et al., 2019; Lin et al., 2020a,b; Zhang et al., 2021a; Li et al., 2022; Gao et al., 2021; Yang et al., 2021)], spatio-temporal grounding[(Tang et al., 2021; Zhang et al., 2020f,g; Su et al., 2021)], and spatial grounding.",
"We focus on the last one.",
"Deep neural network has convincingly demonstrated high capability in many domains (Wu et al., 2020, 2022; Guo et al., 2021; Li et al., 2020b,c,a), especially for video related tasks (Miao et al., 2021; Miao et al.; Xiao et al., 2020, 2021), like video 8708 grounding.",
"For example,(Li et al., 2017) use the neural network to detect language query related objects in the first frame and track the detected object in the whole video.",
"Compared to it, (Yamaguchi et al., 2017) and (Vasudevan et al., 2018) go further.",
"They extract all the object proposals through the pretrained detector, and choose the right proposal described in the text.",
"Supervised training for the natural language video object detection needs high labeling costs.",
"To reduce it, some researchers pay attention to weakly-supervised learning fashion using multiple instances learning(MIL) method (Huang et al., 2018; Chen et al., 2019a; Shi et al., 2019; Chen et al., 2019b; Zhou et al., 2018; Wang et al., 2021a)transfers contextualized knowledge in cross-modal alignment to release the unstable training problem in MIL.",
"Based on contrastive learning (Zhang et al., 2022), (Da et al., 2021) proposes an AsyNCE loss to disentangle false-positive frames in MIL, which allows for mitigating the uncertainty of from negative instance-sentence pairs.",
"Weakly supervised false-positive identification based on contrastive learning has witnessed success in some other domains (Zhang et al., 2021b; Yao et al., 2022) One-shot Learning for Videos.",
"One-shot learning has been applied in some other video tasks.",
"(Yang et al., 2018) proposes a meta-learning-based approach to perform one-shot action localization by capturing task-specific prior knowledge.",
"(Wu et al., 2018) investigates the one-shot video person re-identification task by progressively improving the discriminative capability of CNN via stepwise learning.",
"Different from these works, (Caelles et al., 2017) and (Meinhardt and Leal-Taix, 2020) define the one-shot learning as only one frame being labeled per video.",
"Specifically, (Caelles et al., 2017) use a fully convolutional neural network architecture to solve the one-shot video segmentation task.",
"(Meinhardt and Leal-Taix, 2020) decouple the detection task, and uses the modified Mask-RCNN to predict local segmentation masks.",
"Following this setting, we investigate one-shot natural language spatial video grounding, and devise a novel information-tree based end-to-end framework for the task.",
"Problem Formulation.",
"Given a video V = { v i } i =1 , 2 ,...,I and a natural language query C , spatial video grounding aims to localize the query-described object from all the objects O i = { o ij } j =1 , 2 ,...,J for each frame.",
"I denotes the frame number of the video, and the J is the object number in the video.",
"In one-shot spatial video grounding, solely one frame v i in video V is labeled with the region boxes of the target objects O i .",
"Pipeline of IT-OS.",
"As shown in Figure 2, there are mainly four steps involved in the end-to-end modeling of IT-OS: Firstly, we extract the features from the input video and the input caption.",
"Secondly, we build the information tree to get the representation of the video.",
"The information tree is built upon the frame feature maps, which are the leaf nodes.",
"Leaf nodes will be further merged based on the relevance between node-node and node-query to have non-leaf and root nodes.",
"Nodes on unnecessary branches will be deleted conditioned on the language query.",
"Thirdly, we utilize the transformer encoder to reason on the remaining nodes and language features.",
"Upon the transformer, we devise two self-supervised tasks, i .",
"e",
"., masked feature modeling, and video-text matching, which enhances the representation learning under limited labels.",
"Specifically, for the video, we use ResNet-101(He et al., 2016) as the image encoder to extract the frame feature maps; for the language query, we employ a language model Roberta(Liu et al., 2019).",
"Both the vision encoder and the language encoder are jointly optimized with the whole network.",
"Prediction and Training.",
"We follow the common prediction and training protocol of visual transformers used in other object detection models (Wang et al., 2021b).",
"We input the embedding parameters E de and the multi-model features F de generated by the transformer encoder into the transformer decoder D .",
"Then, the decoder D outputs possible prediction region features for each frame.",
"For each possible region, a possibility P and a bounding box B are generated.",
"We choose the box B with the highest possibility value P for each frame as the target box.",
"During the training process, we first calculate the possible prediction regions.",
"Then, we match the possible regions with the target boxes, and choose the best match for each frame.",
"Finally, use the match to train our IT-OS model.",
"In this section, we will elaborate the information tree modules in detail.",
"We will illustrate how to construct the information tree, how to extract critical information from it and how to design the self-supervised learning based on the tree.",
"To ease the illustration, we take the 6 frames as an example, and show the process in Figure",
"2. 3.2.1 Tree Construction Given the frame features generated by the CNN, we build the information tree by merging adjacent frame features in the specified order.",
"Specifically, the frame features output by the image encoder are the leaf nodes N = { n i } 2 Mi =1 .",
"A sliding window of size 2 and step 2 is applied on these nodes and nodes in the window are evaluated to be merged or not.",
"We calculate the semantic relevance difference between each node pair with the language query, and get the visual relevance between the nodes in each pair.",
"For the visual relevance calculation, we max-pool the feature maps of the i node pair to have the feature vector f 2 i 1 v and f 2 iv .",
"And then, we compute the cosine similarity r ivv between f 2 i 1 v and f 2 iv to be the visual relevance.",
"Next, we calculate the semantic relevance r 2 i 1 tv and r 2 i tv between the text feature f t and the nodes of i node pair: r 2 i 1 tv = (( w t f t ) ( w v f 2 i 1 v ) T ) , (2) r 2 itv = (( w t f t ) ( w v f 2 iv ) T ) , (3) where the w t and w v are learnable parameters, and is the sigmoid activation function.",
"With the relevant difference value, we rank the node pairs and pick out the top .",
"The is a hy-perparameter, which can be set as a constant or a percentage.",
"We merge the node pairs: n new = w mg ( n 2 i 1 + n 2 i ) + b mg , (5) where the w mg and b mg are trainable.",
"Finally, The new node n new replace the old nodes n 2 i 1 and n 2 i in the queue.",
"Repeat the process until there is only one node in the queue.",
"Saving all nodes in the process and the composite relationship between nodes generated in the merging process, we get the information tree.",
"We use a branch to denote a subtree.",
"To filter critical local and global information, we perform branch search and selection.",
"We firstly select branches that contain leaf nodes less than max and more than min .",
"max and min are hyperpa-rameters.",
"We calculate the semantic relevance of branches' root nodes and the language query based on Equation",
"2. Training.",
"During training, we directly select the branch that contains the labeled leaf node and the root node with the highest semantic relevance.",
"This selection improves the training efficiency.",
"Inference.",
"During inference, all frames should be processed.",
"We conduct an iterative search with multiple search steps.",
"For each step, we select the branch with the highest semantic relevance and remove the selected branch from the information tree.",
"After the search, we have multiple selected branches and each branch will be forwarded to the following processes.",
"Note that not all the non-leaf nodes in the selected branches are closely related to the input caption.",
"We remove non-leaf nodes that are with semantic relevance less than , which is a hyperparameter.",
"Their descendant non-leaf nodes are also removed.",
"To reserve enough frame nodes for training, we do not remove the descendant leaf nodes.",
"Instead, we down-weight them with = 0 .",
"5 .",
"For other leaf nodes, = 1 .",
"The remaining leaf nodes and non-leaf nodes represent the critical local information and the global information, respectively.",
"We multiply the feature of node i and the node's semantic relevance r itv : f iv new = f iv r itv , (6) where f iv new is the feature vector input into the transformer.",
"As such, Equation 6 considers both local relevance r tv and global relevance with the language query.",
"We leverage a transformer encoder for these extracted information and the language query.",
"As shown in the Figure 2, we design two self-supervised tasks as: 1) predicting the masked text features, and masked local/global video information; 2) judging whether the text and the video match.",
"For the transformer, the input tokens F in consist of the local information, the global information and the text features, which are three types of tokens.",
"We further introduce 2-D position embedding for video tokens and type embedding for all tokens, which are added to the tokens' features.",
"We predict the original features for masked language tokens and masked video tokens (leaf/non-leaf nodes in the selected branch) using multilayer perceptrons.",
"where the MLP t and MLP v are the multilayer per-ceptrons for text and video features, respectively.",
"We view masked token modeling as feature regression and adopt L2 distance as the loss function.",
"In addition, there will be a mismatched language query at the rate of 50%.",
"We propose to predict whether the video and language are matched, i .",
"e",
"., whether the video contains the event described by the language query, based on the output representation of token [CLS] .",
"When the video and the language are not matched, we will not train the model with the one-shot label.",
"Datasets We consider two video grounding benchmarks for evaluation: (1) VidSTG (Zhang et al., 2020g) is a large-scale benchmark dataset for video grounding, which is constructed based on VidOR (Shang et al., 2019) dataset.",
"VidSTG contains 10 , 000 videos and 99 , 943 sentences with 8711 Method Declarative Sentence Grounding Interrogative Sentence Grounding 0.4 0.5 0.6 Avg 0.4 0.5 0.6 Avg GroundeR 24.56 18.22 13.73 18.85 25.28 18.87 14.39 19.52 STPR 25.68 20.07 14.64 19.89 27.09 21.04 16.00 21.38 STGRN 27.57 20.91 16.25 21.50 28.51 21.89 17.20 22.47 VOGnet 32.08 24.38 19.91 25.75 33.08 25.54 20.85 26.72 OMRN 34.43 27.57 21.91 27.96 35.69 28.74 23.03 29.14 VOGnet* 36.42 29.37 21.95 29.25 36.98 28.35 22.57 29.30 OMRN* 39.54 30.02 22.34 30.64 38.89 30.53 24.10 31.17 IT-OS 46.75 35.81 23.23 35.26 46.16 34.55 25.19 35.30 Table 1: Compared with baselines on VidSTVG.",
"55 , 135 interrogative sentences and 44 , 808 declarative sentences.",
"These sentences describe 79 types of objects appearing in the videos.",
"We follow the official dataset split of (Zhang et al., 2020g).",
"(2) VID-sentence (Chen et al., 2019b) is another widely used video grounding benchmark constructed based on the VID (Russakovsky et al., 2015) dataset.",
"There are 30 categories and 7 , 654 video clips in this dataset.",
"We report the results of all methods on the validation set for the VID-sentence dataset.",
"We obtain similar observations and conclusions on the test set.",
"Implementation Detail For video preprocessing, we random resize the frames, and set the max size is 640 640 .",
"The other data augmentation methods, such as random horizontal flip and random size cropping are used at the same time.",
"During training, the learning rate is by default 0 .",
"00005 , and decays by a factor of 10 for every 35 epochs.",
"The batch size is 1 and the maximum training epoch is 100 .",
"We implement IT-OS in Pytorch and train it on a Linux server.",
"For model hyperparameters, we set = 60% , and = 0 .",
"7 .",
"Most of the natural language spatial video grounding models use the pretrained detection model as the backbone.",
"Thus, like them, we choose the official pretrained MDETR (Kamath et al., 2021) as the parameter basis for target detection of our IT-OS.",
"Evaluation Metrics We follow the evaluation protocol of (Chen et al., 2019b).",
"Specifically, we compute the Intersection over Union (IoU) metric for the predicted spatial bounding box and the ground-truth per frame.",
"The prediction for a video is considered as \"accurate\" if the average IoU of all frames exceeds a threshold .",
"The is set to 0 .",
"4 , 0 .",
"5 , and 0 .",
"6 during testing.",
"Baselines Since existing video grounding methods are not directly applicable to the one-shot setting, we extend several state-of-the-arts as the baselines.",
"Specifically, to have a comprehensive comparison, we consider 1)fully supervised models, including VOGnet (Sadhu et al., 2020), OMRN (Zhang et al., 2020f) and STGRN (Zhang et al., 2020g); and 2) other widely known methods, including video person grounding STPR (Yamaguchi et al., 2017), and visual grounding method, GroundeR (Rohrbach et al., 2016).",
"The experimental results for one-shot video grounding on VidSTVG and VID-sentence datasets are shown in Table 1 and 2, respectively.",
"According to the results, we have the following observations: Not surprisingly, although extended to the video grounding setting, baselines that belong to other domains, including video person grounding STPR and visual grounding GroundeR, achieve 8712 inferior results on video grounding benchmarks.",
"They lack domain-specific knowledge and might fail to effectively model the spatial-temporal relationships of videos and language queries.",
"IT-OS consistently achieves the best performance on two benchmarks and multiple experimental settings with a large margin improvement.",
"Remarkably, IT-OS boosts the performance (Avg) of the previous state-of-the-art OMRN from nearly 28 .",
"0 / 29 .",
"1 / 34 .",
"4 to 35 .",
"3 / 35 .",
"3 / 42 .",
"8 on VidSTVG and VID-sentence, respectively.",
"It demonstrates the superiority of IT-OS on one-shot video grounding.",
"The baselines are implemented with the backbones used in their original papers, which are different from ours.",
"To further disentangle the sources of performance improvement, we re-implement the best-performing baselines (VOGnet*, and OMRN*) with the same object detection backbone, MDETR, as IT-OS.",
"Although there is performance improvement with the new backbone, the best-performing baseline OMRN*, still underperforms IT-OS by over 4 points for the average accuracy on all datasets.",
"It further reveals the effectiveness of our novel model designs eliminating interference with different pre-training parameters.",
"We attribute the improvement to the end-to-end modeling, where different modules can simultaneously benefit from each other.",
"In addition, the proposed information tree alleviates the negative effects of irrelevant frames, and effectively models the interactions between the video global/local information and the language query.",
"Several self-supervised learning tasks based on the information tree enhance the representation learning under limited one-shot labels.",
"We are interested in 1) how different baselines perform under fully supervised settings; 2) how one-shot IT-OS perform compared to these baselines.",
"Towards this end, we train multiple baselines and IT-OS with all labels on the VID-sentence dataset.",
"The experiment results are shown in Table",
"3. From the table, we have the following findings: Remarkably, the performance gap between one-shot IT-OS and the fully supervised OMRN is Method 0.4 0.5 0.6 Avg GroundeR 42.72 33.77 27.05 34.51 STPR 47.95 36.19 30.41 38.18 STGRN 49.25 44.03 34.89 42.72 VOGnet 53.17 43.47 33.77 43.47 OMRN 55.22 46.64 37.50 46.45 IT-OS (OS) 51.87 42.91 33.58 42.79 Table 3: Compared with the baselines on VID-sentence.",
"less than 4% .",
"Such a minor gap demonstrates the effectiveness of IT-OS on learning with limited annotations.",
"This is significant and practical merit since we are more likely to have a limited annotation budget in real-world applications.",
"Surprisingly, one-shot IT-OS can still outperform some weak baselines such as GroundeR and STPR.",
"These results reveal the necessity of end-to-end modeling for video grounding.",
"We are interested in how different building blocks contribute to the effectiveness of IT-OS.",
"To this end, we surgically remove several components from IT-OS and construct different architectures.",
"The investigated components include information tree ( tree ), the branch cropping ( crop ), and the self-supervised training ( self ).",
"It is worth noting that the other components cannot be deleted independently except the branch cropping.",
"Thus, we don't conduct an ablation study for them.",
"Results on VidSTG and VID-sentence datasets are shown in Table 4 and Table 5, respectively.",
"There are several observations: Overall, removing any component incurs a performance drop, demonstrating the necessity and effectiveness of the information tree, branch search & cropping, and self-supervised training.",
"Stacking multiple components outperform the architecture with a single component.",
"This result reveals that the proposed components can benefit from each other in end-to-end training and jointly boost one-shot video grounding.",
"We conduct a case study to visually reveal the ability of the IT-OS in detail.",
"Specifically, we random 8713 Declarative Sentence Grounding Interrogative Sentence Grounding self tree crop 0.4 0.5 0.6 Avg 0.4 0.5 0.6 Avg 39.00 30.52 17.61 29.05 38.78 28.75 19.67 29.07 (cid:88) 40.52 32.32 18.83 30.56 40.82 31.44 20.66 30.97 (cid:88) 42.34 32.65 20.35 31.78 42.26 32.02 21.89 32.06 (cid:88) (cid:88) 44.16 33.38 21.11 32.89 44.55 33.78 23.19 33.84 (cid:88) (cid:88) 44.77 34.62 22.93 34.11 44.30 33.23 24.17 33.90 (cid:88) (cid:88) (cid:88) 46.75 35.81 23.23 35.26 46.16 34.55 25.19 35.30 Table 4: Ablation study on VidSTG dataset.",
"sample 3 videos from the datasets, and sample 6 frames from each video to visualize.",
"We compare our IT-OS model with the baseline method, OMRN, and the fundamental ablation model of the IT-OS, which is removed from the self-supervised module and the information tree.",
"As shown in Figure 3, we have the following key findings: (1) The IT-OS detects the more accurate one from all objects of the video than the best performing previous method.",
"It demonstrates the better representation extraction and analysis capabilities of our model.",
"(2) Even if the target object is selected correctly, the IT-OS localizes a more precise spatial area compared with the previous two stages method.",
"The results reflect the end-to-end model, IT-OS, has more accurate domain knowledge through training the whole model on the target dataset.",
"(3) After adding the information tree and the self-supervised module, the IT-OS outputs more precise bounding boxes.",
"It reveals that combining the two modules introduce stronger supervision signals for model training so that the model has stronger detection ability.",
"In this paper, we introduce the one-shot learning into the natural language spatial video grounding task to reduce the labeling cost.",
"To achieve the goal, the main point is to make full use of only one frame label for each video.",
"The invalid frames Figure 3: Examples of the detection result visualization.",
"unrelated to the input text and target objects bring confounding to the one-shot training process.",
"We design an end-to-end model (IT-OS) via the information tree to avoid it.",
"Specifically, the information tree module merges frames with similar semantics into one node.",
"Then, by searching the tree and cropping the invalid nodes, we can get the complete and valid semantic unit of the video.",
"Finally, two self-supervised tasks are used to make up the insufficient supervision.",
"This work is supported in part by the National Natural Science Foundation of China (Grant No.62037001, No.61836002, No.62072397).",
"This work is also partially funded by Hangzhou Hikvision Digital Technology."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"method",
"method",
"method",
"objective",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"We present a method for generating comparative summaries that highlights similarities and contradictions in input documents.",
"The key challenge in creating such summaries is the lack of large parallel training data required for training typical summarization systems.",
"To this end, we introduce a hybrid generation approach inspired by traditional concept-to-text systems.",
"To enable accurate comparison between different sources, the model first learns to extract pertinent relations from input documents.",
"The content planning component uses deterministic operators to aggregate these relations after identifying a subset for inclusion into a summary.",
"The surface realization component lexicalizes this information using a text-infilling language model.",
"By separately modeling content selection and realization, we can effectively train them with limited annotations.",
"We implemented and tested the model in the domain of nutrition and health rife with inconsistencies.",
"Compared to conventional methods, our framework leads to more faithful, relevant and aggregation-sensitive summarization while being equally fluent.",
"1 1 Introduction Articles written about the same topic rarely exhibit full agreement.",
"To present an unbiased overview of such material, a summary has to identify points of consensus and highlight contradictions.",
"For instance, in the healthcare domain, where studies often exhibit wide divergence of findings, such comparative summaries are generated by human experts for the benefit of the general public.",
"2 Ideally, this capacity will be automated given a large number of relevant articles and continuous influx of new ones that require a summary update to keep 1 Our code and data is available at https://github.c om/darsh10/Nutribullets 2 Examples include https://www.healthline.c om and https://foodforbreastcancer.com .",
"it current.",
"However, standard summarization architectures cannot be utilized for this task since the amount of comparative summaries is not sufficient for their training.",
"In this paper, we propose a novel approach to multi-document summarization based on a neural interpretation of traditional concept-to-text generation systems.",
"Specifically, our work is inspired by the symbolic multi-document summarization system of (Radev and McKeown, 1998) which produces summaries that explicitly highlight agreements, contradictions and other relations across input documents.",
"While their system was based on human-crafted templates and thus limited to a narrow domain, our approach learns different components of the generation pipeline from data.",
"To fully control generated content, we frame the task of comparative summarization as concept-to-text generation.",
"As a pre-processing step, we extract pertinent entity pairs and relations (see Figure 1) from input documents.",
"The Content Selection component identifies the key tuples to be presented in the final output and establishes their comparative relations (e.g., consensus) via aggregation operators.",
"Finally, the surface realization component utilizes a text-infilling language model to translate these relations into a summary.",
"Figure 1 exem-plifies this pipeline, showing selected key pairs (marked in bold), their comparative relation Contradiction (rows 1 &3 and rows 4&5 conflict), and the final summary.",
"3 This generation architecture supports refined control over the summary content, but at the same time does not require large amounts of parallel data for training.",
"The latter is achieved by separately training content selection and content realization components.",
"Since the content selection component operates over relational tuples, it can be robustly trained to identify salient relations utilizing limited parallel data.",
"Aggregation operators are implemented using simple deterministic rules over the database where comparative relations between different rows are apparent.",
"On the other hand, to achieve a fluent summary we have to train a language model on large amounts of data, but such data is readily available.",
"In addition to training benefits, this hybrid architecture enables human writers to explicitly guide content selection.",
"This can be achieved by defining new aggregation operators and including new inference rules into the content selection component.",
"Moreover, this architecture can flexibly support other summarization tasks, such as generation of updates when new information on the topic becomes available.",
"We apply our method for generating summaries of Pubmed publications on nutrition and health.",
"Typically, a single topic in this domain is covered by multiple studies which often vary in their findings making it particularly appropriate for our model.",
"We perform extensive automatic and human evaluation to compare our method against state-of-the-art summarization and text generation techniques.",
"While seq2seq models receive competent fluency scores, our method performs stronger on task-specific metrics including relevance , content faithfulness and aggregation cognisance .",
"Our method is able to produce summaries that receive 3 We compare the selected content with other entries in the database, identifying two contradictions.",
"an absolute 20% more on aggregation cognisance, an absolute 7% more on content relevance and 7% on faithfulness to input documents than the next best baseline in traditional and update settings.",
"Text-to-text Summarization Neural sequence-to-sequence models (Rush et al., 2015; Cheng and Lapata, 2016; See et al., 2017) for document summarization have shown promise and have been adapted successfully for multi-document summarization (Zhang et al., 2018; Lebanoff et al., 2018; Baumel et al., 2018; Amplayo and Lapata, 2019; Fabbri et al., 2019).",
"Despite producing fluent text, these techniques may generate false information which is not faithful to the original inputs (Pudup-pully et al., 2019; Kryscinski et al., 2019), especially in low resource scenarios.",
"In this work, we are interested in producing faithful and fluent text cognizant of aggregation amongst input documents, where few parallel examples are available.",
"Recent language modeling approaches (Devlin et al., 2018; Stern et al., 2019; Shen et al., 2020; Donahue et al., 2020) can also be extended for text completion.",
"Our work is a text-infilling language model where we generate words in place of relation specific blanks to produce a faithful summary.",
"Prior work (Mueller et al., 2017; Fan et al., 2017; Guu et al., 2018) on text generation also control aspects of the produced text, such as style and length.",
"While these typically utilize tokens to control the modification, using prototypes to generate text is also very common (Guu et al., 2017; Li, 2018; Shah et al., 2019).",
"In this work, we utilize aggregation specific prototypes to guide aggregation cognizant surface realization.",
"Data-to-text Summrization Traditional approaches for data-to-text generation have operated on symbolic data from databases.",
"McKeown and Radev (1995); Radev and McKeown (1998); Barzilay et al. (1998) introduce two components of content selection and surface realization.",
"Content selection identifies and aggregates key symbolic data from the database which can then be realized into text using templates.",
"Unlike modern data-to-text systems (Wiseman et al., 2018; Puduppully et al., 2019; Sharma et al., 2019; Wenbo et al., 2019) these approaches capture document consensus and aggregation cognisance.",
"While the neural approaches alleviate the need for human intervention, they do need an abundance of parallel data, Figure 2: Illustrating the flow of our Nutribullets Hybrid system.",
"which are typically from one source only.",
"Hence, modern techniques do not deal with input docu-ments' consensus in low resource settings.",
"Our goal is to generate a text summary y for a food from a pool of multiple scientific abstracts X .",
"In this section, we describe the framework of our Nutribullets Hybrid system, illustrated in Figure 2. 3.1 Overview We attain food health entity-entity relations, for both input documents X and the summary y , from entity extraction and relation classification modules trained on corresponding annotations (Table 2).",
"Notations: For N input documents, we collect XG = {G xp } Np =1 , a database of entity-entity relations G x p .",
"G p = ( e k 1 , e k 2 , r k ) K k =1 is a set of K tuples of two entities e 1 , e 2 and their relation r .",
"r represents relations such as the effect of a nutrition entity e 1 on a condition e 2 (see Table 2).",
"4 We have raw text converted into symbolic data.",
"Similarly, we denote the corpus of summaries as Y = { ( y m , G ym , O ym ) Mm =1 } , where y m is a concise summary, G ym is the set of entity-entity relation tuples and O ym is the realized aggregation, in M data points.",
"Modeling: Joint learning of content selection, information aggregation and text generation for multi-4 We train an entity tagger and relation classifier to predict G and also for computing knowledge based evaluation scores.",
"document summarization can be challenging.",
"This is further exacerbated in our technical domain with few parallel examples and varied consensus amongst input documents.",
"To this end, we propose a solution using Content Selection and Aggregation and Surface Realization models.",
"Raw text from N input documents is converted into a mini-database XG of relation tuples.",
"The content selection and aggregation model operates on such symbolic data.",
"We use XG and Y to train the content selection model.",
"During inference, we identify from XG a subset C of content to present in the final output.",
"In order to produce a summary cognizant of consensus amongst inputs, we identify the aggregation operator O based on C and other relevant tuples in XG .",
"The surface realization model produces a relevant, faithful and aggregation cognizant output.",
"The model is trained only using Y .",
"During inference, the model realizes text using the selected content C and the aggregation operator O .",
"Our content selection model takes a mini-database of entity-entity relation tuples XG as input, and outputs the key tuples C and the aggregation operator O .",
"Content selection and aggregation consists of two parts",
"(i) identifying key content P ( C | XG ) and",
"(ii) subsequently identifying the aggregation operator O using C, XG .",
"volves selecting important, diverse and representative tuples from a database.",
"While clustering and selecting from the database tuples is a possible solution, we model our content selection as a finite Markov decision process (MDP).",
"This allows for an exploration of different tuple combinations while incorporating delayed feedback from various critical sources of supervision (similarity with target tuples, diversity amongst selected tuples etc).",
"We consider a multi-objective reinforcement learning algorithm (Williams, 1992) to train the model.",
"Our rewards (Eq. 2) allow for the selection of informative and diverse relation tuples.",
"The MDP's state is represented as s t = ( t, { c 1 , . . . , c t } , { z 1 , z 2 , ..., z m t } ) where t is the current step, { c 1 , . . . , c t } is the content selected so far and { z 1 , z 2 , ..., z m t } is the remaining entity-entity relation tuples in the m -sized database.",
"The action space is all the remaining tuples plus one special token, Z { STOP } .",
"5 The number of actions is equal to | m t | + 1 .",
"As the number of actions is variable yet finite, we parameterize the policy ( a | s t ) with a model f which maps each action and state ( a, s t ) to a score, in turn allowing a probability distribution over all possible actions using softmax.",
"At each step, the probability that the policy selects z i as a candidate is: ( a = z i | s t ) = exp( f ( t, z i , c i )) (cid:80) m t +1 j =1 exp( f ( t, z j , c j )) (1) where c i = arg max c j ( cos ( z i , c j )) is the selected content closest to z i , z i and c i are the encoded dense vectors, cos ( u, v ) = u v || u |||| v || is the cosine similarity of two vectors and f is a feedforward neural network with non-linear activation functions that outputs a scalar score for each action a .",
"The selection process starts with Z .",
"Our module iteratively samples actions from ( a | s t ) until selecting STOP , ending with selected content C and a corresponding reward.",
"We can even allow for the selection of partitioned tuple sets by adding 5 STOP and NEW LIST get special embeddings.",
"an extra action of \"NEW LIST\", which allows the model to include subsequent tuples in a new group.",
"We consider the following individual rewards: R e = (cid:80) c C cos ( e 1 c , e 1 y ) + cos ( e 2 c , e 2 y ) is the cosine similarity of the structures of the selected content C with the structures present in the summary y (each summary structure accounted with only one c ), encouraging the model to select relevant content.",
"R d = 1 [max i,j ( cos ( c j , c i )) < ] computes the similarity between pairs within selected content C , encouraging the selection of diverse tuples.",
"r p is a small penalty for each action step to encourage concise selection.",
"The multi-objective reward is computed as R = w e R e + w d R d | C | r p , (2) where w e , w d and r p are hyper-parameters.",
"During training the model is updated based on the rewards.",
"During inference the model selects an ordered set of key and diverse relation tuples corresponding to appropriate health conditions.",
"Consensus Aggregation Identifying the consensus amongst the input documents is critical in our multi-document summarization task.",
"We model the aggregation operator of our Content Selection using simple one line deterministic rules as shown in Table 1. The rules are applied to the key C entity-entity relation pairs in context of XG .",
"In our example in Figure 1, O is Contradiction because of rows 1&3 and rows 4&5 (rows 1&3 only would also make it Contradiction).",
"The surface realization model P ( y | O, C ) , performs the critical task of generating a summary guided by both the entity-entity relation tuples C and the aggregation operator O .",
"The model allows for robust, diverse and faithful summarization compared to traditional template and modern seq2seq approaches.",
"We propose to model this process as a prototype-driven text infilling task.",
"The entities from C are used as fixed tokens with relations as special blanks in between these entities.",
"This is prefixed by a prototype summary corresponding to O .",
"For the example shown in Figure 2, we concatenate using | SEN | a randomly sampled contradictory summary \"Kale contains substances ... help fight cancer ... but the human evidence is mixed .\" to C \"<blank> pears <controls> ovarian cancer <de-creases> breast cancer <blank>\" .",
"The infilling language model produces text corresponding to relations between entities while maintaining an overall structure which is cognizant of O .",
"6 The model is trained on the few sample summaries from the training set using G ym and O ym to produce y m .",
"Providing aggregation and content guidance during generation alleviates the low-resource issue.",
"In this section we describe the setting of summary updates.",
"In a real world setting, we would often receive new input documents such as scientific studies about the same subject which necessitate a change in an old summary.",
"In context of our food and health summarization task, the goal is to update an old summary about a food and health condition on receiving results from new scientific studies from Pubmed.",
"Our model can accommodate this scenario fairly easily.",
"We describe the minor changes to the Content Selection and Aggregation and Surface Realization models for such a setting.",
"We are provided an original summary and can extract it's content C (cid:48) and can also construct the mini-database XG from the text of the new documents.",
"We identify the aggregation between the new studies' XG and original summary's content C (cid:48) first.",
"Depending on the aggregation identified, 6 Summaries in our training data are labelled with O ym as belonging to one of the four categories of Under-reported, Population Scoping, Contradiction or Agreement to accommodate such training.",
"corresponding content C is selected from XG .",
"For instance, in case of a contradiction, we are keen on identifying content leading to this contradiction.",
"The subsequent Surface Realization is dependent on O , the selected C and the C (cid:48) present in the original summary ( P ( y | O, C + C (cid:48) ) ).",
"Dataset We utilize a real world dataset for Food and Health summaries, crawled from https:// www.healthline.com/nutrition (Shah et al., 2021).",
"The HealthLine dataset consists of scientific abstracts as inputs and human written summaries as outputs.",
"The dataset consists of 6640 scientific abstracts from Pubmed, each averaging 327 words.",
"The studies in these abstracts are cited by domain experts when writing summaries in the Healthline dataset, forming natural pairings of parallel data.",
"Individual summaries average 24.5 words and are created using an average of 3 Pubmed abstracts.",
"Each food has multiple bullet summaries, where each bullet typically talks about a different health impact (hydration, diabetes etc).",
"We assign each food article randomly into one of the train, development or test splits.",
"Entity tagging and relation classification annotations are provided for the Pubmed abstracts and the healthline summaries.",
"Settings: We consider three settings.",
"1. Single Issue: We use the individual food and health issue summaries as a unique instance of food and single issue setting.",
"We split 1894 instances 80%,10%,10% to train, dev and test.",
"2. Multiple Issues: We group each food's article Pubmed abstract inputs and multiple summary outputs as a single parallel instance.",
"464 instances are split 80%,10%,10% to train, dev and test.",
"3. Summary Update: We consider two kinds of updates new information is fused to an existing summary and new information contradicts an existing summary.",
"For fusion we consider single issue summaries that have multiple conditions from different Pubmed studies (bananas + low blood pressure from one study and bananas + heart health from another study).",
"We partition the Pubmed Automatic Evaluation Human Scores MODELROUGEL KG(G) KG(I) AGRELEVANCEFLUENCY Copy-gen 0.12 0.21 0.50 0.64 1.93 1.89 GraphWriter 0.14 0.03 0.69 0.64 1.86 2.76 Entity Data2text 0.16 0.13 0.57 0.67 2.03 3.43 Transformer 0.20 0.21 0.64 0.67 2.66 3.76 Ours 0.18 0.30 0.76 0.89 3.03 3.46 Table 3: Automatic evaluation Rouge-L score (RougeL), KG in gold(G), KG in input(I) and Aggregation Cognisance (Ag) in our model and various baselines in the single issue setting, is reported.",
"studies to stimulate an update.",
"The contradictory update setting is where we artificially introduce conflicting results in the input document set so that the aggregation changes from Agreement to Contradictory.",
"We have a total of 103 test instances.",
"All models are trained atop of Single issue data.",
"Evaluation We evaluate our systems using the following automatic metrics.",
"Rouge is an automatic metric used to compare the model output with the gold reference (Lin, 2004).",
"KG(G) computes the number of entity-entity pairs with a relation in the gold reference, that are generated in the output.",
"7 This captures relevance in context of the reference.",
"KG(I) , similarly, computes the number of entity-entity pairs in the output that are present in the input scientific abstracts.",
"This measures faithfulness with respect to the input documents.",
"Aggregation Cognisance (Ag) measures the accuracy of the model in producing outputs which are cognizant of the right aggregation from the input, (Under-reported, Contradiction or Agreement).",
"We use a rule-based classifier to identify the aggregation implied by the model output and compare it to the actual aggregation operator based on the input Pubmed studies.",
"In addition to automatic evaluation, we have human annotators score our models on relevance and fluency.",
"Given a reference summary, relevance indicates if the generated text shares similar information.",
"Fluency represents if the generated text is grammatically correct and written in well-formed English.",
"Annotators rate relevance and fluency on a 1-4 likert scale (Albaum, 1997).",
"We have 3 annotators score every data point and report the average across the scores.",
"Baselines In order to demonstrate the effectiveness of our method, we compare it against text2text and 7 We run entity tagging plus relation classification on top of the model output and gold summaries.",
"We match the gold ( e gi , e gj , r g ) tuples using word embedding based cosine similarity with the corresponding entities in the output structures ( e oi , e oj , r o ) .",
"A cosine score exceeds a threshold of 0.7 is set (minimize false positives) to identify a match.",
"data2text state-of-the-art ( sota ) methods.",
"Copy-gen (Text2text): See et al. (2017) is a sota technique for summarization, which can copy from the input or generate words.",
"Transformer (Text2text): Hoang et al. (2019) is a summarization system using a pretrained Transformer.",
"GraphWriter (Data2text): Koncel-Kedziorski et al. (2019) is a graph transformer based model, which generates text using a seed title and a knowledge graph.",
"Takes the database XG as input.",
"Entity (Data2text): Puduppully et al. (2019) is an entity based data2text model, takes XG as input.",
"Implementation Details Our policy network is a three layer feedforward neural network.",
"We use a Transformer (Vaswani et al., 2017) implementation for Surface Realization.",
"We train an off-the-shelf Neural CRF tagger (Yang and Zhang, 2018) for entity extraction.",
"We use BERT (Devlin et al., 2018) based classifiers to predict the relation between two entities in a text trained using crowdsourced annotations from (Shah et al., 2021).",
"Futher implementation details can be found in A. 6 Results In this section, we describe the performance of our Nutribullet Hybrid system and baselines on summarization and summary updates.",
"We report empirical results , human evaluation and present sample outputs, highlighting the benefits of our method.",
"Single and Multi-issues Summarization: We describe the results on the task of generating summaries.",
"Table 3 presents the automatic evaluation results for the food and single issue summarization task.",
"High KG(I) and KG(G) scores for our method indicate that the generated text is faithful to input entities and relevant.",
"In particular, a high Aggregation Cognisance (Ag) score indicates that our model generates summaries which are cog-Transformer (baseline) * Whole grain cereals may protect against obesity , diabetes and certain cancers.",
"nizant of the varying degrees of consensus in the input Pubmed documents.",
"Compared to other baselines we also receive a competitive score on the automatic Rouge metric, beating Copy-gen, Entity Data2text and GraphWriter baselines while falling short (by 1.7%) of the Transformer baseline.",
"The baselines, especially Transformer, tend to produce similar outputs for different inputs (see Table 4).",
"Since a lot of these patterns are learned from the human summaries, Transformer receives a high Rouge score.",
"However, as in the low resource regime, the baseline does not completely capture the content and aggregation, it fails to get a very high KG(G) or Ag score.",
"A similar trend is observed for the other baselines too, which in this low resource regime produce a lot of false information, reflected in their low KG(I) scores.",
"Human evaluation, conducted by considering scores,on a 1-4 Likert scale, from three annotators for each instance, shows the same pattern.",
"Our model is able to capture the most relevant information, when compared against the gold summaries while producing fluent summaries.",
"The Transformer baseline produces fluent summaries, which are not as relevant.",
"The performance is poorer for the Copy-gen, Entity Data2text and GraphWriter models.",
"the gold annotations with respect to the input doc-uments' clustering.",
"Our model conducts the extra task of grouping the selected tuples, using the \"New List\" action.",
"Our model performs better than the baselines on both the KG(I) and KG(G) metrics as seen in Table 5.",
"Again, the pattern of producing very similar and repetitive sentences hurts the baselines.",
"They fail to cover different issues and tend to produce false information, in this low resource setting.",
"Our model scores an 7% higher on KG(G) and 17% higher on KG(I) compared to the next best performance, in absolute terms.",
"Table 4 shows the comparison between the outputs produced by our method and the Transformer baseline on the benefits of whole-grains.",
"Our method conveys more relevant, factual and organized information in a concise manner.",
"Summary Update: We study the efficacy of our model to fuse information in existing summaries on receiving new Pubmed studies.",
"As the KG(G) metric in 6 shows, our model is able to select and fuse more relevant information.",
"Table 7 shows two examples of summaries on flaxseeds where our model successfully fuses new information.",
"evaluation results to demonstrate the efficacy of maintaining Aggregation Cognisance (Ag), which is critical when updating summaries on receiving contradictory results.",
"The high performance in this update setting demonstrates the Surface Realization model's ability to produce aggregation cognizant outputs, in contrast to the baselines that do not learn this reasoning in a low resource regime.",
"Analysis: Information Extraction and Content Aggregation Information extraction is the critical first step performed for the input documents in order to get symbolic data for content selection and aggregation.",
"To this end, we report the performance of the information extraction system, which is composed of two models entity extraction and relation classification.",
"As reported in Table 8, the entity extraction model, a crf-based sequence tagging model, receives a token-level F1 score of 79%.",
"The relation classification model, a BERT based text classifier, receives an accuracy of 69%.",
"The performance of the information extraction models is particularly important for the content aggregation sub-task.",
"In order to analyse this quantitatively, we perform manual analysis of the 179 instances in the dev set and compare them to the system identified aggregation information extraction followed by the deterministic rules in Table 1. Given the simplicity of our rules, system's 78% accuracy in Table 8 is acceptable.",
"Deeper analysis shows that the performance is lowest for Population Scoping and Contradiction with an accuracy of 52% and 56% respectively.",
"The performance of Population Scoping being low is down predominantly to the simplicity of the rules.",
"Most mistakes occur when the input studies are review studies that don't mention any population but analyze results from several past work.",
"Contradiction suffers because of the information extraction system and stronger models for the same should be able to alleviate the errors.",
"While modern models produce fluent text in multi-document summarization, they struggle to capture the consensus amongst the input documents.",
"This inadequacy magnified in low resource domains, is addressed by our model.",
"Our model is able to generate robust summaries which are faithful to content and cognizant of the varying consensus in the input documents.",
"Our approach is applicable in summarization and textual updates.",
"Extensive experiments, automatic and human evaluation underline its impact over state-of-the-art baselines.",
"We thank the MIT NLP group and the reviewers for their helpful discussion and comments.",
"This work is supported by DSO grant DSOCO1905."
] | [
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"result",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"other",
"method",
"other",
"method",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other"
] |
[
"Insufficient or even unavailable training data of emerging classes is a big challenge of many classification tasks, including text classification.",
"Recognising text documents of classes that have never been seen in the learning stage, so-called zero-shot text classification , is therefore difficult and only limited previous works tackled this problem.",
"In this paper, we propose a two-phase framework together with data augmentation and feature augmentation to solve this problem.",
"Four kinds of semantic knowledge (word embeddings, class descriptions, class hierarchy, and a general knowledge graph) are incorporated into the proposed framework to deal with instances of unseen classes effectively.",
"Experimental results show that each and the combination of the two phases achieve the best overall accuracy compared with baselines and recent approaches in classifying real-world texts under the zero-shot scenario.",
"As one of the most fundamental problems in machine learning, automatic classification has been widely studied in several domains.",
"However, many approaches, proven to be effective in traditional classification tasks, cannot catch up with a dynamic and open environment where new classes can emerge after the learning stage (Romera-Paredes and Torr, 2015).",
"For example, the number of topics on social media is growing rapidly, and the classification models are required to recognise the text of the new topics using only general information (e.g., descriptions of the topics) since labelled training instances are unfeasible to obtain for each new topic (Lee et al., 2011).",
"This scenario holds in many real-world domains such Piyawat Lertvittayakumjorn and Jingqing Zhang contributed equally to this project.",
"as object recognition and medical diagnosis (Xian et al., 2017; World Health Organization, 1996).",
"Zero-shot learning (ZSL) for text classification aims to classify documents of classes which are absent from the learning stage.",
"Although it is challenging for a machine to achieve, humans are able to learn new concepts by transferring knowledge from known to unknown domains based on high-level descriptions and semantic representations (Thrun and Pratt, 1998).",
"Therefore, without labelled data of unseen classes, a zero-shot learning framework is expected to exploit supportive semantic knowledge (e.g., class descriptions, relations among classes, and external domain knowledge) to generally infer the features of unseen classes using patterns learned from seen classes.",
"So far, three main types of semantic knowledge have been employed in general zero-shot scenarios (Fu et al., 2018).",
"The most widely used one is semantic attributes of classes such as visual concepts (e.g., colours, shapes) and semantic properties (e.g., behaviours, functions) (Lampert et al., 2009; Zhao et al., 2018).",
"The second type is concept ontology, including class hierarchy and knowledge graphs, which represents relationships among classes and features (Wang et al., 2018; Fergus et al., 2010).",
"The third type is semantic word embeddings which capture implicit relationships between words thanks to a large training text corpus (Socher et al., 2013; Norouzi et al., 2013).",
"Nonetheless, concerning ZSL in text classification particularly, there are few studies exploiting one of these knowledge types and none has considered the combinations of them (Pushp and Srivastava, 2017; Dauphin et al., 2013).",
"Moreover, some previous works used different datasets to train and test, but there is similarity between classes in the training and testing set.",
"For example, in (Dauphin et al., 2013), the class imdb.com in the training set naturally corresponds to the class Movies in A collection of classifiers Phase 1: Coarse-grained Classification Data augmentation A traditional classifier Phase 2: Fine-grained Classification A zero-shotclassifier Feature augmentation \" # \" # if # \" # Refinement *# *# ; , ; *,,# A classifier for / A classifier for | | *# Figure 1: The overview of the proposed framework with two phases.",
"To tackle the zero-shot text classification problem, this paper proposes a novel two-phase framework together with data augmentation and feature augmentation (Figure 1).",
"In addition, four kinds of semantic knowledge including word embeddings, class descriptions, class hierarchy, and a general knowledge graph (ConceptNet) are exploited in the framework to effectively learn the unseen classes.",
"Both of the two phases are based on convolutional neural networks (Kim, 2014).",
"The first phase called coarse-grained classification judges if a document is from seen or unseen classes.",
"Then, the second phase, named fine-grained classification , finally decides its class.",
"Note that all the classifiers in this framework are trained using labelled data of seen classes (and augmented text data) only.",
"None of the steps learns from the labelled data of unseen classes.",
"We propose a novel deep learning based two-phase framework, including coarse-grained and fine-grained classification, to tackle the zero-shot text classification problem.",
"Unlike some previous works, our framework does not require semantic correspondence between classes in a training stage and classes in an inference stage.",
"In other words, the seen and unseen classes can be clearly different.",
"We propose a novel data augmentation technique called topic translation to strengthen the capability of our framework to detect documents from unseen classes effectively.",
"We propose a method to perform feature augmentation by using integrated semantic knowledge to transfer the knowledge learned from seen to unseen classes in the zero-shot scenario.",
"In the remainder of this paper, we firstly explain our proposed zero-shot text classification framework in section 2. Experiments and results, which demonstrate the performance of our framework, are presented in section 3. Related works are discussed in section 4. Finally, section 5 concludes our work and mentions possible future work.",
"Let CS and CU be disjoint sets of seen and unseen classes of the classification respectively.",
"In the learning stage, a training set { ( x 1 , y 1 ) , . . . , ( x n , y n ) } is given where x i is the i -th document containing a sequence of words [ w i 1 , w i 2 , . . . , w it ] and y i CS is the class of x i .",
"In the inference stage, the goal is to predict the class of each document, y i , in a testing set which has the same data format as the training set except that y i comes from CS CU .",
"Note that",
"(i) every class comes with a class label and a class description (Figure 2a);",
"(ii) a class hierarchy showing superclass-subclass relationships is also provided (Figure 2b);",
"(iii) the documents from unseen classes cannot be observed to train the framework.",
"As discussed in the Introduction, our proposed classification framework consists of two phases",
"(Figure 1).",
"The first phase, coarse-grained classification, predicts whether an input document comes from seen or unseen classes.",
"We also apply a data augmentation technique in this phase to help the classifiers be aware of the existence of unseen classes without accessing their real data.",
"Then the second phase, fine-grained classification, finally specifies the class of the input document.",
"It uses either a traditional classifier or a zero-shot classifier depending on the coarse-grained prediction given by Phase 1. Also, feature augmentation based on semantic knowledge is used to provide additional information which relates the document and the unseen classes to generalise the zero-shot reasoning.",
"We use the following notations in Figure 1 and throughout this paper.",
"The list of embeddings of each word in the document x i is denoted by v iw = [ v iw 1 , v iw 2 , . . . , v iw t ] .",
"The embedding of each class label c is denoted by v c , c CS CU .",
"It is assumed that each class has a one-word class label.",
"If the class label has more than one word, a similar one-word class label is provided to find v c .",
"As augmented features, the relationship vector v iw j ,c shows the degree of relatedness between the word w j and the class c according to semantic knowledge.",
"Hence, the list of relationship vectors between each word in x i and each class c CS C U is denoted by v iw,c = [ v iw 1 ,c , v iw 2 ,c , . . . , v iw t ,c ] .",
"We will explain the construction method in section 2.4.1.",
"Given a document x i , Phase 1 performs a binary classification to decide whether y i CS or y i / CS .",
"In this phase, each seen class c s CS has its own CNN classifier (with a subsequent dense layer and a sigmoid output) to predict the confidence that x i comes from the class c s , i.e., p ( y i = c s | x i ) .",
"The classifier uses v iw as an input and it is trained using a binary cross entropy loss with all documents of its class in the training set as positive examples and the rest as negative examples.",
"For a test document x i , this phase computes p ( y i = c s | x i ) for every seen class c s in CS .",
"If there exists a class c s such that p ( y i = c s | x i ) > s , it predicts y i CS ; otherwise, y i / CS .",
"s is a classification threshold for the class c s , calculated based on the threshold adaptation method from (Shu et al., 2017).",
"During the learning stage, the classifiers in Phase 1 use negative examples solely from seen classes, so they may not be able to differentiate the positive class from unseen classes.",
"Hence, when the names of unseen classes are known in the inference stage, we try to introduce them to the classifiers in Phase 1 via augmented data so they can learn to reject the instances likely from unseen classes.",
"We do data augmentation by translating a document from its original seen class to a new unseen class using analogy.",
"We call this process topic translation .",
"In the word level, we translate a word w in a document of class c to a corresponding word w (cid:48) in the context of a target class c (cid:48) by solving an analogy question c : w :: c (cid:48) :?.",
"For example, solving the analogy company:firm :: village:? via word embeddings (Mikolov et al., 2013), we know that the word firm in a document of class com-pany can be translated into the word hamlet in the context of class village.",
"Our framework adopts the 3C OSMUL method by Levy and Goldberg (2014) to solve the analogy question and find candidates of w (cid:48) : w (cid:48) = argmax x V cos( x, c (cid:48) ) cos( x, w ) cos( x, c ) + (cid:15) where V is a vocabulary set and cos( a, b ) is a cosine similarity score between the vectors of word a and word b .",
"Also, (cid:15) is a small number (i.e., 0.001) added to prevent division by zero.",
"In the document level, we follow Algorithm 1 to translate a document of class c into the topic of another class c (cid:48) .",
"To explain, we translate all nouns, verbs, adjectives, and adverbs in the given document to the target class, word-by-word, using the word-level analogy.",
"The word to replace must have the same part of speech as the original word and all the replacements in one document are 1-to-1 relations, enforced by replace dict in Algorithm 1. With this idea, we can create augmented documents for the unseen classes by topic-translation from the documents of seen classes in the training dataset.",
"After that, we can use the augmented documents as additional negative examples for all the CNNs in Phase 1 to make them aware of the tone of unseen classes.",
"Phase 2 decides the most appropriate class y i for x i using two CNN classifiers: a traditional classifier and a zero-shot classifier as shown in Figure 1. If y i CS predicted by Phase 1, the traditional classifier will finally select a class c s CS as y i .",
"Otherwise, if y i / CS , the zero-shot classifier will be used to select a class c u CU as y i .",
"The traditional classifier and the zero-shot classifier have an identical CNN-based structure followed by two dense layers but their inputs and outputs are different.",
"The traditional classifier is a multi-class classifier ( |C S | classes) with a softmax output, so it requires only the word embeddings v iw as an input.",
"This classifier is trained using a cross entropy loss with a training dataset whose examples are from seen classes only.",
"In contrast, the zero-shot classifier is a binary classifier with a sigmoid output.",
"Specifically, it takes a text document x i and a class c as inputs and predicts the confidence p ( y i = c | x i ) .",
"However, in practice, we utilise v iw to represent x i , v c to represent the class c , and also augmented features v iw,c to provide more information on how intimate the connections between words and the class c are.",
"Altogether, for each word w j , the classifier receives the concatenation of three vectors (i.e., [ v iw j ; v c ; v iw j ,c ] ) as an input.",
"This classifier is trained using a binary cross entropy loss with a training data from seen classes only, but we expect this classifier to work well on unseen classes thanks to the distinctive patterns of v iw,c in positive examples of every class.",
"This is how we transfer knowledge from seen to unseen classes in ZSL.",
"The relationship vector v w j ,c contains augmented features we input to the zero-shot classifier.",
"v w j ,c shows how the word w j and the class c are related considering the relations in a general knowledge graph.",
"In this work, we use ConceptNet providing general knowledge of natural language words and phrases (Speer and Havasi, 2013).",
"A subgraph of ConceptNet is shown in Figure 2c as an illustration.",
"Nodes in ConceptNet are words or phrases, while edges connecting two nodes show how they are related either syntactically or semantically.",
"We firstly represent a class c as three sets of nodes in ConceptNet by processing the class hierarchy, class label, and class description of c .",
"(1) the class nodes is a set of nodes of the class label c and any tokens inside c if c has more than one word.",
"(2) superclass nodes is a set of nodes of all the superclasses of c according to the class hierarchy.",
"(3) description nodes is a set of nodes of all nouns in the description of the class c .",
"For example, if c is the class Educational Institution, according to Figure 2a-2b, the three sets of ConceptNet nodes for this class are: (1) educational institution, educational, institution (2) organization, agent (3) place, people, ages, education.",
"To construct v w j ,c , we consider whether the word w j is connected to the members of the three sets above within K hops by particular types of relations or not 1 .",
"For each of the three sets, we construct a vector with 3 K + 1 dimensions.",
"v [0] = 1 if w j is a node in that set; otherwise, v [0] = 0 .",
"for k = 0 , . . . , K 1 : v [3 k + 1] = 1 if there is a node in the set whose shortest path to w j is k + 1 .",
"Otherwise, v [3 k + 1] = 0 .",
"v [3 k + 2] is the number of nodes in the set whose shortest path to w j is k + 1 .",
"v [3 k +3] is v [3 k +2] divided by the total number of nodes in the set.",
"Thus, the vector associated to each set shows how w j is semantically close to that set.",
"Finally, we concatenate the constructed vectors from the three sets to become v w j ,c with 3 (3 K +1) dimensions.",
"We used two textual datasets for the experiments.",
"The vocabulary size of each dataset was limited by 20,000 most frequent words and all numbers were excluded.",
"(1) DBpedia ontology dataset (Zhang et al., 2015) includes 14 non-overlapping classes and textual data collected from Wikipedia.",
"Each class has 40,000 training and 5,000 testing samples.",
"(2) The 20newsgroups dataset 2 has 20 topics each of which has approximately 1,000 documents.",
"70% of the documents of each class were randomly selected for training, and the remaining 30% were used as a testing set.",
"In our experiments, two different rates of unseen classes, 50% and 25%, were chosen and the corresponding sizes of CS and CU are shown in Table 1. For each dataset and each unseen rate, the random",
"1 In this paper, we only consider the most common types of positive relations which are RelatedTo , IsA , PartOf , and AtLocation .",
"They cover 60% of all edges in ConceptNet.",
"selection of ( CS , CU ) were repeated ten times and these ten groups were used by all the experiments with this setting for a fair comparison.",
"All documents from CU were removed from the training set accordingly.",
"Finally, the results from all the ten groups were averaged.",
"In Phase 1, the structure of each classifier was identical.",
"The CNN layer had three filter sizes [3, 4, 5] with 400 filters for each filter size and the subsequent dense layer had 300 units.",
"For data augmentation, we used gensim with an implementation of 3C OSMUL ( Rehurek and Sojka, 2010) to solve the word-level analogy (line 5 in Algorithm 1).",
"Also, the numbers of augmented text documents per unseen class for every setting (if used) are indicated in Table 1. These numbers were set empirically considering the number of available training documents to be translated.",
"In Phase 2, the traditional classifier and the zero-shot classifier had the same structure, in which the CNN layer had three filter sizes [2, 4, 8] with 600 filters for each filter size and the two intermediate dense layers had 400 and 100 units respectively.",
"For feature augmentation, the maximum path length K in ConceptNet was set to 3 to create the relationship vectors 4 .",
"The DBpedia ontology 5 was used to construct a class hierarchy of the DBpedia dataset.",
"The class hierarchy of the 20newsgroups dataset was constructed based on the namespaces initially provided by the dataset.",
"Meanwhile, the classes descriptions of both datasets were picked from Macmillan Dictionary 6 as appropriate.",
"For both phases, we used 200-dim GloVe vectors 7 for word embeddings v w and v c (Penning-ton et al., 2014).",
"All the deep neural networks were implemented with TensorLayer (Dong et al., 2017a) and TensorFlow (Abadi et al., 2016).",
"Dataset Unseen rate | CS | | CU | #Augmented docs per c u DBpedia 25% 11 3 12,000 (14 classes) 50% 7 7 8,000 20news 25% 15 5 4,000 (20 classes) 50% 10 10 3,000 Table 1: The rates of unseen classes and the numbers of augmented documents (per unseen class) in the experiments 4 Based on our observation, most of the related words stay within 3 hops from the class nodes in ConceptNet.",
"We compared each phase and the overall framework with the following approaches and settings.",
"Phase 1: Proposed by (Shu et al., 2017), DOC is a state-of-the-art open-world text classification approach which classifies a new sample into a seen class or reject if the sample does not belong to any seen classes.",
"The DOC uses a single CNN and a 1-vs-rest sigmoid output layer with threshold adjustment.",
"Unlike DOC, the classifiers in the proposed Phase 1 work individually.",
"However, for a fair comparison, we used DOC only as a binary classifier in this phase ( y i CS or y i / CS ).",
"Phase 2: To see how well the augmented feature v w,c work in ZSL, we ran the zero-shot classifier with different combinations of inputs .",
"Particularly, five combinations of v w , v c , and v w,c were tested with documents from unseen classes only (traditional ZSL).",
"The whole framework: (1) Count-based model selected the class whose label appears most frequently in the document as y i .",
"(2) Label similarity (Sappadla et al., 2016) is an unsupervised approach which calculates the cosine similarity between the sum of word embeddings of each class label and the sum of word embeddings of every n-gram ( n = 1 , 2 , 3 ) in the document.",
"We adopted this approach to do single-label classification by predicting the class that got the highest similarity score among all classes.",
"(3) RNN AutoEncoder was built based on a Seq2Seq model with LSTM (512 hidden units), and it was trained to encode documents and class labels onto the same latent space.",
"The cosine similarity was applied to select a class label closest to the document on the latent space.",
"(4) RNN+FC refers to the architecture 2 proposed in (Pushp and Srivastava, 2017).",
"It used an RNN layer with LSTM (512 hidden units) followed by two dense layers with 400 and 100 units respectively.",
"(5) CNN+FC replaced the RNN in the previous model with a CNN, which has the identical structure as the zero-shot classifier in Phase 2. Both RNN+FC and CNN+FC predicted the confidence p ( y i = c | x i ) given v w and v c .",
"The class with the highest confidence was selected as y i .",
"For Phase 1, we used the accuracy for binary classification ( y, y i CS or y, y i / CS ) as an evaluation metric.",
"In contrast, for Phase 2 and the whole framework, we used the multi-class classification accuracy ( y i = y i ) as a metric.",
"The evaluation of Phase 1 (coarse-grained classification) checks if each x i was correctly delivered to the right classifier in Phase 2. Table 3 shows the performance of Phase 1 with and without augmented data compared with DOC.",
"Considering test documents from seen classes only, our framework outperformed DOC on both datasets.",
"In addition, the augmented data improved the accuracy of detecting documents from unseen classes clearly and led to higher overall accuracy in every setting.",
"Despite no real labelled data from unseen classes, the augmented data generated by topic translation helped Phase 1 better detect documents from unseen classes.",
"Table 4 shows some examples of augmented data from the DBpedia dataset.",
"Even if they are not completely understandable, they contain the tone of the target classes.",
"Although Phase 1 provided confidence scores for all seen classes, we could not use them to predict y i directly since the distribution of scores of positive examples from different CNNs are different.",
"Figure 3 shows that the distribution of confidence scores of the class Artist had a noticeably larger variance and was clearly different from the class Building.",
"Hence, even if p ( y i = Building | x i ) > p ( y i = Artist | x i ) , we cannot conclude that x i is more likely to come from the class Building.",
"This is why a traditional classifier in Phase 2 is necessary .",
"Regarding Phase 2, fine-grained classification is in charge of predicting y i and it employs two classifiers which were tested separately.",
"Assuming Phase 1 is perfect, the classifiers in Phase 2 should be able to find the right class.",
"The purpose of Table 5 is to show that the traditional CNN classifier in Phase 2 was highly accurate.",
"Besides, given test documents from unseen classes only, the performance of the zero-shot classifier in Phase 2 is shown in Table 6.",
"Based on the construction method, v w,c quantified the relatedness between words and the class but, unlike v w and v c , it did not include detailed semantic meaning.",
"Thus, the classifier using v w,c only could not find out the correct unseen class and neither [ v w ; v w,c ] and [ v c ; v w,c ] could do.",
"On the other Dataset DBpedia 20news Input \\ Unseen rate 50% 25% 50% 25% v w 0.993 0.992 0.878 0.861 Table 5: The accuracy of the traditional classifier in Phase 2 given documents from seen classes only.",
"hand, the combination of [ v w ; v c ] , which included semantic embeddings of both words and the class label, increased the accuracy of predicting unseen classes clearly.",
"However, the zero-shot classifier fed by the combination of all three types of inputs [ v w ; v c ; v w,c ] achieved the highest accuracy in all settings.",
"It asserts that the integration of semantic knowledge we proposed is an effective means for knowledge transfer from seen to unseen classes in the zero-shot scenario.",
"Last but most importantly, we compared the whole framework with four baselines as shown in Table 2. First, the count-based model is a rule-based model so it failed to predict documents from seen classes accurately and resulted in unpleasant overall results.",
"This was similar to the label similarity approach even though it had higher degree of flexibility.",
"Next, the RNN Autoencoder was trained without any supervision since y i was predicted based on the cosine similarity.",
"We believe the implicit semantic relatedness between classes caused the failure of the RNN Autoencoder.",
"Besides, the CNN+FC and RNN+FC had same inputs and outputs and it was clear that CNN+FC performed better than RNN+FC in the experiment.",
"However, neither CNN+FC nor RNN+FC was able to transfer the knowledge learned from seen to unseen classes.",
"Finally, our two-phase framework has competitive prediction accuracy on unseen classes while maintaining the accuracy on seen classes.",
"This made it achieve the highest overall accuracy on both datasets and both unseen rates.",
"In conclusion, by using integrated semantic knowledge, the proposed two-phase framework with data and feature augmentation is a promising step to tackle this challenging zero-shot problem.",
"Furthermore, another benefit of the framework is high flexibility.",
"As the modules in Figure 1 has less coupling to one another, it is flexible to improve or customise each of them.",
"For example, we can deploy an advanced language understanding model, e.g., BERT (Devlin et al., 2018), as a traditional classifier.",
"Moreover, we may replace ConceptNet with a domain-specific knowledge graph to deal with medical texts.",
"There are a few more related works to discuss besides recent approaches we compared with in the experiments (explained in section 3.3).",
"Dauphin et al. (2013) predicted semantic utterance of texts by mapping class labels and text samples into the same semantic space and classifying each sample to the closest class label.",
"Nam et al. (2016) learned the embeddings of classes, documents, and words jointly in the learning stage.",
"Hence, it can perform well in domain-specific classification, but this is possible only with a large amount of training data.",
"Overall, most of the previous works exploited semantic relationships between classes and documents via embeddings.",
"In contrast, our proposed framework leverages not only the word embeddings but also other semantic knowledge.",
"While word embeddings are used to solve analogy for data augmentation in Phase 1, the other semantic knowledge sources (in Figure",
"2) are integrated into relationship vectors and used as augmented features in Phase 2. Furthermore, our framework does not require any semantic correspondences between seen and unseen classes.",
"In the face of insufficient data, data augmentation has been widely used to improve generalisation of deep neural networks especially in computer vision (Krizhevsky et al., 2012) and multimodality (Dong et al., 2017b), but it is still not a common practice in natural language processing.",
"Recent works have explored data augmentation in NLP tasks such as machine translation and text classification (Saito et al., 2017; Fadaee et al., 2017; Kobayashi, 2018), and the algorithms were designed to preserve semantic meaning of an original document by using synonyms (Zhang and Le-Cun, 2015) or adding noises (Xie et al., 2017), for example.",
"In contrast, our proposed data augmentation technique translates a document from one meaning (its original class) to another meaning (an unseen class) by analogy in order to substitute unavailable labelled data of the unseen class.",
"Apart from improving classification accuracy, feature augmentation is also used in domain adaptation to transfer knowledge between a source and a target domain (Pan et al., 2010b; Fang and Chiang, 2018; Chen et al., 2018).",
"An early research paper applying feature augmentation in NLP is Daume III (2007) which targeted domain adaptation on sequence labelling tasks.",
"After that, feature augmentation was used in several NLP tasks such as cross-domain sentiment classification (Pan et al., 2010a), multi-domain machine translation (Clark et al., 2012), semantic argument classification (Batubara et al., 2018), etc.",
"Our work is different from previous works not only that we applied this technique to zero-shot text classification but also that we integrated many types of semantic knowledge to create the augmented features.",
"To tackle zero-shot text classification, we proposed a novel CNN-based two-phase framework together with data augmentation and feature augmentation.",
"The experiments show that data augmentation by topic translation improved the accuracy in detecting instances from unseen classes, while feature augmentation enabled knowledge transfer from seen to unseen classes for zero-shot learning.",
"Thanks to the framework and the integrated semantic knowledge, our work achieved the highest overall accuracy compared with all the baselines and recent approaches in all settings.",
"In the future, we plan to extend our framework to do multi-label classification with a larger amount of data, and also study how semantic units defined by linguists can be used in the zero-shot scenario.",
"We would like to thank Douglas McIlwraith, Nontawat Charoenphakdee, and three anonymous reviewers for helpful suggestions.",
"Jingqing and Piyawat would also like to thank the support from the LexisNexis R (cid:13) Risk Solutions HPCC Systems R (cid:13) academic program and Anandamahidol Foundation, respectively."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"objective",
"other",
"objective",
"other",
"other",
"objective",
"other",
"other",
"other",
"method",
"objective",
"abstain",
"result",
"objective",
"other",
"other"
] |
[
"A core part of linguistic typology is the clas-sification of languages according to linguistic properties, such as those detailed in the World Atlas of Language Structure (WALS).",
"Doing this manually is prohibitively time-consuming, which is in part evidenced by the fact that only 100 out of over 7,000 languages spoken in the world are fully covered in WALS.",
"We learn distributed language representations, which can be used to predict typological properties on a massively multilingual scale.",
"Additionally, quantitative and qualitative analyses of these language embeddings can tell us how language similarities are encoded in NLP models for tasks at different typological levels.",
"The representations are learned in an unsupervised manner alongside tasks at three typological levels: phonology (grapheme-to-phoneme prediction, and phoneme reconstruction), morphology (morphological inflection), and syntax (part-of-speech tagging).",
"We consider more than 800 languages and find significant differences in the language representations encoded, depending on the target task.",
"For instance, although Norwegian Bokmal and Danish are typologically close to one another, they are phonologically distant, which is reflected in their language embeddings growing relatively distant in a phonological task.",
"We are also able to predict typological features in WALS with high accuracies, even for unseen language families.",
"For more than two and a half centuries, linguistic typologists have studied languages with respect to their structural and functional properties (Haspel-math, 2001; Velupillai, 2012).",
"Although typology has a long history (Herder, 1772; Gabelentz, 1891; Greenberg, 1960, 1974; Dahl, 1985; Com-rie, 1989; Haspelmath, 2001; Croft, 2002), computational approaches have only recently gained popularity (Dunn et al., 2011; Walchli, 2014; Ostling, 2015; Cotterell and Eisner, 2017; Asgari and Schutze, 2017; Malaviya et al., 2017; Bjerva and Augenstein, 2018).",
"One part of traditional typological research can be seen as assigning sparse explicit feature vectors to languages, for instance manually encoded in databases such as the World Atlas of Language Structures (WALS, Dryer and Haspelmath, 2013).",
"A recent development which can be seen as analogous to this is the process of learning distributed language representations in the form of dense real-valued vectors, often referred to as language embeddings (Tsvetkov et al., 2016; Ostling and Tiedemann, 2017; Malaviya et al., 2017).",
"We hypothesise that these language embeddings encode typological properties of language, reminiscent of the features in WALS, or even of parameters in Chomsky's Principles and Parameters framework (Chomsky, 1993).",
"In this paper, we model languages in deep neural networks using language embeddings, considering three typological levels: phonology, morphology and syntax.",
"We consider four NLP tasks to be representative of these levels: grapheme-to-phoneme prediction and phoneme reconstruction, morphological inflection, and part-of-speech tagging.",
"We pose three research questions ( RQ s): RQ 1 Which typological properties are encoded in task-specific distributed language representations, and can we predict phonological, morphological and syntactic properties of languages using such representations?",
"One of our key findings is that language representations differ considerably depending on the target task.",
"For instance, for grapheme-to-phoneme mapping, the differences between the representations for Norwegian Bokmal and Danish increase rapidly during training.",
"This is due to the fact that, although the languages are typologically close to one another, they are phonologically distant.",
"Computational linguistics approaches to typology are now possible on a larger scale than ever before due to advances in neural computational models.",
"Even so, recent work only deals with fragments of typology compared to what we consider here.",
"Computational typology has to a large extent focused on exploiting word or morpheme alignments on the massively parallel New Testament, in approximately 1,000 languages, in order to infer word order ( Ostling, 2015) or assign linguistic categories (Asgari and Schutze, 2017).",
"Walchli (2014) similarly extracts lexical and grammatical markers using New Testament data.",
"Other work has taken a computational perspective on language evolution (Dunn et al., 2011), and phonology (Cot-terell and Eisner, 2017; Alishahi et al., 2017).",
"Language embeddings In this paper, we follow an approach which has seen some attention in the past year, namely the use of distributed language representations, or language embeddings .",
"Some typological experiments are carried out by Ostling and Tiedemann (2017), who learn language embeddings via multilingual language modelling and show that they can be used to reconstruct genealogical trees.",
"Malaviya et al. (2017) learn language embeddings via neural machine translation, and predict syntactic, morphological, and phonetic features.",
"Contributions Our work bears the most resemblance to Bjerva and Augenstein (2018), who fine-tune language embeddings on the task of PoS tagging, and investigate how a handful of typological properties are coded in these for four Uralic languages.",
"We expand on this and thus contribute to previous work by:",
"(i) introducing novel qualitative investigations of language embeddings, in addition to thorough quantitative evaluations;",
"(ii) considering four tasks at three different typological levels;",
"(iii) considering a far larger sample of several hundred languages; and",
"(iv) grounding the language representations in linguistic theory.",
"There are several methods for obtaining distributed language representations by training a recurrent neural language model (Mikolov et al., 2010) simultaneously for different languages (Tsvetkov et al., 2016; Ostling and Tiedemann, 2017).",
"In these recurrent multilingual language models with long short-term memory cells (LSTM, Hochreiter and Schmidhuber, 1997), languages are embedded into an n -dimensional space.",
"In order for multilingual parameter sharing to be successful in this setting, the neural network is encouraged to use the language embeddings to encode features of language.",
"In this paper, we explore the embeddings trained by Ostling and Tiedemann (2017), both in their original state, and by further tuning them for our four tasks.",
"These are trained by training a multilingual language model with language representations on a collection of texts from the New Testament, covering 975 languages.",
"While other work has looked at the types of representations encoded in different layers of deep neural models (Kadar et al., 2017), we choose to look at the representations only in the bottom-most embedding layer.",
"This is motivated by the fact that we look at several tasks using different neural architectures, and want to ensure comparability between these.",
"We now turn to the theoretical motivation of the language representations.",
"The field of NLP is littered with distributional word representations, which are theoretically justified by distributional semantics (Harris, 1954; Firth, 1957), summarised in the catchy phrase You shall know a word by the company it keeps (Firth, 1957).",
"We argue that language embeddings, or distributed representations of language, can also be theoretically motivated by Chomsky's Principles and Parameters framework (Chomsky and Lasnik, 1993; Chomsky, 1993, 2014).",
"Language embeddings encode languages as dense real-valued vectors, in which the dimensions are reminiscent of the parameters found in this framework.",
"Briefly put, Chomsky argues that languages can be described in terms of principles 908 (abstract rules) and parameters (switches) which can be turned either on or off for a given language (Chomsky and Lasnik, 1993).",
"An example of such a switch might represent the positioning of the head of a clause (i.e. either head-initial or head-final).",
"For English, this switch would be set to the initial' state, whereas for Japanese it would be set to the final' state.",
"Each dimension in an n dimensional language embedding might also describe such a switch, albeit in a more continuous fashion.",
"The number of dimensions used in our language representations, 64 , is a plausible number of parameter vector dimensions (Dunn et al., 2011).",
"If we were able to predict typological features using such representations, this lends support to the argument that languages, at the very least, can be represented by theoretically motivated parameter vectors, with the given dimensionality.",
"In the experiments for RQ1 and RQ2 we predict typological features extracted from WALS (Dryer and Haspelmath, 2013).",
"We choose to investigate three linguistic levels of language: phonology, morphology, and syntax.",
"This is motivated by three factors:",
"(i) these features are related to NLP tasks for which data is available for a large language sample;",
"(ii) the levels cover a range from basic phonological and morphological structure, to syntactic structure, allowing us to approach our research question from several angles; and",
"(iii) the features in these categories are coded in WALS for a relatively large selection of languages.",
"We extract the three feature sets which represent these typological levels of language from WALS.",
"1 Phonological features cover 20 features ranging from descriptions of the consonant and vowel inventories of a particular language to presence of tone and stress markers.",
"As an example, consider WALS feature 13A (Tone).",
"2 This feature takes three feature values:",
"(i) no tones ,",
"(ii) simple tone system , and",
"(iii) complex tone system .",
"Most Indo-European languages, such as English, Spanish, and Russian, do not have any tones",
"(i).",
"Norwegian and Swedish are exceptions to this, as they both have simple tone systems",
"(ii) similar to that in Japanese.",
"Finally, complex tone systems",
"(iii) 1 These are defined in the chapter structure in WALS: http://wals.info/chapter 2 http://wals.info/feature/13A are typically found in several African languages as well as languages in South-East Asia.",
"Morphological features cover a total of 41 features.",
"We consider the features included in the Morphology chapter as well as those included in the Nominal Categories chapter to be morphological in nature.",
"3 This includes features such as the number of genders, usage of definite and indefinite articles and reduplication.",
"As an example, consider WALS feature 37A (Definite Articles).",
"4 This feature takes five values:",
"(i) Definite word distinct from demonstrative ,",
"(ii) Demonstrative word used as definite article ,",
"(iii) Definite affix ,",
"(iv) No definite, but indefinite article , and",
"(v) No definite or indefinite article .",
"Again, most Indo-European languages fall into category",
"(i), with Norwegian, Swedish, and Danish as relative outliers in category",
"(iii).",
"Word-order features cover 56 features, encoding properties such as the ordering of subjects, objects and verbs.",
"As an example, consider WALS feature 81A (Order of Subject, Object and Verb).",
"5 This feature takes all possible combinations of the three word classes as its feature values, with the addition of a special class for No dominant order .",
"Most languages in WALS fall into the categories SOV (41.0%) and SVO (35.4%).",
"The general set-up of the experiments in this paper is as follows.",
"We aim at answering our three research questions dealing with typological properties and similarities as encoded in language embeddings.",
"In order to do this, we attempt to predict typological features as they are encoded in WALS, using language embeddings which have been fine-tuned during training on tasks related to different typological properties.",
"The main interest in this paper is therefore not on how well each model performs on a given NLP task, but rather on what the resulting language embeddings encode.",
"Concretely, we use language embeddings ~l i from a given training iteration of a given task as input to a k-NN classifier, which outputs the typological class a language belongs to (as coded in WALS).",
"We train separate classifiers for each 3 This choice was made as, e.g., feature 37A (Definite Articles) includes as a feature value whether a definite affix is used.",
"typological property and each target task.",
"When i = 0 , this indicates the pre-trained language embeddings as obtained from Ostling and Tiedemann (2017).",
"Increasing i indicates the number of iterations over which the system at hand has been trained.",
"In each experiment, for a given iteration i , we consider each ~l i L where L is the intersection L task L pre , where L task is the set of languages for a given task, and L pre is the set of languages for which we have pre-trained embeddings.",
"All results in the following sections are the mean of three-fold cross-validation, and the mean over the WALS features in each given category.",
"6 We run the experiments in a total of three settings:",
"(i) evaluating on randomly selected lan-guage/feature pairs from a task-related feature set;",
"(ii) evaluating on an unseen language family from a task-related feature set ;",
"(iii) evaluating on randomly selected language/feature pairs from all WALS feature sets.",
"This allows us to establish how well we can predict task-related features given a random sample of languages",
"(i), and a sample from which a whole language family has been omitted",
"(ii).",
"Finally,",
"(iii) allows us to compare the task-specific feature encoding with a general one.",
"A baseline reference is also included, which is defined as the most frequently occurring typological trait within each category.",
"7 For instance, in the morphological experiments, we only consider the 41 WALS features associated with the categories of morphology and nominal categories.",
"The overlap between languages for which we have data for morphological inflection and languages for which these WALS features are coded is relatively small (fewer than 20 languages per feature).",
"This small dataset size is why we have opted for a non-parametric k -Nearest Neighbours classifier for the typological experiments.",
"We use k = 1 , as several of the features take a large number of class values, and might only have a single instance represented in the training set.",
"Table 1 shows the datasets we consider (detailed in later sections), the typological class they are related to, the size of the language sample in the 6 The mean accuracy score is a harsh metric, as some features are very difficult to predict due to them, e.g., being very language specific or taking a large number of different values.",
"7 The languages represented in several of the tasks under consideration have a high Indo-European bias.",
"Hence, several of the properties have a relatively skewed distribution, providing us with a strong baseline.",
"task, and the size of the intersection L task L pre .",
"The number of pre-trained language embeddings, | L pre | , is 975 in all cases.",
"We focus the evaluation for each task-specific language embedding set on the typological property relevant to that dataset.",
"In addition, we also evaluate on a set of all typological properties in WALS.",
"Note that the evaluation on all properties is only comparable to the evaluation on each specific property, as the set of languages under consideration differs between tasks.",
"We use grapheme-to-phoneme (G2P) as a proxy of a phonological task (Deri and Knight, 2016; Peters et al., 2017).",
"The dataset contains over 650,000 such training instances, for a total of 311 languages (Deri and Knight, 2016).",
"The task is to produce a phonological form of a word, given its orthographic form and the language in which it is written.",
"Crucially, this mapping is highly different depending on the language at hand.",
"For instance, take the word written variation , which exists in both English and French: (English, variation) -> -vE@ri\"eIS@n (French, variation) -> -vaKja\"sjO 5.1.1 Experiments and Analysis We train a sequence-to-sequence model with attention for the task of grapheme-to-phoneme mapping.",
"8 The model takes as input the characters of each source form together with the language embedding for the language at hand and outputs a predicted phonological form.",
"Input and output alphabets are shared across all languages.",
"The system is trained over 3,000 iterations.",
"Quantitative results Since we consider Grapheme-to-Phoneme as a phonological task, we focus the quantitative evaluation on phonological features from WALS.",
"We run experiments using the language embeddings as features for a simple 8 The system is described in detail in Section 8.",
"k-NN classifier.",
"The results in Table 2 indicate that G2P is a poor proxy for language phonology, however, as typological properties pertaining to phonology are not encoded.",
"That is to say, the k-NN results do not outperform the baseline, and performance is on par even after fine tuning (no significant difference, p > 0 . 05 ).",
"In the unseen setting, however, we find that pre-trained language embeddings are significantly better ( p < 0 . 05 ) at predicting the phonological features than both fine-tuned ones and the baseline.",
"Qualitative results We now turn to why this task is not a good proxy of phonology.",
"The task of grapheme-to-phoneme is more related to the processes in the diachronic development of the writing system of a language than it is to the actual genealogy or phonology of the language.",
"This is evident when considering the Scandinavian languages Norwegian and Danish which are typologically closely related, and have almost exactly the same orthography.",
"In spite of this fact, the phonology of the two languages differs drastically due to changes in Danish phonology, which impacts the mapping from graphemes to phonemes severely.",
"Hence, the written forms of the two languages should be very similar, which makes the language embeddings based on language modelling highly similar to one another.",
"However, when the embeddings are fine-tuned on a task taking orthography as well as phonology into account, this is no longer the case.",
"Figure 1 shows that the language embeddings of Norwegian Bokmal and Danish diverge from each other, which is especially striking when comparing to the converging with the typologically much more distant languages Tagalog and Finnish.",
"However, the absolute difference between Norwegian Bokmal and both Taga-log/Finnish is still greater than that of Norwegian Bokmal and Danish even after 3,000 iterations.",
"As a second phonological task, we look at phonological reconstruction using word lists from the Automated Similarity Judgement Program (ASJP, Wichmann et al. (2016)).",
"This resource contains word lists of at least 40 words per language for more than 4,500 languages.",
"The task we consider is to reproduce a given source phonological form, also given the language, for instance: (English, wat3r) -> wat3r The intuition behind these experiments is that languages with similar phonetic inventories will be grouped together, as reflected in changes in the language embeddings.",
"We train a sequence-to-sequence model with attention, framed as an auto-encoding problem, using the same sequence-to-sequence architecture and setup as for the grapheme-to-phoneme task.",
"The model takes as input the characters of each source form together with the language embedding for the language at hand and outputs the predicted target form which is identical to the source form.",
"Quantitative results Since we also consider phonological reconstruction to be a phonological task, we focus the quantitative evaluation on phonological features from WALS.",
"As with the G2P experiments, Table 3 shows that the fine-tuned embeddings do not offer predictive power above the most frequent class baseline ( p > 0 . 05 ).",
"Observing the changes in the language embeddings reveals that the embeddings are updated to 911 a very small extent, indicating that these are not used by the model to a large extent.",
"This can be explained by the fact that the task is highly similar for each language, and that the model largely only needs to learn to copy the input string.",
"We do, however find that evaluating on a set with an unseen language family does yield results significantly above baseline levels with the pre-trained embeddings ( p < 0 . 05 ), which together with the G2P results indicate that the language modelling objective does encode features to some extent related to phonology.",
"We use data from the Unimorph project, specifically the data released for the SIGMORPHON-2017 shared task (Cotterell et al., 2017).",
"9 This data covers 52 languages, thereby representing a relatively large typological variety.",
"Whereas the shared task has two subtasks, namely inflection and paradigm cell filling, we only train our system using the inflection task.",
"This was a choice of convenience, as we are not interested in solving the task of morphological paradigm cell filling, but rather observing the language embeddings as they are fine-tuned.",
"Furthermore we focus on the high-resource setting in which we have access to 10,000 training examples per language.",
"The inflection subtask is to generate a target inflected form given a lemma with its part-of-speech as in the following example: (release, V;V.PTCP;PRS) -> releasing 6.1.1 Morphological experiments We train a sequence-to-sequence model with attention over 600 iterations, using the same sequence-to-sequence architecture from the previous tasks.",
"Quantitative results Since this is considered a morphological task, Table 4 contains results using the language embeddings to predict morphological properties.",
"The fine-tuned language embeddings in this condition are able to predict morphological properties in WALS significantly above baseline levels and pre-trained embeddings ( p < 0 . 05 ).",
"We further also obtain significantly better results in the unseen setting ( p < 0 . 05 ), in which the language family evaluated on is not used in training.",
"This indicates that these properties are important to the model when learning the task at hand.",
"Qualitative results The performance of the fine-tuned embeddings on prediction of morphological features is above baseline for most features.",
"For 18 out of the 35 features under consideration both the baseline and k-NN performances are at 100% from the outset, so these are not considered here.",
"10 Figure 2 shows two of the remaining 17 features.",
"11 We can observe two main patterns: For some features such as 49A (Number of cases), fine-tuning on morphological inflection increases the degree to which the features are encoded in the language embeddings.",
"This can be explained by the fact that the number of cases in a language is central to how morphological inflection is treated by the model.",
"For instance, languages with the same number of cases might benefit from sharing certain parameters.",
"On the other hand, the feature 38A (Indefinite Articles) mainly encodes whether the indefinite word is distinct or not from the word for one , and it is therefore not surprising that this is not learned in morphological inflection.",
"We use PoS annotations from version 2 of the Universal Dependencies (Nivre et al., 2016).",
"As 10 This is partially explained by the fact that certain categories were completely uniform in the small sample as well as by the Indo-European bias in the sample.",
"we are mainly interested in observing the language embeddings, we down-sample all training sets to 1,500 sentences (approximate number of sentences of the smallest data sets) so as to minimise any size-related effects.",
"We approach the task of PoS tagging using a fairly standard bi-directional LSTM architecture based on Plank et al. (2016), detailed in Section 8.",
"Quantitative results Table 5 contains results on WALS feature prediction using language embeddings fine-tuned on PoS tagging.",
"We consider both the set of word order features, which are relevant for the dataset, and a set of all WALS features.",
"Using the fine-tuned embeddings is significantly better than both the baseline and the pre-trained embeddings ( p < 0 . 05 ), in both the random and the unseen conditions, indicating that the model learns something about word order typology.",
"This can be expected, as word order features are highly relevant when assigning a PoS tag to a word.",
"Qualitative results We now turn to the syntactic similarities between languages as encoded in the fine-tuned language embeddings.",
"We consider a set of the North-Germanic languages Icelandic, Swedish, Norwegian Nynorsk, Danish, Norwegian Bokmal, the West-Germanic language English, and the Romance languages Spanish, French, and Italian.",
"We apply hierarchical clustering using UPGMA (Michener and Sokal, 1957) to the pre-trained language embeddings of these languages.",
"12 Striking here is that English is grouped together with the Romance languages.",
"This can be explained by the fact that English has a large amount of vocabulary stemming from Romance loan words, which under the task of language modelling results in a higher similarity with such languages.",
"We then cluster the embeddings fine-tuned on PoS tagging in the same way.",
"In this condition, English has joined the rest of the Germanic languages' cluster.",
"This can be explained by the fact that, in terms of word ordering and morphosyntax, English is more similar to these languages than it is to the Romance ones.",
"We can also observe that, whereas the orthographically highly similar Norwegian Bokmal and Danish form the first sub-cluster in the pre-trained condition, Norwegian Nynorsk replaces Danish in this pairing when fine-tuning on PoS tagging.",
"This can be explained by the fact that morpho-syntactic similarities between the two written varieties of Norwegian are more similar to one another.",
"The system architecture used in the sequence-to-sequence tasks, i.e., G2P, phonological reconstruction, and morphological inflection is depicted in Figure 3.",
"The system is based on that developed by Ostling and Bjerva (2017) and is implemented using Chainer (Tokui et al., 2015).",
"We modify the architecture by concatenating a language embedding, ~l , to the character embeddings before encoding.",
"In the grapheme-to-phoneme and phonological reconstruction experiments, the one-hot feature mapping before decoding is irrelevant and therefore omitted.",
"The rest of the hyper-parameters are the same as in Ostling and Bjerva (2017).",
"This system is based on Plank et al. (2016), and is implemented using DyNet (Neubig et al., 2017).",
"We train using the Adam optimisation algorithm (Kingma and Ba, 2014) over a maximum of 10 epochs using early stopping.",
"We make two modifi-cations to the bi-LSTM architecture of Plank et al. 12 Included in the Supplements due to space restrictions.",
"L a n g u a g e M od e lli n g P o S t a gg i n g Romance Germanic Figure 4: Language similarities changing during fine tuning.",
"146 147 148 149",
"Figure 3: System architecture used in the seq-to-seq tasks (morphological inflection, G2P, and phonological reconstruction).",
"Figure adapted with permission from Ostling and Bjerva (2017).",
"(2016).",
"First of all, we do not use any atomic embedded word representations, but rather use only character-based word representations.",
"This choice was made so as to encourage the model not to rely on language-specific vocabulary.",
"Additionally, we concatenate a pre-trained language embedding to each word representation.",
"In our formulation, each word w is represented as LST M c ( w ) + ~l , where LST M c ( w ) is the final states of a character bi-LSTM running over the characters in a word and ~l is an embedded language representation.",
"We use a two-layer deep bi-LSTM with 100 units in each layer, and 100-dimensional character embeddings.",
"The rest of the hyper-parameters are the same as in Plank et al. (2016).",
"13 9 Discussion and Conclusions The language embeddings obtained by fine-tuning on linguistic tasks at various typological levels were found to include typological information somehow related to the task at hand.",
"This lends some support to the theoretical foundations of such representations, in that it shows that it is possible to learn something akin to a vector of continuous Chomskyan parameters (Chomsky, 1993).",
"The features which are encoded depend to a large degree on the task at hand.",
"The language embeddings resulting from the phonological tasks did not encode phonological properties in the sense of WALS features, whereas the pre-trained ones did.",
"13 Both modified systems are included in the Supplements, and will be made publicly available.",
"The morphological language embeddings were found to encode morphological features, and the PoS language embeddings were similarly found to encode word order features.",
"A promising result is the fact that we were able to predict typological features for unseen language families.",
"That is to say, without showing, e.g., a single Austronesian training instance to the k-NN classifier, typological features could still be predicted with high accuracies.",
"This indicates that we can predict typological features with language embeddings, even for languages for which we have no prior typological knowledge.",
"Table 6 contains a comparison of the top five and bottom five feature prediction accuracies for the ASJP task.",
"14 In the case of the phonologically oriented ASJP task it is evident that the embeddings still encode something related to phonology, as four out of five best features are phonological.",
"The changes in the features encoded in language embeddings are relatively monotonic.",
"Features are either learnt, forgotten, or remain static throughout training.",
"This indicates that the language representations converge towards a single optimum.",
"Training language embeddings in the task of multilingual language modelling has been found to reproduce trees which are relatively close matches",
"14 See the Supplement for the remaining tasks.",
"to more traditional genealogical trees ( Ostling and Tiedemann, 2017).",
"We show a similar analysis considering pre-trained and PoS fine-tuned embeddings, and it is noteworthy that fine-tuning on PoS tagging in this case yielded a tree more faithful to genealogical trees, such as those represented on glottolog.org .",
"Figure 4 shows an example of this, in which a language modelling objective places English with Romance languages.",
"This makes sense, as the English lexicon contains a large amount of Romance vocabulary.",
"When fine-tuning on PoS tagging, however, English is placed among the Germanic languages, as it shares more syntactic similarities with these.",
"Another striking result in terms of language similarities in fine-tuned language embedding spaces was found in the G2P task.",
"We here found that the phonological differences between some otherwise similar languages, such as Norwegian Bokmal and Danish, were accurately encoded.",
"We would also like to thank Robert Ostling for giving us access to the pre-trained language embeddings.",
"Isabelle Augenstein is supported by Eu-rostars grant Number E10138.",
"We further gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research."
] | [
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"objective",
"abstain",
"abstain",
"other",
"method",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"objective",
"objective",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"objective",
"other",
"other",
"abstain",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Phonemes are defined by their relationship to words: changing a phoneme changes the word.",
"Learning a phoneme inventory with lit-tle supervision has been a longstanding challenge with important applications to underresourced speech technology.",
"In this paper, we bridge the gap between the linguistic and statistical definition of phonemes and propose a novel neural discrete representation learning model for self-supervised learning of phoneme inventory with raw speech and word labels.",
"Given the availability of phoneme segmentation and some mild conditions, we prove that the phoneme inventory learned by our approach converges to the true one with an exponentially low error rate.",
"Moreover, in experiments on TIMIT and Mboshi benchmarks, our approach consistently learns a better phoneme-level representation and achieves a lower error rate in a zero-resource phoneme recognition task than previous state-of-the-art self-supervised representation learning algorithms.",
"Thanks to recent developments in self-supervised speech representation learning (van den Oord et al., 2017, 2019; Chorowski et al., 2019; Baevski et al., 2020), there is new hope for the development of speech processing systems without the need for full textual transcriptions.",
"Supervised speech processing systems for tasks such as automatic speech recognition (ASR) rely on a large amount of textual transcriptions, but self-supervised systems can be applied to under-resourced languages in which such annotation is either scarce or unavailable.",
"A key task of the self-supervised system is to learn a discrete representation.",
"While it is possible to dis-cretize the speech solely on the basis of its acoustic properties, a more desirable discrete representation would serve as a bridge from the continuous acoustic signal toward higher-level linguistic structures such as syntax and semantics.",
"Such a representation would make it possible to repurpose algorithms developed for written languages so that they could be used for unwritten languages in tasks such as speech translation and spoken language understanding.",
"Words are the obvious choice for a discrete, semantic-driven speech representation, but a practical speech understanding system needs at least thousands of words; learning them in an unsupervised manner may be challenging.",
"Phonemes may be a more learnable representation.",
"According to the standard linguistic definition, phonemes are closely linked to words: Definition 1. (Linguistic definition of phonemes (Swadesh, 1934)) Phonemes are the smallest units in speech such that given a correct native word, the replacement of one or more phonemes by other phonemes (capable of occurring in the same position) results in a native word other than that intended, or a native-like nonsense word.",
"For example, the sentences he th inks and he s inks differ by exactly one phoneme but have very different meaning.",
"The optimal compactness of a phoneme inventory as specified in the definition leads to three advantages.",
"First, learning phonemes requires lower sample complexity than learning words since the number of distinct phonemes is much smaller than the number of distinct words in a language.",
"Second, the phonemes are much more abundant and more balanced in classes than words within a speech corpus, which makes sample complexity less of an issue when learning phonemes.",
"Third, phonemes are more generalizable in the sense that knowing the phoneme inventory allows the learner to memorize previously unseen words as sequences of phonemes, and, having memorized them, to begin seeking clues to their meaning.",
"Motivated by the semantic-driven definition of phonemes, we formulate the problem of learning a phoneme inventory as a self-supervised learning problem, where a small amount of semantic su-8027 pervision is available.",
"The required supervision specifies which acoustic segments are instances of the same word, and which are instances of different words.",
"Such supervision might be acquired in a naturalistic setting by asking native speakers to name objects in a set of standardized images, as is commonly done in primary education classrooms, or by asking for the translations of common words in a second language, a common baseline approach in dialectology and historical linguistics (Swadesh, 1952).",
"Our contributions are threefold: (1) we propose a computationally tractable definition of phoneme that is almost equivalent to the linguistic definition.",
"(2) We propose a finite-sample objective function for learning phoneme-level units and prove that when the phoneme segmentation is available and under mild conditions, the empirical risk minimizer (ERM) of this objective will find the correct phoneme inventory with exponentially low error rate.",
"(3) We propose a novel neural network called information quantizer to optimize the proposed objective, which achieve state-of-the-art results in the phoneme inventory discovery task on the TIMIT and low-resourced Mboshi benchmarks with much less training data than previous approaches.",
"Due to the challenge of learning phonemes, early works on unsupervised speech representation learning (Park and Glass, 2005; Lee and Glass, 2012; Ondel et al., 2016) focus on learning speech segments sharing similar acoustic properties, or phones , without taking into account the meaning of the speech they are part of.",
"There are two main approaches in this direction.",
"One approach is to learn discrete phone-like units without any textual labels by modeling phone labels of the speech segments as latent variables.",
"In particular, (Park and Glass, 2005; Jansen et al., 2010) first detect segments with recurring patterns in the speech corpus followed by graph clustering using the similarity graph formed by the segments.",
"(Lee and Glass, 2012; Ondel et al., 2016; Kamper et al., 2016) develop probabilistic graphical models to jointly segment and cluster speech into phone-like segments.",
"An extension to the latent variable approach is to introduce additional latent variables such as speaker identity (Ondel et al., 2019) or language identity (Yusuf et al., 2020) and develop mechanisms to disentangle these variables.",
"With the advance of deep learning, neural network models have also been proposed to learn unsupervised phone-level representation either by first learning a continuous representation (Chung et al., 2019; Feng et al., 2019; Nguyen et al., 2020) followed by off-line clustering, or by learning a discrete representation end-to-end with Gumbel softmax (Eloff et al., 2019b; Baevski et al., 2020) or vector-quantized variational autoencoder (VQ-VAE) (van den Oord et al., 2017; Chorowski et al., 2019; Baevski et al., 2019).",
"However, codebooks learned by the neural approaches tend to be much larger than the number of phonemes (Baevski et al., 2020), leading to low scores in standard phoneme discovery metrics.",
"The second approach utilizes weak supervision such as noisy phone labels predicted by a supervised, multilingual ASR system trained on other languages.",
"Along this direction, early works (Schultz and Waibel, 1998; Lf et al., 2009; Swietojanski et al., 2012) have showed that phonetic knowledge gained from one language can be leveraged to develop ASR systems for another language using an HMM-based or DNN-HMM hybrid approach.",
"Instead of using phone labels, (Stuker et al., 2003) explores the use of articulatory features as supervision for the multilingual ASR.",
"Recently, ( Zelasko et al., 2020a,b; Feng et al., 2021a) systematically study the performance of zero-shot crosslingual ASR on 13 languages trained with international phonetic alphabet (IPA) tokens and found that the system tends to perform poorly on unseen languages.",
"Instead, (Feng et al., 2021b) is able to discover phone-like units by clustering bottleneck features (BNF) from a factorized time-delay neural network (TDNN-f) trained with phone labels predicted by a crosslingual ASR (Feng et al., 2021a).",
"Several works have since shifted focus toward the more challenging phoneme discovery problem by formulating it as a self-supervised learning problem where the semantics of the speech are known, such as from translation, phoneme-level language models or other sensory modalities such as vision.",
"(Jansen, 2013) has studied the use of pairwise word identity labels for training phoneme discovery models based on Gaussian mixture models (GMM); (Harwath and Glass, 2019) analyzes the hidden layers of a two-branch neural network trained to retrieve spoken captions with semantically related images and finds strong correlation between segment representation and 8028 phoneme boundaries.",
"(Harwath et al., 2020) adds hierarchical vector quantization (VQ) layers in the same retrieval network and is able to find a much smaller codebook than the unsupervised neural approach (Baevski et al., 2020), and achieve high correlation with the phoneme inventory.",
"(Godard et al., 2018; Boito et al., 2019) has studied the possibility of learning semantic units using an attention-based speech-to-text translation system, though the units appear to correlate more with words.",
"Works on unsupervised speech recognition (Chen et al., 2019) attempt to learn to recognize phonemes by leveraging the semantic information from a phoneme language model unpaired with the speech, typically by matching the empirical prior and posterior distributions of phonemes either using cross entropy (Yeh et al., 2019) or adversarial loss (Chen et al., 2019; Baevski et al., 2021).",
"Such models, however, have a slightly different objective as they assume knowledge about the phoneme inventory of the language and instead tries to find the alignment between the speech and phonemes, rather than induce the phoneme inventory from scratch.",
"We use capital letters to denote random variables and lower-case letters to represent samples of random variables.",
"We use PX := P { X = x } to denote both probability mass and density functions of random variable X , depending on whether it is continuous or discrete.",
"Further, denote PY | X ( y | x ) := P { Y = y | X = x } as the true conditional probability distribution of random variable Y = y given random variable X = x .",
"The linguistic definition of phonemes can be rephrased as follows.",
"Define X to be the set of all physical acoustic segments that can ever be produced as instances of the phonemes of a given language.",
"Definition 1 can be phrased as follows: Two sequences of segments x = [ x 1 , , x T ] and x (cid:48) = [ x 1: t 1 , x (cid:48) t , x t +1: T ] , differing only in that x (cid:48) t (cid:54) = x t , are instances of different words, y (cid:48) (cid:54) = y , if and only if x (cid:48) t and x t are instances of different phonemes.",
"In order to design effective algorithms, we will work with a relaxation of this definition, which we call the statistical definition of phonemes.",
"Definition 2. (Statistical definition of phonemes) Let X be the set of all speech segments in a language, and let X be a random vector taking values in X and Y be a random variable representing the word of which X is one segment.",
"The phoneme inventory of a language is the minimal partition Z = { Z 1 , , ZK } of X (i.e., X = Kk =1 Z k , Z j Z k = , 1 j, k K ), such that if a speech segment pair ( x, x (cid:48) ) X 2 satisfies ( x, x (cid:48) ) Z 2 k for some k { 1 , , K } , then their conditional distributions satisfy PY | X = x = PY | X = x (cid:48) .",
"In other words, given only the knowledge that two acoustic sequences contain instances of the same phoneme, the resulting conditional distributions across possible word labels are the same.",
"The fundamental intuition of Definition 2 is that different phonemes have different distributions across the words of the language.",
"Two instances of the same phoneme, x and x (cid:48) , might have different likelihoods PX = x | Y and PX = x (cid:48) | Y , e.g., because of allophony; but their posteriors PY | X = x and PY | X = x (cid:48) cannot be different without violating Definition 1. The relationship between Definition 1 and Definition 2 is given by the following proposition, whose proof is in Appendix A.3.",
"Proposition 1. Let Z = Kk =1 Z k be a partition of X .",
"If, for all possible { PY | X = x s } s (cid:54) = t , for any spoken word x = [ x 1 , , x T ] , and for any segment pairs ( x t , x (cid:48) t ) Z 2 k , k { 1 , , K } , changing x t 8029 Figure 2: Network architecture of information quantizer to x (cid:48) t does not alter the identity of the word, i.e., arg max y PY | X 1: T ( y | x 1: t 1 , x (cid:48) t , x t +1: T ) = arg max y PY | X 1: T ( y | x ) , (2) but for any segment pairs x t Z k , x (cid:48)(cid:48) t Z l for k (cid:54) = l , changing x t to x (cid:48) t alters the identity of the word, i.e., arg max y PY | X 1: T ( y | x 1: t 1 , x (cid:48)(cid:48) t , x t +1: T ) (cid:54) = arg max y PY | X 1: T ( y | x ) , (3) then Z is a phoneme inventory from Definition 2. Define the phoneme assignment function z : X { 1 , , K } such that z ( x ) = k if x Z k .",
"Suppose a segment X is randomly chosen from X with probability distribution PX and its phoneme label is another random variable Z := z ( X ) , then by Definition 2, for any pair x, x (cid:48) X such that z ( x ) = z ( x (cid:48) ) , we have PY | X = x = PY | X = x (cid:48) = PY | Z = z ( x ) .",
"The phoneme inventory is thereby completely characterized by the phoneme label function z ( ) as well as the set of distributions associated with each class PY | Z .",
"Let z ( ) be the phoneme assignment function from Definition 2 and assume the size of the phoneme inventory is known to be K .",
"Given a training set D = { ( x ( i ) , y ( i ) ) } ni =1 , where each x ( i ) is an acoustic segment extracted from a spoken word, and each y ( i ) Y is the corresponding word label, a semantic-driven phoneme discovery (SPD) system tries to find an assignment function that minimizes the token error rate (TER) : PTER ( z ) := min P { z ( X ) (cid:54) = ( z ( X )) } , (4) where is the set of all permutations of length K , which is used because the problem is unsupervised and z ( ) is not available during training.",
"An assignment function z is said to achieve exact discovery if PTER ( z ) = 0 .",
"It can be easily shown that TER is equivalent to standard evaluation metrics for phoneme discovery such as normalized mutual information (NMI) (Yusuf et al., 2020; Harwath et al., 2020; Feng et al., 2021b) and token F1 (Dun-bar et al., 2017), as presented in Appendix A.2.",
"Thus, to provide guarantees for NMI and token F1, it suffices to provide a guarantee for TER.",
"We solve the SPD problem using a novel type of neural network called an information quantizer (IQ), depicted in Figure 2. An IQ ( , q ) QK consists of four main components: A pre-segmentation network, a speech encoder e 1 ( ) , a word posterior c 2 ( ) and a quantizer q : | Y | C = { Q 1 , , QK } , where [ 1 , 2 ] = and C is the distribution codebook and Q k 's are called the code distributions of q .",
"IQ performs phoneme discovery in three stages.",
"The pre-segmentation stage takes a raw speech waveform as input and extracts phoneme-level segments x = [ x 1 , , x T ] in a self-supervised fashion (Kreuk et al., 2020).",
"Afterwards, in the joint distribution learning stage, the speech encoder extracts phoneme-level representations e 1 ( x ) = [ e 1 ( x 1 ) , , e 1 ( x T )] before passing them into the word posterior network to estimate the distribution of word labels, Y , given the presence in the word of acoustic phonetic segment X = x : P Y | X = x t = c 2 ( e 1 ( x t )) , 1 t T. (5) 8030 Note that it is crucial that no recurrent connection exists between segments since our goal is to learn the probability of a word label given the presence of one phoneme segment.",
"Finally, in the quantization stage, the quantizer creates the phoneme inventory by assigning each segment x t an integer index via codeword assignment function z ( x t ) such that z ( x t ) = k if q ( P Y | X = x t ) = Q k .",
"The loss function that IQ minimizes has two goals: learn a good estimator for the conditional distribution PY | X and learn a good quantization function q ( ) .",
"The first goal is achieved by minimizing the cross entropy loss: LCE ( P n , ) := 1 n n (cid:88) i =1 log P Y | X ( y ( i ) | x ( i ) ) , (6) where P n is the empirical joint distribution.",
"The second goal is achieved by minimizing the KL-divergence between the estimated conditional distribution before and after quantization: LQ ( P n , , q ) := 1 n n (cid:88) i =1 DKL ( P Y | X = x ( i ) || q ( P Y | X = x ( i ) )) , (7) where P n := 1 n n (cid:88) i =1 x ( i ) P Y | X = x ( i ) is the smoothed version of the empirical distribution.",
"The final loss function of IQ for SPD is then: LIQ ( P n , , q ) := LCE ( P n , ) + LQ (cid:16) P n , , q (cid:17) , ( P 1 ) where > 0 is some hyperparameter set to approximately 1 for most experiments.",
"Further, we restrict q to be nearest-neighbor so that: q ( P ) = arg min Q k :1 k KDKL ( P || Q k ) .",
"(8) This restriction does not increase the loss ( P 1 ) and serves as a regularization during phoneme discovery, as shown in Appendix A.3.",
"We show that when the phoneme segmentation is available and under mild assumption, IQ is able to achieve exact discovery of phoneme inventory.",
"First, let us state the main assumptions of the paper.",
"Assumption 1. (boundedness of the density ratio) There exist universal constants C l < C u such that , q QK , ( x, y ) X Y , log PY | X ( y | x ) P Y | X ( y | x ) [ C l , C u ] , log PY | X ( y | x ) q ( P Y | X ( y | x )) [ C l , C u ] .",
"Assumption 2. (log-smoothness of the density ratio) There exists > 0 such that 1 , 2 , x, y X Y , (cid:12)(cid:12)(cid:12)(cid:12) log P 1 Y | X ( y | x ) P 2 Y | X ( y | x ) (cid:12)(cid:12)(cid:12)(cid:12) (cid:107) 1 2 (cid:107) .",
"Assumption 3. (realizability) There exists a nonempty subset such that P Y | X = PY | X , .",
"Assumption 4. The true prior of the phoneme inventory is known to be PZ ( z ) = 1 K , 1 z K .",
"The first two assumptions are similar to the ones in (Tsai et al., 2020).",
"Assumption 3 assumes that the true probability measure is within the function class, which combined with Assumption 1 requires the true distribution to share the same support as the estimated one.",
"However, such assumption can be relaxed so that DKL ( P Y | X || PY | X ) , for some small enough > 0 , which does not affect the essential idea behind our analysis and can be achieved by some rich class of universal ap-proximators such as neural networks (Hornik et al., 1989).",
"The last assumption ensures the inventory to be identifiable by assuming knowledge of the prior of the phoneme inventory.",
"before giving some intuitive explanation.",
"Theorem 1. Given Assumption 1-4, let the information quantizer ( , q ) with assignment function z be an empirical risk minimizer (ERM) of ( P 1 ): LIQ ( P n , , q ) = min ,q QKLIQ ( P n , , q ) .",
"For any (0 , 1] , with probability at least 1 , the cluster assignment function z of the ERM information quantizer q achieves PTER ( z ) = 0 if the sample size n satisfies:",
"x for some > 0 and DJS ( P || Q ) := 12 DKL (cid:16) P || P + Q 2 (cid:17) + 12 DKL (cid:16) Q || P + Q 2 (cid:17) is the Jensen-Shannon divergence.",
"The bound in Theorem 1 captures two main factors determining the sample complexity of exact phoneme discovery: the first factor is how close the word distributions of phonemes are from each other as measured by their Jensen-Shannon (JS) divergence, and the second factor is how hard it is for the training data to cover all the phonemes.",
"The theorem works essentially because ( P 1 ) can be viewed as an approximation of the mutual information between the codeword z ( X ) and word type Y , I ( z ( X ); Y ) .",
"Suppose P Y | X PY | X and let H ( | ) denotes conditional entropy, we have: LIQ ( P n , , q ) H ( Y | X ) + DKL ( PY | X || q ( PY | X )) I ( X ; Y ) + DKL ( PY | X || q ( PY | X )) = I ( z ( X ); Y ) , which is minimized if q ( PY | X ) = PY | z ( X ) .",
"Datasets We construct four training datasets consisting of spoken words only.",
"The vocabulary set with | Y | = 224 is selected from head words of noun phrases from the Flickr30kEntities dataset (Hodosh et al., 2010) that appear at least 500 times.",
"For the Flickr audio word dataset, spoken words in the vocabulary are extracted from Flickr audio dataset (Harwath and Glass, 2015).",
"For the Librispeech and TIMIT word dataset with | Y | = 224 , spoken words are extracted from Librispeech (Vassil et al., 2015) 460-hour train-clean TIMIT Token F1 NMI Boundary F1 (Yusuf et al., 2020) -40.1 0 .",
"subset, resulting in a dataset of about 6 hours and 0.1 hours; for Librispeech and TIMIT word dataset with | Y | = 524 and | Y | = 824 , we supplement the dataset with the speech for the top 300 frequent words and top 600 frequent words respectively (ex-cluding the visual words) in Librispeech, resulting in datasets of about 15 and 21 hours.",
"For Mboshi dataset, we found only about 20 actual words occur more than 100 times, so instead we use n -grams with either n 3 (all except uniand bi-grams) or n 2 (all except unigrams) that occur more than 100 times as words, resulting in a vocabulary size of 161 and 377 respectively.",
"Note that the amount of labeled data we need is much lower than previous works (Yusuf et al., 2020): around 30 hours, (Feng et al., 2021b): around 600 hours) and the vocabulary size used is much smaller than the total vocabulary size in the language.",
"More details of the sets can be found in Appendix B. We also test our 8032 models on two standard phoneme discovery benchmarks, which contain whole-sentence utterances with many words unseen during training.",
"The first dataset is TIMIT (Garofolo et al., 1993), an English corpus consisting of about 5 hours speech and Mboshi (Godard et al., 2017), which contains about 2.4 hours speech from a low-resource language.",
"For both datasets, we follow the split in (Yusuf et al., 2020), (Feng et al., 2021b) Baselines For phoneme discovery from segmented words, we compare our model (IQ) to four baselines.",
"The first two baselines use continuous representation: the CPC+k-means model performs k-means clustering on the segment-level CPC features, and the k-means model performs k-means clustering after the model is trained on the word recognition task.",
"The last two baselines use discrete representations: the Gumbel variational information bottleneck (Alemi et al., 2017) (Gumbel VIB) is a neural model with a Gumbel softmax (Jang et al., 2016) layer to approximate the codebook assignment function z ( ) , and we set = 0 .",
"001 and decay the temperature of the Gumbel softmax from 1 to 0 .",
"1 linearly for the first 300000 steps, keeping it at 0 .",
"1 afterwards, which works best in our experiments; the deterministic information bottleneck (DIB), a generalization of (Strouse and Schwab, 2016) for continuous feature variable X , which assumes the same deterministic relation between speech X and codebook unit Z as ours, but optimizes the models in a pipeline fashion (first the speech encoder and then the quantizer) by performing clustering on the learned conditional distributions.",
"The CPC features used are trained in a self-supervised fashion on the 960-hour LibriSpeech dataset and released by (Nguyen et al., 2020).",
"All models share the same speech encoder as IQ.",
"For the whole-sentence datasets, we compare our models to three phoneme discovery systems, namely, the unsupervised H-SHMM trained with multilingual speech (Yusuf et al., 2020), the ResDAVEnet-VQ (Harwath et al., 2020) with visual supervision and the TDNN-f system by (Feng et al., 2021b) trained with multilingual speech.",
"To study how well our model performs in extreme low-resource speech recognition compared to other neural speech representation learning models, we compare our models to wav2vec (Schneider et al., 2019), wav2vec 2.0 (Baevski et al., 2020) (small, trained on the 960-hour LibriSpeech), vq-wav2vec with Gumbel softmax and k-means as discretiza-tion strategies (Baevski et al., 2019), CPC (van den Oord et al., 2019) and VQ-CPC (van Niekerk et al., 2020), using the pretrained models released by the authors.",
"Implementation details of the baselines and our models are in Appendix C. Evaluation metrics Standard metrics are used such as NMI and boundary F1 for the quality of codebook and segmentation respectively with the same implementation as in prior works (Yusuf et al., 2020; Feng et al., 2021b).",
"In addition, token F1 (Dunbar et al., 2017) is also reported.",
"To examine the benefit of using our discovered phoneme inventory for low-resource speech recognition, we also evaluate using equivalent phone error rate (equiv. PER: Ondel et al. 2019).",
"This metric can be viewed as a proxy for phone error rate (PER) applicable beyond supervised speech recognizers.",
"The results on visual word-only test sets of Flickr audio and Librispeech are shown in Table 1. On both datasets, IQ outperforms both Gumbel VIB and DIB in terms of all metrics, especially on Flickr",
"audio, which has more phonemes than Librispeech and a larger test set.",
"Moreover, the performance of IQ is very robust to the codebook size, achieving good results even when the codebook size is very different from the size of the true phoneme inventory, suggesting our theory may be able to work with a relaxed Assumption 4. 6.2 Sentence-level Phoneme Discovery The results on TIMIT and Mboshi are shown in Table 2 and Table 3a respectively.",
"On TIMIT, our model is able to outperform the visually grounded baseline (Harwath et al., 2020) for all training vocabulary, and all three baselines for | Y | = 524 and | Y | = 824 with and without gold segmentation in terms of all three metrics.",
"Further, we also empirically verify the sample complexity bound in Theorem 1 as IQ performs better in Token F1 and NMI as the training vocabulary size get larger, which generally increases the JS divergence.",
"On Mboshi, IQ with CPC feature consistently outpeforms (Feng et al., 2021b) in token F1 and boundary F1, and IQ with CPC+BNF features consistently outperform (Feng et al., 2021b) in all three metrics under various level of word supervision.",
"The performance of our model on Mboshi compared with other neural self-supervised models are shown in Table 3b.",
"We found that IQ outperforms the best self-supervised model, CPC+k-means in equiv.",
"PER by 34% and 20% absolute with and without gold segmentation respectively and 12% absolute in terms of boundary F1, suggesting that IQ is able to learn consistent phoneme-like sequence useful for zero-resource or extremely low-resource speech recognition.",
"Effect of segmentation and codebook size The use of unsupervised phoneme segmentation deteriorates the NMI by about 18% and 28% absolute on TIMIT and Mboshi respectively for our models since the distributional property of phonemes does not apply exactly to non-phoneme segments.",
"On the other hand, in Appendix F we show that the quality of codeword assignments by IQs is very robust against varying codebook size, after experimenting with codebook size from 30 to 70 on TIMIT and Mboshi.",
"Multilingual and word supervision are complimentary In all vocabulary sizes, concatenating the multilingual BNF from (Feng et al., 2021b) to the CPC output representation from the segmental speech encoder in Figure 2 significantly improves token F1 and NMI to allow our best models to outperform baselines in all three metrics.",
"IQ codebook resembles true phonemes From Figure 3b, we observe that the codeword assignments by IQ correlates well with the actual phonemes, but tends to confuse the most between phonemes within the same manner class, such as nasals /n/ and /m/.",
"This is also confirmed by the t-SNE plot in Figure 3a, where the embeddings of most manner classes are well-clustered, except for related manner classes such as affricate and fricative, or glide and vowel.",
"Further, from the examples shown in Figure 4, we can see that IQ is not only better at grouping segments of the same 8034 phonemes but also at detecting segment boundaries than the baselines.",
"Also, across different examples, IQ assign the same codes to phonemes such as /a/ (31) and /s/ (7) more consistently than other models do.",
"Please check Appendix G for more speech examples.",
"Limitation While our theory predicts that with gold segmentation, the TER of IQ is asymptotically zero, in practice TER is nonzero due to the violation of Assumption 4, i.e., the phonemes are not uniformly distributed for languages such as Mboshi.",
"As a result, the model often discards information of the rare phonemes by merging them into a more frequent phoneme cluster.",
"Evidently, from Figure 5, where we use ABX accuracy (Munson and Gardner, 1950) to score how reliable the IQ codebook can identify segments of the same phoneme, we observe a strong correlation is observed between ABX accuracy and the frequency of the phonemes.",
"Motivated by the linguistic definition of phonemes, we propose information quantizer (IQ), a new neural network model for self-supervised phoneme discovery that can take advantage of word-level supervision.",
"We demonstrate in two ways that word-level supervision is beneficial for phoneme inventory discovery: theoretically, we prove that IQ can achieve zero token error rate asymptotically with the help of word labels; empirically, we show that IQ out-performs various speech-only algorithms in phoneme discovery tasks under both simulated (En-glish) and realistic (Mboshi) low-resource settings.",
"In the future, we would like to apply the discovered phoneme inventory to develop better low-resource speech technologies such speech translation and speech synthesis systems."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"abstain",
"abstain",
"other",
"method",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"objective"
] |
[
"Context-aware machine translation models are designed to leverage contextual information, but often fail to do so.",
"As a result, they inaccurately disambiguate pronouns and polysemous words that require context for resolution.",
"In this paper, we ask several questions: What contexts do human translators use to resolve ambiguous words?",
"Are models paying large amounts of attention to the same context?",
"What if we explicitly train them to do so?",
"To answer these questions, we introduce SCAT (Supporting Context for Ambiguous Transla-tions), a new English-French dataset comprising supporting context words for 14K translations that professional translators found useful for pronoun disambiguation.",
"Using SCAT , we perform an in-depth analysis of the context used to disambiguate, examining positional and lexical characteristics of the supporting words.",
"Furthermore, we measure the degree of alignment between the model's attention scores and the supporting context from SCAT , and apply a guided attention strategy to encourage agreement between the two.",
"1 1 Introduction There is a growing consensus in machine translation research that it is necessary to move beyond sentence-level translation and incorporate document-level context (Guillou et al., 2018; Laubli et al., 2018; Toral et al., 2018).",
"While various methods to incorporate context in neural machine translation (NMT) have been proposed (Tiedemann and Scherrer (2017); Miculicich et al. (2018); Maruf and Haffari (2018), inter alia ), it is unclear whether models rely on the right context that is actually sufficient to disambiguate dif-ficult translations.",
"Even when additional context 1 Our SCAT data and code for experiments are available at https://github.com/neulab/contextual-mt.",
"Human En Look after her a lot.",
"Okay.",
"Any questions?",
"Have we got her report?",
"Yes, it 's in the infirmary already Fr Dorlotez-la.",
"D'accord.",
"Vous avez des questions ?",
"On dispose de son rapport.",
"Oui, il est `a l'infirmerie.",
"Context-aware baseline En Look after her a lot.",
"Okay.",
"Any questions?",
"Have we got her report?",
"Yes, it 's in the infirmary already.",
"Fr Dorlotez-la.",
"D'accord.",
"Vous avez des questions ?",
"On dispose de son rapport ?",
"Oui, elle est dej`a `a l'infirmerie.",
"Model w/ attention regularization En Look after her a lot.",
"Okay.",
"Any questions?",
"Have we got her report?",
"Yes it 's in the infirmary already.",
"Fr Dorlotez-la.",
"D'accord.",
"Vous avez des questions ?",
"On dispose de son rapport ?",
"Oui, il est dej`a `a l'hopital Table 1: Translation of the ambiguous pronoun it.",
"is provided, models often perform poorly on evaluation of relatively simple discourse phenomena (Muller et al., 2018; Bawden et al., 2018; Voita et al., 2019b,a; Lopes et al., 2020) and rely on spurious word co-occurences during translation of polysemous words (Emelin et al., 2020).",
"Some evidence suggests that models attend to uninformative tokens (Voita et al., 2018) and do not use contextual information adequately (Kim et al., 2019).",
"To understand plausibly why current NMT models are unable to fully leverage the disambiguating context they are provided, and how we can develop models that use context more effectively, we pose the following research questions:",
"(i) In context aware translation, what context is intrinsically useful to disambiguate hard translation phenomena such as ambiguous pronouns or word",
"senses?;",
"(ii) Are context-aware MT models paying attention to the relevant context or",
"not?; and",
"(iii) If not, can we encourage them to do so?",
"To answer the first question, we collect annotations of context that human translators found useful in choosing between ambiguous translation options (3).",
"Specifically, we ask 20 professional translators to choose the correct French translation between two contrastive translations of an ambiguous word, given an English source sentence and the previous sourceand target-side sentences.",
"The translators additionally highlight the words they found the most useful to make their decision, giving an idea of the context useful in making these decisions.",
"We collect 14K such annotations and release SCAT (Supporting Context for Ambiguous Translations), the first dataset of human rationales for resolving ambiguity in document-level translation.",
"Analysis reveals that inter-sentential target context is important for pronoun translation, whereas intra-sentential source context is often sufficient for word sense disambiguation.",
"To answer the second question, we quantify the similarity of the attention distribution of context-aware models and the human annotations in SCAT (4).",
"We measure alignment between the baseline context-aware model's attention and human rationales across various model attention heads and layers.",
"We observe a relatively high alignment between self attention scores from the top encoder layers and the source-side supporting context marked by translators, however, the model's attention is poorly aligned with target-side supporting context.",
"For the third question, we explore a method to regularize attention towards human-annotated disambiguating context (5).",
"We find that attention regularization is an effective technique to encourage models to pay more attention to words humans find useful to resolve ambiguity in translations.",
"Our models with regularized attention outperform previous context-aware baselines, improving translation quality by 0 .",
"54 BLEU, and yielding a relative improvement of 14 .",
"7 % in contrastive evaluation.",
"An example of translations from a baseline and our model, along with the supporting rationale by a professional translator is illustrated in Table 1.",
"Neural Machine Translation.",
"Current NMT models employ encoder-decoder architectures (Bahdanau et al., 2015; Vaswani et al., 2017).",
"First, the encoder maps a source sequence x = ( x 1 , x 2 , ..., x S ) to a continuous representation z = ( z 1 , z 2 , ..., z S ) .",
"Then, given z , the decoder generates the corresponding target sequence y = ( y 1 , y 2 , ..., y T ) , one token at a time.",
"Sentence-level NMT models take one source sentence and generate one target sentence at a time.",
"These models perform reasonably well, but given that they only have intra-sentential context , they fail to handle some phenomena that require inter-sentential context to accurately translate.",
"Well-known examples of these phenomena include gender-marked anaphoric pronouns (Guillou et al., 2018) and maintenance of lexical coherence (Laubli et al., 2018).",
"Document-Level Translation.",
"Document-level translation models learn to maximize the probability of a target document Y given the source document X : P ( Y | X ) = (cid:81) Jj =1 P ( y j | x j , C j ) , where y j and x j are the j -th target and source sentences, and C j is the collection of contextual sentences for the j -th sentence pair.",
"There are many methods for incorporating context (6), but even simple concatenation (Tiedemann and Scherrer, 2017), which prepends the previous source or target sentences to the current sentence separated by a (cid:104) BRK (cid:105) tag, achieves comparable performance to more sophisticated approaches, especially in high-resource scenarios (Lopes et al., 2020).",
"Evaluation.",
"BLEU (Papineni et al., 2002) is most widely used to evaluate MT, but it can be poorly correlated with human evaluation (Callison-Burch et al., 2006; Reiter, 2018).",
"Recently, a number of neural evaluation methods, such as COMET (Rei et al., 2020), have shown better correlation with human judgement.",
"Nevertheless, common automatic metrics have limited ability to evaluate discourse in MT (Hardmeier, 2012).",
"As a remedy to this, researchers often use contrastive test sets for a targeted discourse phenomenon (Muller et al., 2018), such as pronoun anaphora resolution and word sense disambiguation, to verify if the model ranks the correct translation of an ambiguous sentence higher than the incorrect translation.",
"We first conduct a user study to collect supporting context that translators use in disambiguation, and analyze characteristics of the supporting words.",
"We recruited 20 freelance English-French translators on Upwork.",
"2 The translators are native speakers of at least one of the two languages and have a job success rate of over 90 %.",
"Each translator is given 400 examples with an English source sentence and two possible French translations, and one out of 5 possible context levels: no context ( 0+0 ), only the previous source sentence as context ( 1+0 ), only the previous target sentence ( 0+1 ), the previous source sentence and target sentence ( 1+1 ), and the 5 previous source and target sentences ( 5+5 ).",
"We vary the context level in each example to measure how human translation quality changes.",
"Translators provide annotations using the interface shown in Figure 1.",
"They are first asked to select the correct translation out of the two contrastive translations, and then highlight word(s) they found useful to arrive at their answer.",
"In cases where multiple words are sufficient to disambiguate, translators were asked to mark only the most salient words rather than all of them.",
"Further, translators also reported their confidence in their answers, choosing from not at all, somewhat, and very.",
"We perform this study for two tasks: pronoun anaphora resolution (PAR), where the translators are tasked with choosing the correct French gendered pronoun associated to a neutral English pronoun, and word sense disambiguation (WSD), where the translators pick the correct translation of a polysemous word.",
"PAR, and WSD to a lesser extent, have been commonly studied to evaluate context-aware NMT models (Voita et al., 2018; Lopes et al., 2020; Muller et al., 2018; Huo et al., 2020; Nagata and Morishita, 2020).",
"Pronoun Anaphora Resolution.",
"We annotate examples from the contrastive test set by Lopes et al. (2020).",
"This set includes 14 K examples from the OpenSubtitles2018 dataset (Lison et al., 2018) with occurrences of the English pronouns it and they that correspond to the French translations il or elle and ils or elles, with 3 .",
"5 K examples for each French pronoun type.",
"Through our annotation effort, we obtain 14K examples of supporting context for pronoun anaphora resolution in ambiguous translations selected by professional human translators.",
"Statistics on this dataset, SCAT : Supporting Context for Ambiguous Translations , are provided in Appendix A. Word Sense Disambiguation.",
"There are no existing contrastive datasets for WSD with a context window larger than 1 sentence, therefore, we automatically generate contrastive examples with context window of 5 sentences from OpenSubtitles2018 by identifying polysemous English words and possible French translations.",
"We describe our methodology in Appendix B. Quality.",
"For quality control, we asked 8 internal speakers of English and French, with native or bilingual proficiency in both languages, to carefully annotate the same 100 examples given to all professional translators.",
"We compared both the answer accuracies and the selected words for each hired translator against this control set and discarded submissions that either had several incorrect answers while the internal bilinguals were able to choose the correct answer on the same example, or that highlighted contextual words that the internal annotators did not select and that had little relevance to the ambiguous word.",
"Furthermore, among the 400 examples given to each annotator, the first hundred are identical, allowing us to measure the inter-annotator agreement for both answer and supporting context selection.",
"First, for answer selection on PAR, we find 91.0% overall agreement, with Fleiss' free-marginal Kappa = 0 .",
"82 .",
"For WSD, we find 85.9% overall agreement with = 0 .",
"72 .",
"This indicates a substantial inter-annotator agreement for the selected answer.",
"In addition, we measure the inter-annotator agreement for the selected words by calculating the F1 between the word selections for each pair of annotators given identical context settings.",
"For PAR, we obtain an average F1 of 0.52 across all possible pairs, and a standard deviation of PAR WSD Context Correct Not confident Correct Not confident 0 + 0 78.4 27.0 88.7 7.0 1 + 0 90.6 13.2 88.7 6.5 0 + 1 93.0 9.2 87.5 6.7 1 + 1 93.6 6.7 87.1 6.5 5 + 5 95.9 2.8 88.7 5.9 No ante 75.4 33.8 Has ante 96.0 3.3 Table 2: Percentage of correct and zero-confidence answers by varying context level.",
"0.12.",
"For WSD, we find an average F1 of 0.46 and a standard deviation of 0.12.",
"There is a high agreement between annotators for the selected words as well.",
"Table 2 shows the accuracy of answers and the percentage of answers being reported as not at all confident for each of the 5 different context levels.",
"For PAR, there is a large increase in accuracy and confidence when just one previous sentence in either language is provided as context compared to no context at all.",
"Target-side context also seems more useful than source: only target-side context gives higher answer accuracy than only source-side context, while the accuracy does not increase significantly by having both previous sentences.",
"For WSD, we do not observe significant differences in answer accuracy and confidence between the different context levels (Figure",
"2).The high answer accuracy with 0+0 context and the low rate of zero-confidence answers across all settings suggest that the necessary disambiguating information is often present in the intra-sentential context.",
"Alternatively, this may be partially due to characteristics of the automatically generated dataset itself: we found that some examples are misaligned so the previous sentences given as context do not actually correspond to the context of the current sentences, and therefore do not add useful information.",
"We also observe that translators tend to report a high confidence and high agreement in incorrect answers as well.",
"This can be explained by the tendency to select the masculine pronoun in PAR (Figure 3) or the prevailing word sense in WSD.",
"mine its gender, so we hypothesize that the antecedent is of high importance for disambiguation.",
"In our study, 72 .",
"4% of the examples shown to annotators contain the antecedent in the context or current sentences.",
"We calculate how answer accuracy and confidence vary between examples that do or do not contain the pronoun antecedent.",
"We find that the presence of the antecedent in the context leads to larger variations in answer accuracy than the level of context given, demonstrating the importance of antecedents for resolution.",
"Next, we examine the words that were selected as rationales from several angles.",
"Distance.",
"Figure 4 shows, for each context level, the number of highlighted words at a given distance (in sentences) from the ambiguous word.",
"For PAR, when no previous sentences are provided, there are as many selected words from the source as the target context.",
"With inter-sentential context, experts selected more supporting context from the target side.",
"One possible reason is that the source and target sentences on their own are equally descriptive to perform PAR, but one may look for the coreference chain of the anaphoric pronoun in the target context to determine its gender, whereas the same coreference chain in the source context would not necessarily contain gender information.",
"Moreover, the antecedent in the target side is more reliable 0+0 0+1 1+0 1+1 5+5 1+0 0+1 0+0 1+0 0+1 1+1 5+5 Figure 4: Sentence distance of the highlighted words for each context level for PAR and WSD.",
"than the source antecedent, since the antecedent can have multiple possible translations with different genders.",
"For WSD, we find that inter-sentential context is seldom highlighted, which reinforces our previous claim that most supporting context for WSD can be found in the current sentences.",
"Part-of-Speech and Dependency.",
"We use spaCy (Honnibal and Montani, 2017) to predict part-of-speech (POS) tags of selected words and syntactic dependencies between selected words and the ambiguous word.",
"In Table 3a, we find that nominals are the most useful for PAR, which suggests that human translators look for other referents of the ambiguous pronoun to determine its gender.",
"This is reinforced by Table 3b, where the antecedent of the pronoun is selected the most often.",
"For WSD, proper nouns and pronouns are not as important as nouns, probably because they do not carry as much semantic load that indicates the sense of the ambiguous word.",
"Determiners, verbs and adpositions are relatively important since they offer clues on the syntactic dependencies of the ambiguous word on other words as well as its role in the sentence, and modifiers provide additional PAR WSD POS Source Target Total Source Target Total noun 1550 2952 4502 3340 937 4277 proper noun 136 4056 4192 192 304 496 pronoun 2676 389 3065 119 204 323 verb 247 859 1106 406 367 773 determiner 499 498 997 800 1091 1891 auxiliary 310 136 446 78 85 163 adjective 105 319 424 291 226 517 adposition 65 172 237 283 481 764 conjunction 71 63 134 83 92 175 numeral 37 39 76 22 440 462 particle 37 8 45 61 0 61",
"(b) Dependency relation Table 3: Most frequent part-of-speech and dependency relation of highlighted words.",
"PAR Listen, these big celebrities, they do it different than anybody else?",
"Jesus, you know if they knew you had hidden cameras in that bedroom...",
"Dis-moi, ces DET vedettes ante NOUN , elles PRON le font differemment des autres?",
"Bon Dieu, tu te rends compte que si elles /ils savaient que cette chambre cache des cameras...",
"Your charm is only exceeded VERB by your frankness NOUN .",
"The main difference between PAR and WSD is that for PAR, the key supporting information is gender .",
"The source side does not contain explicit information about the gender of the ambiguous pronoun whereas the target side may contain other gendered pronouns and determiners referring to the ambiguous pronoun.",
"For WSD however, the key supporting information is word sense .",
"While the source and target sides contain around the same amount of semantic information, humans may prefer to attend to source sentences that express how the ambiguous word is used in the sentence.",
"Next, we study NMT models and quantify the degree to which the model's attention is aligned with the supporting context from professional translators.",
"We incorporate the 5 previous source and target sentences as context to the base Transformer (Vaswani et al., 2017) by prepending the previous sentences to the current sentence, separated by a (cid:104) BRK (cid:105) tag, as proposed by Tiedemann and Scherrer (2017).",
"To calculate similarity between model attention and highlighted context, we first construct a human attention vector human , where 1 corresponds to tokens marked by the human annotators, and 0 otherwise.",
"We compare this vector against the model's attention for the ambiguous pronoun for a given layer and head, model , across three metrics: Dot Product.",
"KL Divergence.",
"We compute the KL divergence between the model attention and the normalized human attention vector KL ( human-norm || model ( )) , where the normalized distribution human-norm is uniform over all tokens selected by humans and a very small constant (cid:15) elsewhere such that the sum of values in human-norm is equal to 1 .",
"Probes Needed.",
"We adapt the probes needed metric by Zhong et al. (2019) to measure the number of tokens we need to probe, based on the model attention, to find a token highlighted by humans.",
"This corresponds to the ranking of the first highlighted token after sorting all tokens by descending model attention.",
"The intuition is that the more attention the model assigns to supporting context, the fewer probes are needed to find a supporting token.",
"We compute the similarity between the model attention distribution for the ambiguous pronoun and the supporting context from 1,000 SCAT samples.",
"In Table 5, for each attention type we report the best score across layers and attention heads.",
"We also report the alignment score between a uniform distribution and supporting context for comparison.",
"We find that although there is a reasonably high alignment between encoder self attention and SCAT , decoder attentions have very low alignment with SCAT .",
"We hypothesize that by encouraging models to increase attention on words that humans use to resolve ambiguity, translation quality may improve.",
"We apply attention regularization to guide model attention to increase alignment with the supporting context from SCAT .",
"To do so, we append the translation loss with an attention regularization loss between the normalized human attention vector human-norm and the model attention vector for the corresponding ambiguous pronoun model : R ( ) = KL ( human-norm || model ( )) where is a scalar weight parameter for the loss.",
"During training, we randomly sample batches from SCAT with p = 0 .",
"2 .",
"We train with the standard MT objective on the full dataset, and on examples from SCAT , we additionally compute the attention regularization loss.",
"For document translation, we use the English and French data from OpenSubtitles2018 (Lison et al., 2018), which we clean then split into 16M training, 10,036 development, and 9,740 testing samples.",
"For attention regularization, we retain examples from SCAT where 5+5 context was given to the annotator.",
"We use 11,471 examples for training and 1,000 for testing.",
"We first train a baseline model, where the 5 previous source and target sentences serve as context and are incorporated via concatenation.",
"This baseline model is trained without attention regularization.",
"We explore two models with attention regularization: (1) attnreg-rand , where we jointly train on the MT objective and regularize attention on a randomly initialized model; (2) attnreg-pre , where we first pre-train the model solely on the MT objective, then we jointly train on the MT objective and regularize attention.",
"We describe the full setup in Appendix C. 5.4 Evaluation As described in Section 2, we evaluate translation outputs with BLEU and COMET.",
"In addition, to evaluate the direct translation quality of specific phenomena, we translate the 4,015 examples from Lopes et al. (2020) containing ambiguous pronouns that were not used for attention regularization, and we compute the mean word f-measure of translations of the ambiguous pronouns and other words, with respect to reference texts.",
"We also perform contrastive evaluation on the same subset of Lopes et al. (2020) with a context window of 5 sentences ( Big-PAR ) and the contrastive test sets by Bawden et al. (2018), which include 200 examples on anaphoric pronoun translation and 200 examples on lexical consistency/word sense disambiguation.",
"The latter test sets were crafted manually, have a context window of 1 sentence, and either the previous source or target sentence is necessary to disambiguate.",
"Context-aware models often suffer from error propagation when using previously decoded output tokens as the target context (Li et al., 2020a).",
"Therefore, during inference, we experiment with both using the gold target context ( Gold ) as well as using previous output tokens ( Non-Gold ).",
"Before delving into the main results, we note that we explored regularizing different attention vectors in the model (Appendix C.3) and obtain the best BLEU and COMET scores for attnreg-rand when regularizing the self-attention of the top encoder layer, cross-attention of the top decoder layer and self-attention of the bottom decoder layer.",
"For attnreg-pre , regularizing self-attention in the top decoder layer gives the best scores.",
"Thus, we use these as the default regularization methods below.",
"Moving on to the main results in Table 6, we observe that attnreg-rand improves on all metrics, which demonstrates that attention regularization is an effective method to improve translation quality.",
"Although attnreg-pre does not improve general translation scores significantly, it yields considerable gains in word f-measure on ambiguous pronouns and achieves some improvement over the baseline on contrastive evaluation on Big-PAR and PAR.",
"Attention regularization with supporting context for PAR seems to especially improve models on similar tasks.",
"The disparity between BLEU/COMET scores and targeted evaluations such as word f-measure and contrastive evaluation further suggests that general MT metrics are somewhat insensitive to improvements on specific discourse phenomena.",
"For both models with attention regularization, there are no significant gains in WSD.",
"As discussed in 3.4, WSD and PAR require different types of supporting context, so it is natural that regularizing attention using supporting context extracted from only one task does not always lead to improvement on the other.",
"We now investigate how models trained with attention regularization handle context differently compared to the baseline model.",
"How does attention regularization influence alignment with human rationales?",
"We revisit the similarity metrics from 4.2 to measure alignment with SCAT .",
"In Table 5, the dot product alignment over attention in the decoder increases with attention regularization, suggesting that attention regularization guides different parts of the model to pay attention to useful context.",
"Interestingly, although only the encoder self-attention was explicitly regularized for attnreg-pre , the model seems to also have learned better alignment for attention in the decoder.",
"Moreover, attnreg-pre generally has better alignment than attnreg-rand , suggesting that models respond more to attention regularization once it has been trained to perform translation.",
"Which attention is the most useful?",
"For each of attnreg-rand and attnreg-pre , we perform attention regularization on either the encoder self-attention, decoder cross-attention or decoder self-attention only.",
"In Table 7, encoder self-attention seems to contribute the most to both translation performance and contrastive evaluation.",
"Although attnreg-rand models achieve higher BLEU and COMET scores, attnreg-pre obtain higher scores on metrics targeted to pronoun translation.",
"Attention regularization seems to have limited effect on WSD performance, the scores vary little between attention types.",
"How much do models rely on supporting context?",
"We compare model performance on contrastive evaluation on SCAT when it is given full context, and when we mask either the supporting context, random context words with p = 0 .",
"1 , the source context, the target context, or all of the context.",
"In Table 8, we find that baseline varies little when the supporting context is masked, which again suggests that context-aware baselines do not use the relevant context, although they do observe a drop in contrastive performance when the source and all context are masked.",
"Models with attention regularization, especially attnreg-pre observe a large drop in contrastive performance when supporting context is masked, which indicates that they learned to rely more on supporting context.",
"Furthermore, for attnreg-pre , the score after masking supporting context is significantly lower than when masking all context, which may indicate that having irrelevant context can have an adverse effect.",
"Another interesting finding is that both baseline and attnreg-rand seem to rely more on the source context than the target context, in contrast to human translators.",
"This result corroborates prior results where models have better alignment with supporting context on attention that attends to the source (encoder self-attention and decoder cross-attention), and regularizing these attention vectors contributes more to translation quality than regularizing the decoder self-attention.",
"Most current context-aware NMT approaches enhance NMT by including sourceand/or target-side surrounding sentences as context to the model.",
"Tiedemann and Scherrer (2017) concatenate the previous sentences to the input; Jean et al. (2017); Bawden et al. (2018); Zhang et al. (2018) use an additional encoder to extract contextual features; Wang et al. (2017) use a hierarchical RNN to encode the global context from all previous sentences; Maruf and Haffari (2018); Tu et al. (2018) use cache-based memories to encode context; Miculicich et al. (2018); Maruf et al. (2019) use hierarchical attention networks; Chen et al. (2020) add document-level discourse structure information to the input.",
"While Maruf et al. (2019); Voita et al. (2018) also find higher attention mass attributed to relevant tokens in selected examples, our work is the first to guide model attention in context-aware NMT using human supervision and analyze its attention distribution in a quantitative manner.",
"However, recent studies suggest that current context-aware NMT models often do not use context meaningfully.",
"Kim et al. (2019) claim that improvements by context-aware models are mostly from regularization by reserving parameters for context inputs, and Li et al. (2020b) show that replacing the context in multi-encoder models with random signals leads to similar accuracy as using the actual context.",
"Our work addresses the above disparities by collecting human supporting context to regularize model attention heads during training.",
"Though attention is usually learned in an unsupervised manner, recent work supervises attention with word alignments (Mi et al., 2016; Liu et al., 2016), event arguments and trigger words (Liu et al., 2017; Zhao et al., 2018), syntactic dependencies (Strubell et al., 2018) or word lexicons (Zou et al., 2018).",
"Our work is closely related to a large body of work that supervises attention using human rationales for text classification (Barrett et al., 2018; Bao et al., 2018; Zhong et al., 2019; Choi et al., 2020; Pruthi et al., 2020).",
"Our work, however, is the first to collect human evidence for document translation and use it to regularize the attention of NMT models.",
"In this work, we collected a corpus of supporting context for translating ambiguous words.",
"We examined how baseline context-aware translation models use context, and demonstrated how context annotations can improve context-aware translation accuracy.",
"While we obtain promising results for context-aware translation by testing one method for attention regularization, our publicly available SCAT dataset could enable future research on alternative attention regularizers.",
"Moreover, our analyses demonstrate that humans rely on different types of context for PAR and WSD in English-French translation, similar user studies can be conducted to better understand the usage of context in other ambiguous discourse phenomena, such as ellipsis, or other language pairs.",
"We also find that regularizing attention using SCAT for PAR especially improves anaphoric pronoun translation, suggesting that supervising attention using supporting context from different tasks may help models resolve other types of ambiguities.",
"One caveat regarding our method for collecting supporting context from humans is the difference between translation , translating text from the input, and disambiguation , choosing between translation candidates.",
"During translation, humans might pay more attention to the source sentences to understand the source material, but during disambiguation, we have shown that human translators rely more often on the target sentences.",
"One reason why the model benefits more from increased attention on source may be because the model is trained and evaluated to perform translation, not disambiguation.",
"A future step would be to explore alternative methods for extracting supporting context, such as eye-tracking during translation (O'Brien, 2009).",
"We would like to thank Emma Landry, Guillaume Didier, Wilson Jallet, Baptiste Moreau-Pernet, Pierre Gianferrara, and Duy-Anh Alexandre for helping with a preliminary English-French translation study.",
"We would also like to thank Nikolai Vogler for the original interface for data annotation, and the anonymous reviewers for their helpful feedback.",
"This work was supported by the European Research Council (ERC StG Deep-SPIN 758969), by the P2020 programs MAIA and Unbabel4EU (LISBOA-01-0247-FEDER-045909 and LISBOA-01-0247-FEDER-042671), and by the Fundacao para a Ciencia e Tecnologia through contract UIDB/50008/2020."
] | [
"abstain",
"abstain",
"method",
"abstain",
"method",
"objective",
"method",
"method",
"abstain",
"abstain",
"other",
"objective",
"objective",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"result",
"objective",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"method",
"other",
"method",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Distant supervision (DS) is an important paradigm for automatically extracting relations.",
"It utilizes existing knowledge base to collect examples for the relation we intend to extract, and then uses these examples to automatically generate the training data.",
"However, the examples collected can be very noisy, and pose significant challenge for obtaining high quality labels.",
"Previous work has made remarkable progress in predicting the relation from distant supervision, but typically ignores the temporal relations among those supervising instances.",
"This paper formulates the problem of relation extraction with temporal reasoning and proposes a solution to predict whether two given entities participate in a relation at a given time spot.",
"For this purpose, we construct a dataset called WIKI-TIME 1 which additionally includes the valid period of a certain relation of two entities in the knowledge base.",
"We propose a novel neural model to incorporate both the temporal information encoding and sequential reasoning.",
"The experimental results show that, compared with the best of existing models, our model achieves better performance in both WIKI-TIME dataset and the well-studied NYT-10 dataset.",
"As an important technique to automatically complete the knowledge base and reduce labeling efforts, distant supervision (DS) for relation extraction has drawn much attention.",
"In DS, we align the entity pair ( head, tail ) from a triple (cid:104) head, rel, tail (cid:105) extracted from a huge knowledge base (e.g., Freebase, Wikidata) with sentences from free texts (e.g., Wikipedia, New York Times) Jian Li is the corresponding author.",
"to obtain the training examples, and the label of such an example is the corresponding relation rel",
"Therefore, DS can automatically create a set of training data for each entity pair.",
"However, the noisy training data problem (Riedel et al., 2010) significantly affects the performance of DS.",
"Therefore, most of the recent approaches (Riedel et al., 2010; Hoffmann et al., 2011; Zeng et al., 2015; Lin et al., 2016) follow a common assumption called the at-least-once assumption, which treats all aligned sentences of each entity pair as one training sample.",
"We refer to a sentence as an instance and all sentences aligned to one entity pair as a mention set in the following, respectively.",
"The models in previous work (Zeng et al., 2015; Lin et al., 2016; Luo et al., 2017) generally include two parts, encoding and fusion .",
"The former encodes each instance into a low-dimensional representation.",
"The latter combines representation of each instance.",
"Then, their combination is used to predict the relation.",
"Although the approaches mentioned above seem promising, they have the following limitations: 1. They all use a separate but identical encoding module among instances and introduce no difference temporally.",
"2. They only adopt single step of fusion and introduce no sentence-level reasoning.",
"We remark that the aforementioned approaches may be enough for the standard NYT-10 dataset (Riedel et al., 2010), because the dataset only extracts instances from New York Times corpus from the year 2005 to 2007 and consists of few mention sets with long time span.",
"However, as one can easily imagine, ignoring temporal information may cause inaccurate predictions, especially when a mention set has a long time span and some instances express different relations.",
"For example, suppose we want to predict the relation between Angelina Jolie and Brad Pitt (using Wiki-data).",
"The knowledge base contains a factual relation of spouse between them with the valid period from August 2014 to September 2016.",
"However, the extracted mention set contains instances about their marriage in 2014, as well as their divorce in 2016.",
"Because existing models do not encode temporal information, the relation they extract is likely to be the one with highest confidence.",
"In this example, their models may predict the relation of marriage since the instances may suggest a higher confidence for the relation of marriage.",
"But the correct prediction should be divorce.",
"As shown in the above example, we can see it is necessary to include temporal information in DS.",
"On the other hand, in fusion module, most existing work focused on denoising using methods such as attention or reinforcement learning.",
"We want to argue that a sentence-level reasoning can also be useful since there are instances which are not direct positive examples for the given relation, but can provide supporting evidence.",
"We call them remote instances.",
"Consider the Jolie-Pitt example again.",
"Suppose we are to predict their relation after their divorce.",
"The instances about their marriage also indirectly help to infer their divorce since marriage is the premise of divorce.",
"Hence, we need an algorithm that can incorporate temporal information and perform reasoning over remote instances.",
"In this paper, we address both limitations and extend the task to predict the relation of a particular entity pair at any specific time spot.",
"The problem can be formulated as a sequence labeling problem (See 2).",
"We propose a novel relation extraction architecture that can address both aforementioned limitations.",
"Our model follows the popular encoding-fusion architecture, but makes two crucial modifications.",
"Firstly, we introduce temporal encoding to model the temporal information among the instances in the encoding.",
"Secondly, we use the Memory Network (Sukhbaatar et al., 2015; Miller et al., 2016) to iteratively reason over temporally augmented encodings in the fusion part.",
"Moreover, we evaluate our model on the widely studied NYT-10 dataset (Riedel et al., 2010) and a new WIKI-TIME dataset.",
"The construction of WIKI-TIME is similar to that of the NYT-10 dataset except for two important differences.",
"One is that we only consider triples (cid:104) head, rel, tail (cid:105) with the valid period ( T 1 , T 2) .",
"For example, the triple (cid:104) Jolie, married, P itt (cid:105) has a valid period of (2014 . 08 , 2016 . 09) .",
"The other is that we extract contextual temporal information for each aligned instance.",
"We use Wikidata (Vrandecic and Krotzsch, 2014) as knowledge base and Wikipedia as free corpus.",
"Both automatic and manual evaluation are applied in the experiments.",
"The experimental results show that, compared with existing models, our model can achieve compara-ble/better performance in both WIKI-TIME and standard NYT-10 datasets.",
"We introduce a new task aiming to solve the problem of relation extraction with temporal information.",
"We propose a novel relation extraction architecture, which encodes both the temporal and semantic information and includes remote instances for temporal reasoning.",
"We construct a new WIKI-TIME dataset by aligning Wikidata to Wikipedia, which is specially designed for the task of relation extraction with temporal information.",
"The experiment results show that, compared with the best of existing models, our model achieves comparable/better performance both in WIKI-TIME dataset and stan-dart NYT-10 dataset.",
"Given two entities (cid:104) head, tail (cid:105) and their corresponding mention set S = { s 1 , s 2 , , s T } , where s i denotes the i th instance, the task aims to predict the probability for specific relation r of (cid:104) head, tail (cid:105) :",
"entities at any specific time spot.",
"Because modeling over any specific time spot is non-trivial, we relax the goal to predict the relation between two given entities at any mentioned time spots.",
"Note that we can infer the relations at other time spots using prediction at mentioned ones.",
"Formally, the relation r t at t ( t 1 , t 2 ] can be infered by r t 1 .",
"Therefore, we can model the problem as a sequence labeling problem with noisy inputs.",
"Given two entities, we collect the chronologically sorted list of its mention instances and the time spot associated with each instance .",
"We denote the list by S = { ( s 1 , t 1 ) , , ( s T , t T ) } , where ( s i , t i ) is the i th instance and the associated time spot.",
"Our goal is to predict the probability of relation r at time spot t i : P ( r t i | S = { ( s 1 , t 1 ) , , ( s T , t T ) } , t i ) .",
"Note that RNN-like models are not suitable for this sequence labeling problem, because the input sequence contains noisy sentences and lacks direct dependency between time steps.",
"We propose a neural model called TempMEM which models the sequence labeling problem by creating query sequence based on each mentioned time spot.",
"TempMEM also follows the encoding-fusion framework (Zeng et al., 2015; Lin et al., 2016; Luo et al., 2017).",
"However, we make two crucial mod-ifications to the original framework.",
"First, for the encoding part, we use time-aware encoding modules for instances instead of identical ones.",
"Second, we use the memory network to iteratively reason over instances, which makes use of remote instances.",
"For sentence encoding, here we apply the Convolutional Neural Network (CNN) and the Piecewise Convolutional Neural Network (PCNN) (Zeng et al., 2015).",
"Note that, since TempMEM has no preferance over specific sentence encoding, other encoding modules like word memory (Feng et al., 2017) or self-attention can also be used here.",
"The inputs of convolution layers are word embeddings concatenated with position features.",
"For a detailed description of the inputs, we refer the readers to (Zeng et al., 2015).",
"First, the convolution layer extracts local features with sliding window w over the input representation.",
"Formally, the convolution operates on the concatenation of the input representations X k : k + w of instance j with the shared parameters W c RD w and b c R 1 : o c,k = W c X k : k + w + b c , (3) where o c,k is the k th output of channel c .",
"Then, we use the piece-wise max-pooling layer.",
"It divides the outputs of filters into three parts { o c, 0: h , o c,h : t , o c,t : N } and performs max-pooling over each part: o c = [ max 0 k<h ( o c,k ) , max h k<t ( o c,k ) , max t k<N ( o c,k )] , (4) where h and t denote the indices of the head and tail entities, respectively.",
"The concatenation of the output of all channels c is considered as the convolutional representation of instance j : O j = [ o 1 , o 2 , , o C ] , (5) where C denotes the number of filters.",
"In order to introduce temporal priorities among instances, it is necessary to inject temporal information into the encoding part.",
"We want the temporal encoding to have the following characteristics: The temporal encodings should comply with the chronological order of instances.",
"The difference between two time spots determines the similarity between two temporal encodings.",
"Since directly encoding the time spot value leads to huge difference among mention sets of the dataset, we propose an approximate approach with PE encoding (Vaswani et al., 2017) based on the rank (i.e. position of an instance in a mention set with chronological order): P E ( j ) = (cid:40) sin ( j/ 10000 d/d m ) if d %2 = 0 cos ( j/ 10000 ( d 1) /d m ) if d %2 = 1 , (6) where j is the rank of instance s , d is the dimension, and d m is the dimension of temporal encoding.",
"Obviously, the PE encoding complies with the chronological order and the similarity between Figure 1: Overall TempMEM architecture two PE encodings (by dot product) is determined by their rank difference.",
"Then, we concatenate the corresponding temporal encoding with the convolutional features of instance j to form the final representation of each instance with a learnable scale factor : m j = [ O j ; P E ( j ) ] .",
"In the fusion part, we use the Memory Network to perform temporal reasoning among different instances.",
"Each encoded instance is considered to be a memory slot.",
"Then, we construct a time specific query and iteratively compute the weighted attention over all instances.",
"We detail the process in the following sections.",
"We construct each query with the guidance of the following intuition.",
"So, we construct our queries based on four key variables, ( relation, head, tail, t i ) .",
"Specifically, we combine the embeddings (pre-trained by TransE (Bordes et al., 2013)) of head and tail and project the combination through an affine matrix q RD e D r , where D e and D r denote the dimension of relation and entity embedding, respectively.",
"After the projection, we add the randomly initialized relation embedding.",
"The formal definition of a query is given below: q r = R r + ( E head + E tail ) q , (8) where R r RD r is the embedding of specific relation r and E RD e is the entity embedding.",
"Finally, we also concatenate the query with the same temporal encoding defined in 3.2.2 to obtain the i th query: q r,i = [ q r ; P E ( i ) ] .",
"In this part, we introduce how to use the queries perform temporal reasoning.",
"Two operations are involved, memory addressing and reading.",
"One of our key motivations is to consider the remote instances.",
"So, instead of using single step attention computation as in previous work (Lin et al., 2016; Luo et al., 2017; Ji et al., 2017), we perform an overall H steps of memory addressing and reading to obtain the final prediction.",
"Within each step (also called hop), we update the query value by adding the output of the previous step, which provides a gradual shift in attention.",
"Next, we introduce the whole process in detail.",
"Memory Addressing In addressing, we compute the similarity between the query vector q i,r and each candidate memory slot key K j .",
"Note that the encoding output m j is not in the same continuous space as the query vector.",
"So, we adopt linear projections to both memory keys: K j = A Th m j , (10) where A h RD m D r .",
"Then, we compute the similarity score and importance probability using the bilinear form, s i,j = q Ti,r W a K j , (11) p i,j = exp ( s i,j ) (cid:80) M j =1 exp ( s i, j ) , (12) where W a RD r D m is the model parameter to be learned and i, j are the indices of queries and memory slots.",
"As for the addressing step, it worths noting that the query and memory slots are both concatenated with temporal encodings.",
"If we define the embedding layer A as the identity matrix, each similarity score of a query-memory pair can be divided into two parts, s i,j = q Ti o j + 2 P E ( i ) T P E ( j ) .",
"Each query can automatically attend to instances with either close encoding representations or close temporal encodings.",
"This tradeoff also accords with our intuition, since the confidence of a relational factual statement decreases when the time span increases.",
"Memory reading The value of each memory slot, which is also projected by an affine matrix B RD m D r , is read by computing the weighted sum over all memory slots with the importance probability derived in the addressing step: q i = (cid:88) j p i,j V j , (14) where V j = B Th m j .",
"Iterative computation Here, we combine the above two operations as a single step for reasoning.",
"We use h [1 , H ] to denote a particular step, where H is the total step number.",
"To achieve a step-by-step reasoning, we update the next step query q h +1 with the summation of the current step output q h and the current query q h : q h +1 = q h + q h .",
"During training, we add dropout with probability p b at the final query step.",
"By combining the previous hop query and the output in this way, TempMEM can gain information from the last read output and shift addressing attention to remote instances.",
"Here we introduce the learning and optimization details of TempMEM.",
"We use query-level CrossEntropy loss as our objective function: J ( ) = N s (cid:88) s =1 T (cid:88) i =1 y t log p ( y t | S s , , t i ) , (17) where N s is the number of sets and T is the length of query sequence.",
"We use stochastic gradient descent (SGD) to minimize our objective function.",
"For the exploration of optimization, we add small white noise to the gradients (Neelakantan et al., 2015).",
"We also anneal the learning rate l by 0 < < 1 (i.e., l l ) for every epochs.",
"We evaluate our model on two datasets, the widely used NYT-10 dataset which is developed by (Riedel et al., 2010) and the WIKI-TIME dataset we created.",
"This dataset is generated by aligning Freebase entities to New York Times corpus (NYT) of years from 2005 to 2007.",
"There are 53 pre-defined relations including a particular relation NA which indicates no relation between head and tail .",
"The training data contains 522,611 sentences, 281,270 entity pairs, and 18,252 of them are relational facts.",
"The testing data contains 172,448 sentences, 96,678 entity pairs, and 1,950 of them are relational facts.",
"Similar to NYT-10, the WIKI-TIME dataset is also generated by aligning knowledge base entities to free corpus, except that we choose Wikidata and Wikipedia instead of Freebase and NYT news.",
"The motivation of creating WIKI-TIME is to generate a time aligned dataset that can support temporal reasoning.",
"Hence, we filter knowledge base entities that participate in relations with informative temporal features, such as start time, end time.",
"Besides, we tag the aligned sentences with their time expressions in contexts.",
"Then, we align the contextual time expressions with the valid period of each relation to achieve labeling.",
"For example, sentence like: On September 19, 2016, Jolie filed for divorce from Pitt, citing irreconcilable differences. is labeled with no relation (NA).",
"The dataset contains 57 relations.",
"The training set contains 97,616 sentences and 20,085 entity pairs.",
"The test set contains 39,990 sentences and 8,641 entity pairs.",
"2 4.2 Experiment Details Hyper-Parameter Settings For WIKI-TIME experiments, we construct query over each appeared time spot in the mention set.",
"On the other hand, for NYT-10 experiments, we adopt a single query without temporal encoding to compare results with other baseline methods since the dataset only contains one label for each mention set.",
"Among all experiments, we use 230 convolution kernels with windows size 3. The dropout probability p d is set to 0.5.",
"We try various max hops values H (from 1 to 5) to test how reasoning works in our model.",
"We train the models with 20 epochs and 50 epochs for NYT-10 dataset and WIKI-TIME dataset and report the best performance.",
"As for optimization step, we adopt SGD with gradient plus Gaussian noise with standard deviation of 0.01, which helps to better generalize.",
"Also, we apply gradient decay of rate ( = 0 . 5 ) over every = 10 epochs.",
"The learning rates for NYT-10 and WIKI-TIME experiments are set to 0.001 and 0.01, respectively.",
"With regard to inputs, we use 50-d Glove (Pen-nington et al., 2014) word embeddings pretri-aned on Wikipedia and Gigaword and 5-d pos-tion embedding.",
"The temporal encodings are either initialized with random 50-d vectors, which are learned during training, or set directly with PE.",
"For entity embeddings, we use the TransE (Bordes et al., 2013) entity vectors pretrained on Wikidata released by the OpenKE platform.",
"descending order (without NA relation) and compute the precision with threshold for each recall value.",
"Also, we report the P @ N values which indicate the precision over N predictions with the highest confidence scores.",
"To test the effect of iterative reasoning over instances, we implement the neural models proposed in previous work (Zeng et al., 2015; Lin et al., 2016), from the source code released by authors.",
"Since the previous models perform prediction in bag-level, the label is given by the latest relation appeared in KB.",
"As for our models, we fix the number of hops H = 2 and set the encoding to CNN.",
"3 The notations of the experiments are shown in Table 1. As shown in Figure 2 and Table 2, we have the following observations: (1) All TempMEM models achieve better performance compared with the previous neural models (CNNONE, CNN ATT).",
"3 The PCNN encoding is not used in WIKI-TIME dataset.",
"The detailed explanation is given in Appendix A .",
"Recall that the hop number is set to 2. This can be seen as an ablation experiment.",
"The results suggest that the remote instances can generally help relation extraction task.",
"(2) TempMEM + P clearly outperforms TempMEM + R, which proves that the properly chosen temporal encodings help the performance.",
"Note that, in the columns P@N 200 and P@N 300 of Table 2, we find that the pure TempMEM outperforms TempMEM + P and TempMEM + R. Based on the results in Table 3, their drop of performance comes from the noisy labeling problem of distant supervision.",
"Also, TempMEM can catch relation changes through the timeline of two entities.",
"We refer the readers to the case study in Appendix B .",
"Since the WIKI-TIME is distantly collected, we want to obtain a more precise view of how the models perform.",
"So, we apply the manual evaluation to verify our experimental results.",
"We randomly pick 200 mention sets in the test set of WIKI-TIME and ask two annotators to label the relation for each instance.",
"The annotation rule is to label the instance with the relation that can be inferred from the instance itself or previous instances.",
"As shown in Table 3, the manual evaluated F1 scores are basically consistent with the PR curves in Figure 2, which indirectly proves the WIKI-TIME's quality.",
"Also, we find that the TempMEM + P achieves the best performance and shows obvious advantages in both query-level and bag-level F1 scores over the naive TempMEM (i.e., with no temporal encodings).",
"This proves the effectiveness of our temporal encodings.",
"In this section, we discuss the effect of different number of hops in TempMEM.",
"We change the hop value from 1 to 5 and evaluate the precision and recall of our models in query-level.",
"The hyper-parameters are fixed.",
"The temporal encoding is set to PE and each model is trained for 30 epochs.",
"The results of the hop number experiment are depicted in Figure 3. From the results, we can observe that models show better performance with hop number 2 and 4. Most of the improvement of the model with hop number 4 resides in the recall range [0, 0.05], but the performance remains in the similar trend with other models in the recall range [0.05, 0.2].",
"In addition, we notice that the performance of the models fluctuates with the increase in the number of hops and the model with even hop number generally perform better than its predecessor, e.g. models with hop number 4 and 5. We believe that the reason might lie in the distribution of the hop distance between origin instance and useful remote instance.",
"In this section, we report our results on the well-studied NYT-10 dataset.",
"By evaluating our model in the NYT-10 dataset, our objective is to prove the power of reasoning among remote instances.",
"Note that, in the NYT-10 dataset, there is no temporal information for each instance, so we only use one query for each mention set and there's no",
"(a) Results with CNN encoding.",
"(b) Results with PCNN encoding.",
"Figure 4: Precison-Recall curve on NYT-10 .",
"Best viewed in color.",
"temporal encoding for each instance.",
"Also, we do not use the entity embedding for the NYT-10 experiments.",
"The results are shown in Figure 4. For both CNN and PCNN models, We can see that our models exceed the performance of all other models (CNN ATT, CNN ONE, CNN AVE, PCNN ATT, PCNN ONE, PCNN AVE) in the range of low recall values.",
"In the high recall range, our models also have results about the same as the best model among others.",
"This suggests that even without the temporal encoding, reasoning over remote instances is indeed useful in relation extraction task.",
"Distant supervision for relation extraction is an important, automatic method of completing knowledge base.",
"(Riedel et al., 2010) made the at-least-once assumption that led the distant supervision for relation extraction to multi-instance learning.",
"(Hoff-mann et al., 2011) and (Surdeanu et al., 2012) tried to model the task with a multi-instance, multi-label setting using the classical graph model.",
"Recently, some work focused on applying deep neural network to the DS task.",
"(Zeng et al., 2014) was the first trial to apply deep learning in relation extraction by solving a classification problem with fully supervised approach.",
"(Zeng et al., 2015) moved a step further and introduced the multi-instance learning paradigm by using only the most important instance to predict relation.",
"(Lin et al., 2016; Liu et al., 2017; Ji et al., 2017) improved the previous work by adding attention mechanism to instances and automatically reducing the weights of noisy instances.",
"There are other approaches that tried to reduce the impact of noise in DS by using active learning (Sterckx et al., 2014) and reinforcement learning (Feng et al., 2018).",
"However, previous work focused on denoising but ignored the exploration of the remote instances and introduced no temporal information to support relation extraction.",
"In this paper, we introduce temporal information into DS and combine it with the memory network to perform reasoning over instances.",
"(Feng et al., 2017) also used the memory network in the context of distant supervision.",
"Their work performed word-level and relation-level reasoning to model the importance of words and dependency between relations.",
"Their motivation was to gain better sentence encoding and relation modeling, while in our model, we apply the sentence-level memory network to understand the inference process among instances.",
"Also, this work is related to temporal relation extraction.",
"(Dligach et al., 2017) was the first approach to use neural models for temporal relation extraction.",
"(Tourille et al., 2017) used a Bi-LSTM to identify narrative containers between events and time expressions.",
"(Cheng and Miyao, 2017) introduced dependency paths and used a common-root to solve the cross-sentence dependency.",
"(Meng and Rumshisky, 2018) leveraged the Neural Turing Machine to enhance context-awareness of the temporal relation extraction model.",
"Previous work in temporal relation extraction was dedicated to event timelining and focused on dealing with relations between event and time expression.",
"In constrast, our model aims to solve general entity to entity relation extraction by instance-level temporal reasoning based on a coarse-grained timelining.",
"Another related research aspect is the temporal slot filling (TSF) task introduced in knowledge base population (Surdeanu, 2013; Ji et al., 2014).",
"Distant supervision approaches (Garrido et al., 2013; Cucerzan and Sil, 2013; Sil and Cucerzan, 2014) (from Freebase and Wikipedia infoboxes) are widely applied to address the lack of supervising data.",
"(Reinanda and De Rijke, 2014) performed the prior-sampling on distant supervision data to correct the mismatch of distributions.",
"The TSF task is similar to the task defined in this paper in the sense that TSF also asks the model to identify the start and end date of one knowledge triple (cid:104) head, rel, tail (cid:105) .",
"The difference is that the TSF task's objective is to predict validity period given the head , rel and tail , while in our setting, we predict rel between two entities in different periods.",
"In this paper, we formulate the task of distant supervision with temporal relation reasoning by modeling it as a sequence labeling problem.",
"Following the DS paradigm, we created a new dataset called WIKI-TIME which is designed for the temporal relation extraction task.",
"In addition, we propose an encoding-fusion model, TempMEM, which combines both encoding and reasoning temporally.",
"At each computation step, our model can automatically attend information with either close representation or close temporal encoding.",
"In experiments, we compare our model with the existing methods in both the well-known NYT-10 dataset and our WIKI-TIME dataset.",
"Both automatic and manual evaluation are applied in the experiments.",
"The experimental results show that our model not only realizes better performance in relation extraction by introducing instance-level reasoning but also improves the reasoning by bringing the temporal information in.",
"In the future, we plan to further explore the effect of different encoding modules like Bi-LSTM or self-attention and try to model temporal information with more sophisticated choices.",
"The research is supported in part by the National Basic Research Program of China Grant 2015CB358700, the National Natural Science Foundation of China Grant 61822203, 61772297, 61632016, 61761146003, and a grant from Mi-crosoft Research Asia.",
"Also, we want to thank the very kind comments from anonymous reviewers, which help improve this paper."
] | [
"abstain",
"method",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"objective",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"result",
"objective",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"method",
"objective",
"method",
"objective",
"objective",
"abstain",
"method",
"abstain",
"result",
"objective",
"other",
"other"
] |
[
"Machine translation (MT) is currently evaluated in one of two ways: in a monolingual fashion, by comparison with the system output to one or more human reference translations, or in a trained crosslingual fashion, by building a supervised model to predict quality scores from human-labeled data.",
"In this paper, we propose a more cost-effective, yet well performing unsupervised alternative SentSim : relying on strong pretrained multilingual word and sentence representations, we directly compare the source with the machine translated sentence, thus avoiding the need for both reference translations and labelled training data.",
"The metric builds on state-of-the-art embedding-based approaches namely BERTScore and Word Mover's Distance by incorporating a notion of sentence semantic similarity.",
"By doing so, it achieves better correlation with human scores on different datasets.",
"We show that it outperforms these and other metrics in the standard monolingual setting (MT-reference translation), a well as in the source-MT bilingual setting, where it performs on par with glass-box approaches to quality estimation that rely on MT model information.",
"Automatically evaluating machine translation (MT) as well as other language generation tasks has been investigated for decades, with substantial progress in recent years due to the advances of pretrained contextual word embeddings.",
"The general goal of such evaluation metrics is to estimate the semantic equivalence between the input text (e.g. a source sentence or a document) and an output text that has been modified in some way (e.g. a translation or summary), as well as the general quality of the output (e.g. fluency).",
"As such, by definition metrics should perform some forms of input-output comparisons.",
"*Contributed equally to this work.",
"However, this direct comparison has been proven hard in the past because of the natural differences between the two versions (such as different lan-guages).",
"Instead, evaluation metrics have resorted to comparison against one or more correct outputs produced by humans, a.k.a. reference texts, where comparisons at the string level are possible and straightforward.",
"A multitude of evaluation metrics have been proposed following this approach, especially for MT, the application we focus on in this paper.",
"These include the famous BLEU (Papineni et al., 2002) and METEOR (Banerjee and Lavie, 2005) for machine translation, ROUGE (Lin, 2004) for summarization, and CIDER (Vedantam et al., 2014) for image captioning.",
"These traditional metrics are based on simple-word, n-gram matching mechanisms or slight relaxations of these (e.g. synonyms) which are computationally efficient, but suffer from various limitations.",
"In order to overcome the drawbacks of the traditional string-based evaluation metrics, recent work (Williams et al., 2018; Bowman et al., 2015; Echizen'ya et al., 2019; Cer et al., 2017; Echizen'ya et al., 2019) has investigated metrics that perform comparisons in the semantic space rather than at the surface level.",
"Notably, applications of Word Mover's Distance (WMD; Kusner et al., 2015), such as WMDo (Chow et al., 2019), VIFIDEL (Madhyastha et al., 2019) and moverscore (Zhao et al., 2019), which compute similarity based on continuous word embeddings using pretrained representations.",
"These have been shown to consistently outperform previous metrics on various language generation evaluation tasks.",
"However, these metrics have two limitations:",
"(i) they still rely on reference outputs, which are expensive to collect, only cover one possible correct answer, and do not represent how humans do evaluation;",
"(ii) they are bag-of-embeddings approaches which capture semantic similarity at the token level, but are unable to capture the meaning of the sentence or text as a whole, including correct word order.",
"In this paper, focusing on MT, to address these limitations we first posit that evaluation can be done by directly comparing the source to the machine translation using multilingual pretrained embeddings, such as multilingual BERT, avoiding the need of reference translations.",
"We note that this is different from quality estimation (QE) metrics (Specia et al., 2013; Shah et al., 2015) , which also compare source and machine translated texts directly, but assume an additional step of supervised learning against human labels for quality.",
"Second, we introduce Sentence Semantic Similarity (SSS) , an additional component to be combined with bag-of-embeddings distance metrics such as BERTScore.",
"More specifically, we propose to explore semantic similarity at the sentence level based on sentence embeddings (Sellam et al., 2020; Reimers and Gurevych, 2020; Thakur et al., 2020) and linearly combine it with existing metrics that use word embeddings.",
"By doing so, the resulting metrics have access to word and compositional semantics, leading with improved performance.",
"The combination is a simple weighted sum, and does not require training data.",
"As a motivational example, consider the case in Table 1, from the WMT-17 Metrics task (Zhang et al., 2019).",
"When faced with MT sentences that contain a negated version of the reference (MT3 and MT4), token-level metrics such as BERTScore and WMD cannot correctly penalize these sentences since they match representations of words in both versions without a full understanding of the semantics of the sentences.",
"As a consequence, they return a high score for these incorrect translations, higher than the score for correct paraphrases of the reference (MT1 and MT2).",
"Sentence similarity, on the other hand, correctly captures this mismatch in meaning, returning relatively lower scores for Translations 3 and 4.",
"However on their own they may be too harsh, since the remaining of the sentence has the same meaning.",
"The combination of these two metrics (last column) balances between these two sources of information and, as we will later show in this paper, has higher correlation with human scores.",
"1. We investigate and show the effectiveness of linearly combining sentence-level semantic similarity with different metrics using token-level semantic similarity.",
"The resulting combined metric, SentSim, consistently achieves higher Pearson Correlation with human judgements of translation quality than both word and sentence similarity alone.",
"2. We show, for the first time, that these metrics can be effective when comparing system-generated sentences directly against source sentences, in a crosslingual fashion.",
"3. Our SentSim metric outperforms existing metrics on various MT datasets in monolingual and crosslingual settings.",
"Various natural language generation tasks, including machine translation, image captioning, among others, produce sentences as output.",
"These are evaluated either manually or automatically by comparison against one or multiple reference sentences.",
"A multitude of metrics have been proposed for the latter, which perform comparisons at various granularity levels, from characters to words to embedding vectors.",
"The goal of such metrics is to replace human judgements.",
"In order to understand how well they fare at this task, metrics are evaluated by how similar their scores are to human assigned judgements on held-out datasets.",
"For absolute quality judgements, Pearson Correlation is the most popularly used metric for such a comparison (Mathur et al., 2020).",
"Recent studies have showed that the new generation of automatic evaluation metrics, which instead of lexical overlap are based on word semantics using continuous word embedding, such as BERT (Devlin et al., 2019), ElMo (Peters et al., 2019), XLNet (Yang et al., 2019) or XLM-Roberta (Conneau et al., 2019), have significantly higher Pearson Correlation with the human judgements when comparing reference sentences with system generated sentences.",
"Zhang et al. (2019) introduce BERTscore , an automatic evaluation metric based on contextual word embeddings, and tests it for text generation tasks such as machine translation and imaging captioning, using embeddings including BERT, XLM-Roberta, and XLNet (more details in Section 3.2).",
"Mathur et al. (2019) present supervised and unsupervised metrics which are based on BERT embeddings for improving machine translation evaluation.",
"Zhao et al. (2019) introduce moverscore , a metric which generates high-quality evaluation BERTScore SSS SSS + BERTScore REF We have made a complete turnaround.",
"results on a number of text generation tasks including summarization, machine translation, image captioning, and data-to-text generation, using BERT embeddings.",
"Clark et al. (2019) present semantic metrics for text summarization based on the sentence mover's similarity and ELMo embeddings.",
"Chow et al. (2019) introduce a fluency-based word mover's distance ( WMDo ) metric for machine translation evaluation using Word2Vec embeddings (Mikolov et al., 2013).",
"Lo (2019) presents Yisi , a unified automatic semantic machine translation quality evaluation and estimation metric using BERT embeddings.",
"There is also a bulk of work on metrics that take a step further to optimize their scores using machine learning algorithms trained on human scores for quality (Sellam et al., 2020; Ma et al., 2017).",
"They often perform even better, but the reliance on human scores for training, in addition to reference translations at inference time, makes them less applicable in practice.",
"A separate strand of work that relies on contextual embeddings is that of Quality Estimation (Moura et al., 2020; Fomicheva et al., 2020a; Ranasinghe et al., 2020; Specia et al., 2020).",
"These are also trained on human judgements of quality, but machine translations are compared directly to the source sentences rather than against reference translations.",
"In addition to embeddings for words, embeddings for full sentences have been shown to work very well to measure semantic similarity.",
"These are extracted using Transformer models that are specifi-cally trained for capturing sentence semantic meanings using BERT, Roberta, and XLM-Roberta embeddings (Reimers and Gurevych, 2019; Reimers and Gurevych, 2020; Thakur et al., 2020) and provide state-of-art performance pretrained models for many languages.",
"1 In this paper, we take inspiration from these lines 1 https://github.com/UKPLab/sentence-transformers of previous works to propose unsupervised metrics that combine word and sentence semantic similarity and show that this can be effective for both MT-reference and source-MT comparisons.",
"In this section, we first describe in more detail the metrics that we have used in our experiments, namely semantic sentence cosine similarity, WMD and BERTScore.",
"Then we present our simple approach to linearly combine these metrics.",
"Kusner et al. (2015) presents word mover's distance (WMD) metric, a special case of Earth mover's distance (Rubner et al., 2000), computing the semantic distance between two text documents by aligning semantically similar words and capturing the word traveling flow between the similar words utilizing the vectorial relationship between their word embeddings (Mikolov et al., 2013).",
"WMD has been proven to generate consistently high-quality results for the tasks of measuring text similarity and text classification (Kusner et al., 2015).",
"A text document is represented as a vector D , where each element is denoted as the normalized frequency of a word in the document such that: D = [ d 1 , d 2 , ...., d n ] T (1) where d i = c i / (cid:80) nj c j and c i is the frequency that the i th word which appears c i times in a given text document.",
"Assuming there are two given words from different text document denoted as i and j , then the euclidean distance in the embedding x i and x j for the two words is defined as: c ( i, j ) = (cid:107) x i x j (cid:107) 2 (2) where c ( i, j ) is defined as the \"word traveling cost\" from x i in one document to x j in the other document.",
"Now, assuming there are two documents, one is the source document denoted as A where the word i belongs to, and another one is the target document denoted as B where the word j belongs to.",
"A flow matrix T is defined in which every element is denoted as T ij , suggesting the number of times the word i in document A moves to the word j in document B .",
"Then, the value of the flow matrix is normalized based on the total count of words in the vocabulary such that: (cid:88) j T ij = d i , (cid:88) i T ij = d j (3) The semantic distance calculated by WMD can be then defined as follows: WMD = min T 0 n (cid:88) i,j =1 T ij c ( i, j ) (4) WMD, or the semantic distance between two text documents, can thus be computed by optimizing values in the flow matrix T .",
"In other words, WMD corresponds to the minimal semantic distance to move semantically similar words (via their embeddings) from one text document to another.",
"BERTScore (Zhang et al., 2020) is designed to evaluate semantic similarity between sentences in the same language, namely a reference sentence and a machine-generated sentence.",
"Assume a reference sentence is denoted as x = ( x 1 , ...., x k ) and a candidate sentence is denoted as x = ( x 1 , ...., x k ) , BERTScore uses contextual embeddings such as BERT (Devlin et al., 2019) or ELMo (Peters et al., 2019) to represent word tokens in the sentences.",
"It finds word matchings between the reference and candidate sentence using cosine similarity, which can be optionally reweighted by the inverse document frequency scores (IDF) of each word.",
"BERTScore matches each word token x in reference sentence to the closest word token x in candidate sentence for computing recall, and matches each word token x in candidate sentence to the closest word token x in reference sentence for computing precision.",
"It combines recall with precision to produce an F1 score.",
"However, only recall is used for evaluation in most cases, which is defined as follows: RBERT = 1 | x | (cid:88) x i x max x j x x Ti x j (5) In essence, BERTScore can be viewed as a hard word alignment given a pair of sentences using contextual embeddings, in which each word is aligned to one other word, the closest in the embedding space according to the cosine distance between their vectors.",
"A commonly used method to measure sentence similarity is using the cosine distance between the two vectors summarizing the sentences:",
"where and are the vectors representing the two sentences.",
"The higher the value obtained through cosine similarity between two sentences vectors based on the pretrained sentence representation (Reimers and Gurevych, 2019; Reimers and Gurevych, 2020; Thakur et al., 2020), the stronger their similarity.",
"In order to bring the notion of semantic similarity to token similarity metrics, we combine the sentence cosine similarity using semantically fine-tuned sentence embedding with the metrics using contextual word embeddings.",
"Assume that the generated score from sentence level metric is denoted as A , the value generated from token-level metric is denoted as B and the gold truth from human judgement is denoted as S .",
"Our combination metric, namely SentSim, is as follows: SentSim ( A, B ) = w 1 e A + w 2 e B (7) where A and B are normalized to the range between 0 and 1, w 1 and w 2 are the weights given to two metric scores.",
"If metric B is negatively correlated with S , i.e., if it is a distance metric like WMD, we give it e 1 B .",
"We use e B for similarity metrics such as SSS and BERTScore.",
"In equation 7, we apply exponential for similarity scores as the linear addition of two similarity scores ( A + B ) in lower-order leads to a large variance and inconsistency in the correlation with human scores.",
"Lower-order models are too simple to fit the relationship between similarities.",
"Therefore, a non-linear model is required to project these similarities into higher-order ( A n + B n ).",
"Given the Taylor Series Expansion (Abramowitz and Stegun, 1965) of exponential function, we can get a factorial average of two similarities from lower-order to higher-order as follows: SentSim ( A, B ) = (cid:88) n =1 w 1 A n + w 2 B n n !",
"Our final metric is given in Equation 8, which follows from Equation 7 using Taylor Series Expansion.",
"This was also shown in (Kilickaya et al., 2017; Clark et al., 2019), which convert distance scores to similarities by using the exponential function.",
"In Section 5, we report experiments with two linear metric combinations: SSS + WMD and SSS + BERTScore , where we give equal weight to each metric ( w 1 = w 2 = 0.5).",
"We have also investigated the linear combination between Sentence Mover's Distance (Zhao et al., 2019) and token-level metrics, but the performance is poorer than SSS, so we only show results in the Appendix A.1.",
"In this section, we describe two types of experimental scenarios, monolingual and crosslingual evaluation, as well as the three datasets and pretrained embeddings we used.",
"The first evaluation setting we experimented with is the standard monolingual evaluation task scenario (MT-REF), which takes reference sentences and machine generated sentences in the same language as input.",
"The second one is the crosslingual evaluation task scenario (SRC-MT), which directly assesses the similarity between source sentences and machine generated sentences in different languages.",
"We compute our combined metrics for each task scenario separately.",
"We use various datasets with absolute human judgements from recent evaluation campaigns.",
"Multi-30K (Elliott et al., 2016) is a multilingual (English-German (en-de) and English-French (en-fr)) image description dataset.",
"We use the 2018 test set, in which each language pair contains more than 2K sentence tuples, including source sentences, reference sentences, machine generated sentences, and the corresponding human judgement scores in an (0-100) continuous range.",
"Therefore, this dataset can be used for both crosslingual and monolingual task scenarios.",
"WMT-17 (Bojar et al., 2017) is a dataset containing multiple language pairs from the WMT News Translation task used for segment-level system evaluation in the Metrics task.",
"We used all seven to-English datasets: German-English (de-en), Chinese-English (zh-en), Latvian-English (lv-en), Czech-English (cs-en), Finnish-English (fi-en), Russian-English (ru-en), Turkish-English (tr-en) and two from-English datasets: English-Russian (en-ru), English-Chinese (en-zh).",
"Each language has 560 sentence tuples, where each tuple has a source sentence, a reference sentence and multiple system generated sentences, in addition to a human score varying from 0 to 100.",
"WMT-17 can be used in both monolingual and crosslingual evaluation task scenarios, and is our main experimental data.",
"More recent WMT Metrics task datasets do not report metrics results using absolute judgements, but rather convert these into pairwise judgements.",
"While such relative judgements are useful to assess metrics ability to rank different MT systems, they are not applicable to assess metrics in their ability to estimate quality in absolute terms, which are what we are interested in.",
"WMT-20 (Fomicheva et al., 2020b) is the dataset used in the WMT20 quality estimation task, where participants are expected to directly predict the translation quality between source sentences and machine generated sentences without using reference sentences.",
"This dataset has seven language pairs: Sinhala-English (si-en), Nepalese-English (ne-en), Estonian-English (et-en), English-German (en-de), English-Chinese (en-zh), Romanian-English (ro-en), Russian-English (ru-en).",
"We use the test set, witwhere each language pair contains 1K tuples with source and machine generated sentences, as well as human judgements in the 0-100 range.",
"Therefore, with this dataset we can only perform crosslingual evaluation.",
"For each language model, we consider embeddings at the token level and sentence level individually and in combination.",
"In our experiments, Roberta-Large and XLM-Roberta-Base for monolingual and crosslingual assessments respectively.",
"because the former significantly outperforms the latter (Conneau et al., 2019), as also shown by Reimers and Gurevych (2020) for crosslingual semantic textual similarity (STS) tasks (Cer et al., 2017).",
"For a fair comparison with previous metrics like WMD 0 , we replaced their original embeddings with XLM-Roberta-Base embeddings.",
"For the semantic sentence embedding, we used XLM-Roberta-Base embeddings from Sentence Transformer, which were trained on SNLI (Bow-man et al., 2015) + MultiNLI (Williams et al., 2018) and then fine-tuned on the STS benchmark training data.",
"These sentence embeddings have been shown to provide good representations of the semantic relationship between two sentences, but they had not yet been tested for machine translation evaluation.",
"Without using semantic embeddings, the performance of SSS is not consistent across different languages pairs given our experimental datasets (see Appendix A.1).",
"XLM-Roberta-Large embeddings are not used in our experiments because they are not available in the pre-trained Sentence Transformer package yet.",
"For monolingual word and semantic sentence embeddings we use the Roberta-Large model, which has shown the best performance with BERTScore (Zhang et al., 2019).",
"The evaluation results are presented in this section.",
"Our code and data can be found on github 2 .",
"From Table 2, we can observe the Pearson correlation results of our metrics by comparing the source sentences with machine translated sentences using both single metrics and their combinations in the Multi-30K dataset.",
"The result reveals that SSS + WMD outperforms all individual metrics and the other combined metrics.",
"It is clear that SSS is better than both WMD and BERTScore, with WMD outperforming BERTScore in this specific crosslingual task.",
"In Table 3, the benefit of SSS becomes even more evident.",
"It again outperforms WMD and BERTScore, with BERTScore also significantly outperforming WMD in this case.",
"Moreover, SSS + BERTScore showed the best and more stable performance for all language pairs in the WMT-17 dataset.",
"This can be clearly visualised for en-lv as an example in Figure 1, where we plot metric scores in the Y axis against human scores in the X axis.",
"We believe the differences in the performance of the combined metric in the Multi-30K and WMT17 datasets happens because the sentence length differs significantly in these datasets: sentences in Multi-30K have on average 12-14 words, much shorter than those in the WMT-17 dataset.",
"Because WMD optimizes the word alignment globally for the whole sentence, instead of optimizing word alignment locally like BERTScore, the performance of WMD is better than BERTScore when sentence length is shorter, but it becomes a harder optimization problem when the sentence MT-REF Metrics de-en zh-en fi-en lv-en ru-en cs-en tr-en Avg BLEU 0.366 0.440 0.444 0.321 0.413 0.344 0.441 0.396 METEOR 0.460 0.557 0.631 0.450 0.525 0.480 0.596 0.528 MEANT 2.0 0.565 0.639 0.687 0.586 0.607 0.578 0.596 0.608 WMD o (Word2Vec) 0.531 0.595 0.689 0.505 0.562 0.513 0.561 0.565 WMD o (BERT) 0.546 0.623 0.710 0.543 0.585 0.531 0.637 0.596 WMD 0.730 0.769 0.827 0.736 0.733 0.698 0.770 0.752 BERTScore 0.745 0.775 0.833 0.756 0.746 0.710 0.751 0.759 SSS 0.612 0.653 0.730 0.703 0.700 0.622 0.654 0.668 SSS + WMD 0.755 0.779 0.847 0.781 0.786 0.731 0.781 0.780 SSS + BERTScore 0.770 0.785 0.860 0.792 0.796 0.746 0.782 0.790 Table 4: Pearson Correlation with human scores for the WMT-17 dataset (to English) with Roberta-Large in the MT-REF setting.",
"length is long.",
"This may explain why the performance of SSS + WMD is better than that of SSS + BERTScore in Multi-30K but lower than that of SSS + BERTScore in the WMT-17 dataset.",
"SSS also outperforms WMD and BERTScore in the WMT-20 dataset, as Table 5 shows.",
"SSS + BERTScore reaches the best performance in three out of seven language pairs and is the best metric in comparison with BERTScore or WMD alone.",
"The metrics that outperform SSS + BERTScore for three language pairs require multiple passes of the neural machine translation decoder to score or generate multiple translations (D-TP and D-Lex-Sim, respectively), or require supervised machine learning (Leaderboard baseline).",
"In the machine generated sentence to reference sentence case, as Table 2 shows, SSS + WMD achieves the best result in the monolingual Multi-30K tasks for both German to German and French to French using XLM-Roberta-Base embeddings.",
"However, for other datasets in this standard setting where we compare sentences in a monolingual fashion, as we can observe from Table 4 for the WMT-17 dataset, SSS + BERTScore is the best metric.",
"The reason for the differences is again likely to be the sentence lengths in the two datasets.",
"If taken independently, the performance of SSS is not as good here as that of WMD or BERTScore.",
"The two variants of the combined metrics still outperform any metric on their own, and reach the best performance results in this dataset.",
"It can also be observed from Table 4 that WMD o with Word2Vec is far behind than that with BERT embedding or BERTScore SSS SentSim E1 REF The food tastes good.",
"our WMD with Roberta-Large.",
"It indicates that the importance of using the pretrained contextual embedding as the representation of tokens.",
"A visual example of correlation plots can be seen in Figure 2 for the en-lv language pair again.",
"Generally, the metrics' performances in the case of SRC-MT are much lower than in the MT-REF setting.",
"This can be attributed to the embeddings used.",
"First, the models' embeddings are not the same in these two cases.",
"In the case of MT-REF, monolingual embeddings are used, which are known to be stronger; however these cannot be used in the case of SRC-MT evaluation, where crosslingual embeddings are used instead, which have been trained on more than 100 languages.",
"Also, the way the crosslingual embeddings were generated does not rely on specific alignments or mappings between tokens or sentences in different languages, which can make them suboptimal.",
"Second, the size of pretrained model for the case of MT-REF (Roberta-Large) is much larger than that of SRC-MT (XLM-Roberta-Base).",
"As previously mentioned, pre-trained semantic sentence embeddings using XLM-Roberta-Large are not available, so we instead provide a comparison with Roberta-Base for the MT-REF case with WMT-17 in Section 5.5 to show the impact of model size.",
"Since both XLM-Roberta-Base and Roberta-Large have multiple layers, selecting a good layer or combination of layers is important for WMD and BERTScore.",
"Here we use the WMT-17 dataset to study these representation choices.",
"The Pearson Correlation of WMD with human judgement scores for the SRC-MT setting by specific XLM-Roberta-Base's layers is shown in Figure",
"3. Se-Figure 2: Comparing BERTScore and SSS + BERTscore for lv-en in WMT-17 MT-REF case.",
"lecting Layer 9 as the token embeddings for XLM-Roberta-Base leads to the best average Pearson Correlation among 9 language pairs in this SRC-MT setting.",
"For Roberta-Large, in Figure 4 we study the performance of different layers using the WMT17 dataset in the MT-REF setting.",
"Among the 24 Metrics de-en zh-en fi-en lv-en ru-en cs-en tr-en Avg WMD 0.667 0.743 0.818 0.693 0.705 0.663 0.744 0.719 BERTScore 0.683 0.740 0.818 0.693 0.707 0.675 0.718 0.719 SSS 0.612 0.655 0.705 0.680 0.642 0.599 0.644 0.648 SSS + WMD 0.718 0.767 0.832 0.755 0.736 0.703 0.764 0.754 SSS + BERTScore 0.728 0.767 0.843 0.755 0.744 0.717 0.758 0.759 Table 7: Pearson Correlation with human scores for WMT-17 dataset with Roberta-Base in the MT-REF setting (to English).",
"output layers, the best layer seems to be 17.",
"This is inline with the results described in (Zhang et al., 2019), where the best layer for Roberta-Large to use in BERTScore is also found to be layer 17.",
"For illustration purposes, Table 6 shows a few cases where SSS performs better than token-level metric because it adds the notion of sentence meaning and where, as a consequence, SentSim performs better (examples E1 and E2).",
"It also show cases where SSS is too sensitive to semantic changes (example E3).",
"SSS also performs well in the SRC-MT case (example E4).",
"Here, the second machine translation has very different and incorrect word order, and the token-level metric (BERTScore) has very low performance compared to SSS, but both token-level and SSS metrics capture the incorrect word order.",
"The combined metric (SentSim), therefore, is very robust.",
"To analyse the impact of pre-trained embeddings, Table 7 shows the performance of Roberta-Base in the case of WMT-17 MT-REF.",
"As with the general trend in NLP, this confirms that stronger embeddings (Roberta-Large, Table 4) lead to better performance.",
"The same trend was observed for the other test sets.",
"In this paper, we propose to combine sentence-level and token-level evaluation metrics in an unsupervised way.",
"In our experiments on a number of standard datasets, we demonstrate that this combination is more effective for MT evaluation than the current state-of-the-art unsupervised token-level metrics, substantially outperforming these as well as sentence-level semantic metrics on their own.",
"The sentence level metric seems to capture higher-level or compositional semantic similarity, which complements the token-level semantic similarity information.",
"We also show that this combination approach can be applied both in the standard monolingual evaluation setting, where machine translations are compared to reference translations, and in a crosslingual evaluation setting, where reference translations are not available and machine translations are directly compared with the source sentences.",
"In future work, we will aim to improve the crosslingual metric and explore other types of multilingual embeddings for better mapping across different languages.",
"Lucia Specia was supported by funding from the Bergamot project (EU H2020 grant no. 825303)."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"result",
"objective",
"other"
] |
[
"We investigate the use of NLP as a measure of the cognitive processes involved in storytelling, contrasting imagination and recollection of events.",
"To facilitate this, we collect and release HIPPOCORPUS , a dataset of 7,000 stories about imagined and recalled events.",
"We introduce a measure of narrative flow and use this to examine the narratives for imagined and recalled events.",
"Additionally, we measure the differential recruitment of knowledge attributed to semantic memory versus episodic memory (Tulving, 1972) for imagined and recalled storytelling by comparing the frequency of descriptions of general commonsense events with more specific realis events .",
"Our analyses show that imagined stories have a substantially more linear narrative flow, compared to recalled stories in which adjacent sentences are more disconnected.",
"In addition, while recalled stories rely more on autobiographical events based on episodic memory, imagined stories express more commonsense knowledge based on semantic memory.",
"Finally, our measures reveal the effect of narrativization of memories in stories (e.g., stories about frequently recalled memories flow more linearly; Bartlett, 1932).",
"Our findings highlight the potential of using NLP tools to study the traces of human cognition in language.",
"When telling stories, people draw from their own experiences (episodic knowledge; Conway et al., 1996, 2003) and from their general world knowledge (semantic knowledge; Bartlett, 1932; Oatley, 1999).",
"For example, in Figure 1 (top), a recalled story about a birth will likely recount concrete events from that day, relying heavily on the author's episodic memory (Tulving, 1972).",
"On the Research conducted during an internship at Microsoft Research.",
"other hand, an imagined story about a wedding (Figure 1, bottom) will largely draw from the author's commonsense knowledge about the world (Kintsch, 1988; Graesser et al., 1981).",
"We harness neural language and commonsense models to study how cognitive processes of recollection and imagination are engaged in storytelling.",
"We rely on two key aspects of stories: narrative flow (how the story reads) and semantic vs. episodic knowledge (the types of events in the story).",
"We propose as a measure of narrative flow the likelihood of sentences under generative language models conditioned on varying amounts of history.",
"Then, we quantify semantic knowledge by measuring the frequency of commonsense events (from the ATOMIC knowledge graph; Sap et al., 2019), and episodic knowledge by counting realis events (Sims et al., 2019), both shown in Figure 1. We introduce HIPPOCORPUS , 1 a dataset of 6,854 diary-like short stories about salient life events, to examine the cognitive processes of remembering and imagining.",
"Using a crowdsourc-ing pipeline, we collect pairs of recalled and imagined stories written about the same topic.",
"By design, authors of recalled stories rely on their episodic memory to tell their story.",
"We demonstrate that our measures can uncover differences in imagined and recalled stories in HIPPOCORPUS .",
"Imagined stories contain more commonsense events and elaborations, whereas recalled stories are more dense in concrete events.",
"Additionally, imagined stories flow substantially more linearly than recalled stories.",
"Our findings provide evidence that surface language reflects the differences in cognitive processes used in imagining and remembering.",
"Additionally, we find that our measures can uncover narrativization effects, i.e., the transforming of a memory into a narrative with repeated recall or passing of time (Bartlett, 1932; Reyna and Brainerd, 1995; Christianson, 2014).",
"We find that with increased temporal distance or increased frequency of recollection, recalled stories flow more linearly, express more commonsense knowledge, and are less concrete.",
"We construct HIPPOCORPUS , containing 6,854 stories (Table 1), to enable the study of imagined and recalled stories, as most prior corpora are either limited in size or topic (e.g., Greenberg et al., 1996; Ott et al., 2011).",
"See Appendix A for additional details (e.g., worker demographics; A.2).",
"We collect first-person perspective stories in three stages on Amazon Mechanical Turk (MTurk), using a pairing mechanism to account for topical variation between imagined and recalled stories.",
"Stage 1: recalled.",
"We ask workers to write a 1525 sentence story about a memorable or salient event that they experienced in the past 6 months.",
"Workers also write a 23 sentence summary to be used in subsequent stages, and indicate how long ago the events took place (in weeks or months; TIMESINCEEVENT ).",
"Stage 2: imagined.",
"A new set of workers write imagined stories, using a randomly assigned summary from stage 1 as a prompt.",
"Pairing imagined stories with recalled stories allows us to control for variation in the main topic of stories.",
"Stage 3: retold past.",
"After 23 months, we contact workers from stage 1 and ask them to re-tell their stories, providing them with the summary of their story as prompt.",
"Post-writing questionnaire (all stages).",
"Immediately after writing, workers describe the main topic of the story in a short phrase.",
"We then ask a series of questions regarding personal significance of their story (including frequency of recalling the event: FREQUENCYOFRECALL ; see A.1 for questionnaire details).",
"Optionally, workers could report their demographics.",
"2 3 Measures To quantify the traces of imagination and recollection recruited during storytelling, we devise a measure of a story's narrative flow, and of the types of events it contains (concrete vs. general).",
"Inspired by recent work on discourse modeling (Kang et al., 2019; Nadeem et al., 2019), we use language models to assess the narrative linearity of a story by measuring how sentences relate to their context in the story.",
"We compare the likelihoods of sentences under two generative models (Figure 2).",
"The bag model makes the assumption that every sentence is drawn independently from the main theme of the story (represented by E ).",
"On the other hand, the chain model assumes that a story begins with a 2 With IRB approval from the Ethics Advisory Board at Microsoft Research, we restrict workers to the U.S., and ensure they are fairly paid ($7.59.5/h).",
"where the log-probability of a sentence s in a context C (e.g., topic E and history s 1: i 1 ) is the sum of the log-probabilities of its tokens w t in context: log p ( s | C ) = (cid:80) log p ( w t | C , w 0: t 1 ) .",
"We compute the likelihood of sentences using OpenAI's GPT language model (Radford et al., 2018, trained on a large corpus of English fic-tion), and we set E to be the summary of the story, but find similar trends using the main event of the story or an empty sequence.",
"We measure the quantity of episodic and semantic knowledge expressed in stories, as proxies for the differential recruitment of episodic and semantic memory (Tulving, 1972) in stories.",
"Realis Event Detection We first analyze the prevalence of realis events, i.e., factual and non-hypothesized events, such as I visited my mom",
"(as opposed to irrealis events which have not happened, e.g., I should visit my mom).",
"By definition, realis events are claimed by the author to have taken place, which makes them more likely to be drawn from from autobiographical or episodic memory in diary-like stories.",
"We train a realis event tagger",
"(using BERT-base; Devlin et al., 2019)",
"on the annotated literary events corpus by Sims et al.",
"(2019), which slightly outperforms the original author's models.",
"We provide further training details in Appendix B.1.",
"edge included explicitly in stories, as a proxy for semantic memory, a form of memory that is thought to encode general knowledge about the world",
"(Tulving, 1972).",
"While this includes facts about how events unfold",
"(i.e., scripts or schemas; Schank and Abelson, 1977; van Kesteren et al., 2012), here we focus on commonsense knowledge, which is also encoded in semantic memory",
"(McRae and Jones, 2013).",
"Given the social focus of our stories, we use the social commonsense knowledge graph ATOMIC",
"(Sap et al., 2019).",
"4 For each story, we first match possible ATOMIC events to sentences by selecting events that share noun chunks and verb phrases with sentences",
"(e.g., getting married",
"(cid:32)",
"PersonX gets married; Figure 1).",
"We then search the matched sentences' surrounding sentences for commonsense inferences",
"(e.g., be very happy",
"(cid:32)",
"happy; Figure 1).",
"We describe this algorithm in further detail in Appendix B.2.",
"In our analyses, the measure quantifies the number of story sentences with commonsense tuple matches in the two preceding and following sentences.",
"To supplement our analyses, we compute several coarse-grained lexical counts for each story in HIPPOCORPUS .",
"Such approaches have been used in prior efforts to investigate author mental states, temporal orientation, or counterfactual thinking in language (Tausczik and Pennebaker, 2010; Schwartz et al., 2015; Son et al., 2017).",
"We count psychologically relevant word categories using the Linguistic Inquiry Word Count (Pennebaker et al., 2015, LIWC;), focusing only on the cognitive processes, positive emotion, negative emotion, and I-word categories, as well as the ANALYTIC and TONE summary variables.",
"5 Additionally, we measure the average concreteness level of words in stories using the lexicon by Brys-baert et al. (2014).",
"We summarize the differences between imagined and recalled stories in HIPPOCORPUS in Table 2. For our narrative flow and lexicon-based analyses,",
"4 ATOMIC contains social and inferential knowledge about the causes (e.g., X wants to start a family) and effects (e.g., X throws a party, X feels loved) of everyday situations like PersonX decides to get married.",
"5 See liwc.wpengine.com/interpretingliwc-output/ for more information on LIWC variables.",
"we perform paired t -tests.",
"For realis and commonsense event measures, we perform linear regressions controlling for story length.",
"6 We Holm-correct for multiple comparisons for all our analyses (Holm, 1979).",
"Imagined stories flow more linearly.",
"We compare l , i.e., pairwise differences in NLL for sentences when conditioned on the full history vs. no history (density plot shown in Figure 3).",
"When averaging l over the entire story, we find that sentences in imagined stories are substantially more predictable based on the context set by prior sentences than sentences in remembered stories.",
"This effect is also present with varying history sizes (see Figure 5 in Appendix C.1).",
"Recalled stories are more event-dense.",
"As seen in Table 2, we find that imagined stories contain significantly fewer realis events (controlling for story length).",
"7 Imagined stories express more commonsense knowledge.",
"Using the same analysis method, our results show that sentences in imagined stories are more likely to have commonsense inferences in their neighborhood compared to recalled stories.",
"Lexical differences.",
"Lexicon-based counts uncover additional differences between imagined and recalled stories.",
"Namely, imagined stories are more self-focused (I-words), more emotional 6 Linear regressions use z -scored variables.",
"We confirm that our findings hold with multivariate regressions as well as when adding participant random effects in Appendix C.2.",
"7 Note that simply using verb count instead of number of realis events yields the opposite effect, supporting our choice of measure.",
"(TONE , positive and negative emotion) and evoke more cognitive processes.",
"8 In contrast, recalled stories are more concrete and contain more logical or hierarchical descriptions (ANALYTIC ).",
"Discussion.",
"Our interpretation of these findings is that the consolidated memory of the author's life experience permeates in a more holistic manner through the sentences in the recalled story.",
"Imagined stories are more fluent and contain more commonsense elaborations, which suggests that authors compose a story as a sequence, relying more on preceding sentences and commonsense knowledge to generate the story.",
"While our findings on linearity hold when using different language models trained on Wikipedia articles (Dai et al., 2019) or English web text (mostly news articles; Radford et al., 2019), a limitation of the findings is that GPT is trained on large corpus of fiction, which may boost linearity scores for imagined (vs. recalled) sentences.",
"Future work could explore the sensitivity of our results to changes in the language model's training domain or neural architecture.",
"We further investigate how our narrative and commonsense measures can be used to uncover the narrativization of recalled events (in recalled and retold stories).",
"These analyses aim to investigate the hypothesis that memories are narrativized 8 The cognitive processes LIWC category counts occurrences of words indicative of cognitive activity (e.g., think, because, know).",
"over time (Bartlett, 1932), and that distant autobiographical memories are supplemented with semantic or commonsense knowledge (Reyna and Brainerd, 1995; Roediger III et al., 1996; Christianson, 2014; Brigard, 2014).",
"First, we compare the effects of recency of the event described ( TIMESINCEEVENT : a continuous variable representing the log time since the event).",
"9 Then, we contrast recalled stories to their retold counterparts in pairwise comparisons.",
"Finally, we measure the effect of how frequently the experienced event is thought or talked about ( FREQUENCYOFRECALL : a continuous variable ranging from very rarely to very frequently).",
"10 As in 4, we Holm-correct for multiple comparisons.",
"Temporal distance.",
"First, we find that recalled and retold stories written about temporally distant events tend to contain more commonsense knowledge ( | | = 1 . 10 , p < 0 . 001 ).",
"We found no other significant associations with TIMESINCEEVENT .",
"On the other hand, the proposed measures uncover differences between the initially recalled and later retold stories that mirror the differences found between recalled and imagined stories (Ta-ble 2).",
"Specifically, retold stories flow significantly more linearly than their initial counterparts in a pairwise comparison (Cohen's | d | = 0 . 17 , p < 0 . 001 ; see Figure 3).",
"Our results also indicate that retold stories contain fewer realis events ( | | = 0 . 09 , p = 0 . 025 ), and suggest a potential increase in use of commonsense knowledge in the retold stories ( | | = 0 . 06 , p = 0 . 098 ).",
"Using lexicon-based measures, we find that retold stories are significantly higher in scores for cognitive processes ( | d | = 0 . 12 , p < 0 . 001 ) and positive tone ( | d | = 0 . 07 , p = 0 . 02 ).",
"Surprisingly, initially recalled stories contain more self references than retold stories (I-words; | d | = 0 . 10 , p < 0 . 001 ); higher levels of self reference were found in imagined stories (vs. recalled; Table 2).",
"Frequency of recall.",
"We find that the more an event is thought or talked about (i.e., higher FREQUENCYOFRECALL ), the more linearly its story flows ( l ; | | = 0 . 07 , p < 0 . 001 ), and the fewer realis events ( | | = 0 . 09 , p < 0 . 001 ) it contains.",
"9 We use the logarithm of the time elaspsed since the event, as subjects may perceive the passage of time logarithmically (Bruss and R uschendorf, 2009; Zauberman et al., 2009).",
"Furthermore, using lexicon-based measures, we find that stories with high FREQUENCYOFRECALL tend to contain more self references (I-words; Pearson's | r | = 0 . 07 , p < 0 . 001 ).",
"Conversely, stories that are less frequently recalled are more logical or hierarchical (LIWC's ANALYTIC ; Pearson's | r | = 0 . 09 , p < 0 . 001 ) and more concrete (Pearson's | r | = 0 . 05 , p = 0 . 03 ).",
"Discussion.",
"Our results suggest that the proposed language and commonsense methods can measure the effects of narrativization over time in recalled memories (Bartlett, 1932; Smorti and Fioretti, 2016).",
"On one hand, temporal distance of events is associated with stories containing more commonsense knowledge and having more linear flow.",
"On the other hand, stories about memories that are rarely thought about or talked about are more concrete and contain more realis events, compared to frequently recalled stories which flow more linearly.",
"This suggests that stories that become more narrativized, either by the passing of time or by being recalled repeatedly, become more similar in some ways to imagined stories.",
"To investigate the use of NLP tools for studying the cognitive traces of recollection versus imagination in stories, we collect and release HIPPOCORPUS , a dataset of imagined and recalled stories.",
"We introduce measures to characterize narrative flow and influence of semantic vs. episodic knowledge in stories.",
"We show that imagined stories have a more linear flow and contain more commonsense knowledge, whereas recalled stories are less connected and contain more specific concrete events.",
"Additionally, we show that our measures can uncover the effect in language of narrativization of memories over time.",
"We hope these findings bring attention to the feasibility of employing statistical natural language processing machinery as tools for exploring human cognition.",
"The authors would like to thank the anonymous reviewers, as well as Elizabeth Clark, Tal August, Lucy Lin, Anna Jafarpour, Diana Tamir, Justine Zhang, Saadia Gabriel, and other members of the Microsoft Research and UW teams for their helpful comments."
] | [
"objective",
"method",
"method",
"method",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"result",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"objective",
"objective",
"other"
] |
[
"Abstract Graph-based semantic parsing aims to represent textual meaning through directed graphs.",
"As one of the most promising general-purpose meaning representations, these structures and their parsing have gained a significant interest momentum during recent years, with several diverse formalisms being proposed.",
"Yet, owing to this very heterogeneity, most of the research effort has focused mainly on solutions specific to a given formalism.",
"In this work, instead, we reframe semantic parsing towards multiple formalisms as Multilingual Neural Machine Translation ( MNMT ), and propose SGL , a many-to-many seq2seq architecture trained with an MNMT objective.",
"Backed by several experiments, we show that this framework is indeed effective once the learning procedure is enhanced with large parallel corpora coming from Machine Translation: we report competitive performances on AMR and UCCA parsing, especially once paired with pre-trained architectures.",
"Furthermore, we find that models trained under this configuration scale remarkably well to tasks such as cross-lingual AMR parsing: SGL outperforms all its competitors by a large margin without even explicitly seeing non-English to AMR examples at training time and, once these examples are included as well, sets an unprecedented state of the art in this task.",
"We release our code and our models for research purposes at https: //github.com/SapienzaNLP/sgl .",
"Being able to associate natural language text with well-defined and machine-actionable meaning representations, i.e. the task of semantic parsing ( SP ), is one of the holy grails in Natural Language Processing ( NLP ) and Understanding (Nav-igli, 2018).",
"Considering how a breakthrough in this direction would empower NLP systems to ex-plictly make sense of natural language, the ever-growing interest semantic parsing has been receiving really comes as no surprise.",
"Graph-based formalisms such as Abstract Meaning Representation (Banarescu et al., 2013, AMR ), Elementary Dependency Structures (Oepen and Lnning, 2006, EDS ), Prague Tectogrammatical Graphs (Ha-jic et al., 2012, PTG ), Universal Conceptual Cognitive Annotation (Abend and Rappoport, 2013, UCCA ), inter alia , are emerging as the de facto standard for general-purpose meaning representations and have shown potential in Machine Translation (Song et al., 2019), Text Summarization (Hardy and Vlachos, 2018), Human-Robot Interaction (Bonial et al., 2020), and as evaluation metrics (Sulem et al., 2018; Xu et al., 2020b).",
"These formalisms propose encoding meaning through directed graphs, however, each of them builds upon different linguistic assumptions, aims to target different objectives and, at a more practical level, assigns different functions to nodes and edges.",
"For instance, while AMR uses nodes to encode concepts and edges to express the semantic relations between them, UCCA proposes using text tokens as terminal nodes and building graphs on top of them.",
"As a result of this heterogeneous landscape, often referred to as framework-specific balkanization (Oepen et al., 2020), graph-based semantic parsing has seen a proliferation of framework-specific solutions.",
"However, approaches capable of competitively scaling across formalisms represent a natural desideratum , and recent works have started to explore this direction, examining the usage of multi-task learning in different architectures (Her-shcovich et al., 2018; Oepen et al., 2019), or casting different formalisms under a unified framework where models can be trained to perform graph transduction (Zhang et al., 2019b).",
"Nevertheless, despite achieving promising results, research in this direction has been hindered by the general lack of training data that afflicts semantic parsing.",
"Indeed, due to the inherent complexity of this task, annotated corpora are still scarce and prohibitively expensive to expand.",
"In this work, we set ourselves to address these issues and, in particular, we propose Speak the Graph Languages ( SGL ), a many-to-many seq2seq architecture which we show to competitively scale across formalisms and across languages.",
"1 The key idea is to train a seq2seq model with a Multilingual Neural Machine Translation ( MNMT ) objective, where, given an input text and an identifier denoting the desired output formalism, a single shared model has to learn to translate towards the corresponding linearized graph.",
"We use AMR and UCCA as our cases in point to show the effectiveness of this framework.",
"In particular, we show that, once the learning procedure also considers large parallel corpora coming from Machine Translation ( MT ), this configuration becomes an effective approach for framework-independent parsing via a single model.",
"Even more interestingly, this model scales surprisingly well to cross-lingual parsing and is capable of navigating through translation paths like IT AMR , 2 which it has never seen during training.",
"The contributions of this work are therefore as follows: We reframe semantic parsing towards multiple formalisms and from multiple languages as multilingual machine translation; On AMR parsing, our framework achieves competitive performances, surpassing most of its current competitors once paired with a pre-trained Transformer; We outperform all current alternatives in cross-lingual AMR parsing without ever seeing non-English to AMR examples at training time and push the current state of the art even further once we include these examples; On UCCA parsing, we reach competitive results, outperforming a strong BERT-powered baseline (Hershcovich and Arviv, 2019).",
"Our work is mainly concerned with semantic parsing in UCCA and AMR , considering also the cross-1",
"cross-1 By across languages , we mean that the model is capable of performing cross-lingual semantic parsing as defined for AMR by Damonte and Cohen (2018).",
"Unless otherwise specified, we will follow this perspective throughout the paper.",
"2 IT stands for Italian.",
"Semantic Parsing Arguably among the formalisms that have drawn the most interest, AMR has seen the emergence of a rich yet dedicated literature, with recent approaches that can be roughly clustered into two groups.",
"On the one hand, several graph-based solutions have been proposed (Lyu and Titov, 2018; Zhang et al., 2019a,b; Zhou et al., 2020; Cai and Lam, 2020); among these solutions, Zhou et al. (2020) show the effectiveness of enhancing an aligner-free parser with latent syntactic information, whereas Cai and Lam (2020) present an iterative method to build and refine AMR graphs incrementally.",
"On the other hand, translation-based approaches, where seq2seq models are trained to translate from natural language text to linearized graphs, have been shown to reach competitive performances, despite the scarcity of training data (Konstas et al., 2017; van Noord and Bos, 2017; Ge et al., 2019).",
"Continuing this latter direction and arguably closest to our work, Xu et al. (2020a) and Bevilacqua et al. (2021) show that these models, once paired with adequate pre-training, can behave on par or better than dedicated and more sophisticated graph-based alternatives, surpassing the performances of Cai and Lam (2020).",
"In particular, similarly to our work, Xu et al. (2020a) leverage a multilingual framework inspired by Johnson et al. (2017) and explore the possibility of pre-training on a range of related tasks, including MT ; however, their focus is limited to showing the effectiveness of transfer learning from related tasks to English AMR parsing.",
"Conversely, here we show that the benefits of multilingual seq2seq frameworks are not limited to English TEXT -toAMR but, rather, that they enable astonishing performances on unseen translation paths such as IT AMR and competitive results on other frameworks, using UCCA as our case in point.",
"In this sense, we continue the recent cross-framework trend formally started by the shared task of Oepen et al. (2019), exploring the possibility of using translation-based approaches for framework-independent parsing, as opposed to the transition-based parsers proposed in that seminal work.",
"Our findings are in line with the recent results reported by Oepen et al. (2020) and, in particular, by Ozaki et al. (2020), where the authors cast semantic parsing in multiple formalisms as translation towards a novel Plain Graph Notation",
"( PGN ) they devise.",
"However, whereas they train different independent models for each framework, we explore the possibility of using a single multilingual model.",
"Cross-lingual AMR While most of the research effort in the AMR community has been focused on English only, the seminal work of Damonte and Cohen (2018) gave rise to an interesting new direction, i.e. exploring the extent to which AMR can act as an interlingua.",
"The authors introduced a new problem, cross-lingual AMR parsing, and defined it as the task of recovering, given a sentence in any language, the AMR graph corresponding to its English translation.",
"Using an adapted version of the transition-based parser originally proposed by Damonte et al. (2017) and training it on silver data generated through annotation projection, they examined whether AMR graphs could be recovered starting from non-English sentences.",
"Even though their models fell short when compared to MT alternatives, 3 their work showed promising results and suggested that, despite translation divergences, AMR could act effectively as an interlingua.",
"Annotation projection has been focal in subsequent work as well.",
"Blloshmi et al. (2020) propose an aligner-free cross-lingual parser, thus disposing of the need for word alignments in the annotation projection pipeline; their parser manages to outperform MT alternatives when both annotation projection and these baselines have access to comparable amounts of data.",
"Conversely, Sheth et al. (2021) leverage powerful contextualized word embeddings to improve the foreign-text-to-English-AMR alignments, surpassing all previous approaches and, most importantly, the yet-unbeaten MT baselines that have access to larger amounts of data.",
"3 The input sentence is first translated towards English and, then, an English parser is used.",
"We stand out from previous research and show that, as a matter of fact, annotation projection techniques are not needed to perform cross-lingual AMR parsing.",
"By jointly training on parallel corpora from MT and the EN SP data we have, we find that a multilingual model can navigate unseen translation paths such as IT AMR effectively, outperforming all current approaches by a significant margin; yet, annotation projection is naturally beneficial and, when its training data are taken into account as well, SGL pushes performances even further.",
"In this section, we describe SGL , our proposed approach to graph-based semantic parsing.",
"We first explain the graph linearizations we employ for AMR and UCCA , along with their delinearizations (3.1).",
"We then describe the seq2seq modelling approach we use (3.2) and, finally, we present our multilingual framework (3.3).",
"We now describe how we convert the considered meaning representations into translatable text sequences ( linearization ), along with their reverse process ( delinearization ).",
"For AMR parsing, as in van Noord and Bos (2017), we first simplify AMR graphs by removing variables and wiki links.",
"We then convert these stripped AMR graphs into trees by duplicating co-referring nodes.",
"At this point, in order to obtain the final linearized version of a given AMR , we concatenate all the lines of its PENMAN notation (Goodman, 2020) together, replacing newlines and multiple spaces with single spaces (Figure 1a and 1b).",
"Conversely, delinearization is performed by assigning a variable to each predicted concept, performing Wikification, 4 restoring co-referring nodes and, where possible, repairing any syntactically malformed subgraph.",
"5 For both phases, we use the scripts released by van Noord and Bos (2017).",
"6 For UCCA parsing, we employ a Depth-First Search ( DFS ) approach: starting from the root, we navigate the graph, using square brackets to delimit subgraph boundaries and special variables to denote terminal and non-terminal nodes; remote edges are denoted by a special modifier appended to their labels, while re-entrancies, that is, edges whose target is a node already seen, are handled by simply entering the respective variable (Figure 1c and 1d).",
"Similarly to AMR , delinearization is performed by back-parsing this sequence into a UCCA graph, repairing malformed subgraphs when possible; 7 additionally, as terminal nodes are anchored in UCCA , we remove those whose anchoring is impossible.",
"The linearization and delinearization scripts for this schema are released along with the rest of our code.",
"We employ neural seq2seq models based upon the Transformer architecture (Vaswani et al., 2017).",
"This architecture is essentially composed of two building blocks, namely, a Transformer encoder and a Transformer decoder .",
"The encoder is a stack of N identical layers, each made up of two sublayers: the first is a multi-head self-attention mechanism, while the second is a position-wise fully connected feed-forward network.",
"The decoder follows a similar architecture, presenting, however, an additional sub-layer that performs multi-head attention over the output of the encoder.",
"Within this work, we use two different kinds of Transformer architecture, Cross and mBART (Liu et al., 2020).",
"Cross is a randomly initialized Transformer closely following the architecture depicted by Vaswani et al. (2017), except for a significant difference: we leverage a factorized embedding parameterization (Lan et al., 2020), that is, we decompose the large vocabulary embedding matrix into two smaller matrices.",
"While the first of these represents the actual embedding matrix and projects one-hot vectors into an embedding space 4 We use DBpedia Spotlight API (Daiber et al., 2013).",
"5 Although trained to generate syntactically correct graphs, the outputs seq2seq models produce may contain syntactic errors, such as brackets that do not match.",
"6 https://github.com/RikVN/AMR 7 Should repairing fail, the faulty subgraph is discarded altogether.",
"whose dimension is lower than the Transformer hidden size, the second one takes care of projecting these intermediate representations towards the actual Transformer hidden space.",
"This technique significantly reduces the number of parameters and, within the context of our experiments, did not show any significant performance penalty.",
"On the other hand, mBART is a multilingual Transformer pre-trained in many languages over large-scale monolingual corpora.",
"As AMR and UCCA are naturally not included among the supported languages in the vocabulary, we apply an architectural change to mBART and increase its vocabulary with two new language ids.",
"More specifi-cally, we augment its embedding matrix by adding two additional vectors, which we randomly initialize as in Tang et al. (2021).",
"In order to empower our models to support translation from and towards multiple languages, we employ a data-driven approach: we replace the start token of the decoder with a special tag specifying the language the encoder representations should be unrolled towards.",
"Figure 2 shows an example of this schema.",
"It is worth pointing out that, while for Cross we do not feed the source language to the encoder, when using the mBART model we follow its input format and do provide it.",
"Once data have been tagged according to this schema, we train a many-to-many translation model on both the semantic parsing and English-centric parallel corpora.",
"8 Considering that our focus is on semantic parsing, we perform oversam-pling on the AMR and UCCA datasets.",
"Furthermore, when considering the parallel corpora from MT , we flip the training direction with probability 0 .",
"5 , hence allowing our model to see at training time both the X EN and EN X training directions; we argue that this stochastic flip benefits our models in multiple ways: As EN X shares the source language with both EN AMR and EN UCCA , this results in positive transfer; As AMR , UCCA and EN are significantly related, X EN also results in positive transfer (similar target language); 8 Henceforth, without loss of generality, we will use English as the source language of the MT data and denote by X all the target-side languages.",
"Finally, X EN allows our model to navigate unseen translation paths (i.e. zero-shot ) such as IT AMR and thus tackle tasks like cross-lingual AMR parsing.",
"We assess the effectiveness of our proposed approach by evaluating its performance on all translation paths where the target language is a graph formalism, the only exception being X UCCA , with X any language but English.",
"This choice is motivated by the fact that, differently from AMR where cross-lingual AMR aims to produce English-based meaning representations (Damonte and Cohen, 2018), UCCA builds graphs on top of its tokens which are, consequently, inherently in the same language as the input text (Hershcovich et al., 2019); we leave exploring this direction to future work.",
"We choose to use both Cross , a randomly initialized Transformer, and mBART , a multilingual pre-trained Transformer, to better grasp the effects of this joint multilingual framework in different regimes.",
"In particular, we explore the following configurations: models trained only on a single semantic parsing task ( AMR or UCCA parsing) and without considering any parallel data, denoted by Cross st and mBART st ; models trained on both semantic parsing tasks and the MT data, denoted by Cross mt and mBART mt .",
"Furthermore, so as to explore whether the training schedules we use result in underfitting for AMR and UCCA , we also consider Cross ftmt and mBART ftmt , that is, Cross mt and mBART mt fine-tuned with a training schedule biased towards the semantic parsing formalism that is being considered.",
"9 4.2 Datasets and Preprocessing AMR For AMR parsing, we use AMR-2.0 (LDC2017T10) and its recently released expansion, AMR-3.0 (LDC2020T02), amounting, respectively, to 39 260 and 59 255 manually-created sentence-graph pairs.",
"Cross-Lingual AMR We use Abstract Meaning Representation 2.0 Four Translations (Damonte and Cohen, 2020) to investigate the performance of SGL on cross-lingual AMR parsing.",
"This corpus contains translations of the sentences in the test set of AMR-2.0 in Chinese ( ZH ), German ( DE ), Italian ( IT ) and Spanish ( ES ).",
"UCCA We replicate the setting of the CoNLL 2019 Shared Task (Oepen et al., 2019).",
"We train our models using the freely available 10 UCCA portion of the training data; this corpus amounts to 6 572 sentence-graph pairs, drawn from the English Web Treebank (2012T13) and English Wikipedia articles on celebrities.",
"As no official development set was included in the data release, following Hershcovich and Arviv (2019), we reserve 500 instances and use them as the validation set.",
"To the best of our knowledge, the full evaluation data have not been released yet and, therefore, we compare with state-of-the-art alternatives and report results only on The Little Prince , a released subset consisting of 100 manually-tagged sentences sampled from the homonymous novel.",
"Parallel Data We use English-centric parallel corpora in four languages, namely, Chinese, German, Italian and Spanish; we employ Mul-tiUN (Tiedemann, 2012) for Chinese and Spanish, ParaCrawl (Espl et al., 2019) for German, and Europarl (Tiedemann, 2012) for Italian.",
"We perform a mild filtering over all the available parallel sentences and then take the first 5 M out of these.",
"11 Preprocessing We do not perform any preprocessing or tokenization, except for the graph linearizations explained in 3.1 and Chinese simpli-fication.",
"12 Instead, we directly apply subword to-kenization with a Unigram Model (Kudo, 2018).",
"When working with Cross in a single-task setting on AMR or UCCA , we follow Ge et al. (2019) and use a vocabulary size of 20 k subwords; instead, when working in the multilingual setting, we increase this value to 50 k so as to better accommodate the increased amount of languages.",
"Conversely, when using mBART , we always use the original vocabulary consisting of 250 k subwords.",
"We evaluate AMR and cross-lingual AMR parsing by using the Smatch score 13 (Cai and Knight, 2013), a metric that computes the overlap between two graphs.",
"Furthermore, in order to have a better picture of the systems' performances, we also re-11 See Appendix C for further details.",
"12 We use the hanziconv library ( https://github. com/berniey/hanziconv ).",
"port the fine-grained scores as computed by the evaluation toolkit 14 of Damonte et al. (2017).",
"For UCCA parsing, we employ the official evaluation metric 15 of the shared task, conceptually similar to the Smatch score.",
"We now report the results SGL achieves focusing on the following translation paths:",
"i) EN AMR (5.1);",
"ii) X AMR , with X any language among Chinese, German, Italian and Spanish (5.2);",
"iii) EN UCCA (5.3).",
"We report the Smatch and fine-grained scores that SGL and its current state-of-the-art alternatives attain on AMR-2.0 in Table 1 (top).",
"Among the competing systems considered, for Bevilacqua et al. (2021) we report their BART -powered baseline (SPRING bart ) and their best performing model (SPRING).",
"As a first result, we want to highlight the significant boost that jointly training within our proposed framework on MT data provides; Cross mt outperforms Cross st by more than 7 points and reaches competitive performances when compared with current state-of-the-art approaches.",
"Furthermore, the gap of 1 .",
"4 points between Cross mt and Cross ftmt shows that the training schedule we use for Cross 14 https://github.com/mdtux89/ amr-evaluation 15 https://github.com/cfmrp/mtool does indeed result in underfitting for AMR and that further training is beneficial; this fine-tuned alternative achieves 79 .",
"5 Smatch score, less than one point behind Xu et al. (2020a).",
"Considering the similarity between the two approaches, this difference is likely caused by the increased number of tasks our model is asked to handle.",
"Once we replace Cross with mBART , all performances rise significantly.",
"In particular, even mBART st , a single-task variant with no additional data, outperforms all its alternatives except for SPRING and SPRING bart (Bevilac-qua et al., 2021), highlighting the potential of fully pre-trained Transformer language models for translation-based approaches.",
"mBART mt and mBART ftmt push performances further up, showing that the MT data are beneficial even in this pre-trained setting and that the multi-task training set, which enables a single shared model to scale across formalisms and languages, is not detrimental to English AMR parsing.",
"However, arguably more interesting is the comparison between the performances of mBART models and SPRING, which, in contrast, builds upon the English-only BART (Lewis et al., 2020).",
"In particular, as SPRING bart outperforms even mBART ftmt , this finding suggests that, as expected, BART is more suitable than mBART when dealing with English AMR .",
"However, as we show in 5.2, our choice is beneficial for cross-lingual AMR parsing and results in an interesting trade-off.",
"Finally, we also evaluate SGL on AMR-3.0 and report the results of Cross ftmt , mBART st and mBART ftmt when trained on this dataset (Figure 1 bottom).",
"Overall, we witness a similar trend compared to AMR-2.0.",
"We now show the performances of SGL on cross-lingual AMR parsing in terms of Smatch score over Chinese ( ZH ), German ( DE ), Italian ( IT ) and Spanish ( ES ).",
"For comparison, we report the results of the systems proposed by Damonte and Cohen (2018, AMREAGER ), Blloshmi et al. (2020, XL-AMR ) and Sheth et al. (2021); along with their best systems, we also show the strongest MT baseline reported in Damonte and Cohen (2018, AMREAGERMT ) and the zero-shot configuration explored in Blloshmi et al. (2020, XL-AMR ).",
"falling short only when compared to the recent work of Sheth et al. (2021); in particular, it surpasses the strong AMREAGERMT baseline.",
"The most interesting aspect of this result is that Cross ftmt attains these performances without ever seeing at training time any X AMR translation path; this is in marked contrast with all previous literature and with the systems we report in Table 2.",
"This finding clearly highlights the effectiveness of transfer learning and, by extension, of our proposed framework in this setting.",
"Secondly, the performances mBART st achieve are astounding under multiple perspectives.",
"First, to the best of our knowledge, it is the first reported result of AMR systems achieving competitive performances on cross-lingual AMR parsing in a fully zero-shot configuration: mBART st is fine-tuned solely on EN AMR and then applied directly to X AMR translation; especially when compared to XL-AMR , the only similar approach we are aware of, the gap is significant.",
"Second, among the languages we consider, the case of Chinese is especially interesting as it appears to require constrained decoding in order to work properly: in particular, we restrict the model to generate only subwords whose characters belong to the English alphabet.",
"16 If we were to perform ZH AMR parsing with no additional decoding machinery, as for the other languages, performances would be significantly lower, with mBART st attaining only 31 .",
"9 .",
"This performance drop is caused by 16 The reported results on Chinese of all mBART models have been computed using this form of decoding.",
"the model leaving some nodes of the graph untranslated, i.e. named entities left written in Chinese ( rather than Obama ), which disrupts the auto-regressive nature of the decoding procedure and, besides, eventually results in a penalized Smatch score.",
"Finally, despite the larger amount of pre-training mBART has been exposed to, its bigger capacity and better Smatch score on English, mBART st still falls short when compared to Cross ftmt , highlighting the benefits of seeing related translation directions at training time.",
"mBART mt pushes the bar further up, with performances on German, Spanish and Italian that are now only roughly 10 points behind their English counterparts.",
"As mBART mt significantly outperforms mBART st , this result shows that, despite the massive pretraining, parallel data are still beneficial for cross-lingual AMR .",
"Moreover, differently from English AMR , mBART ftmt does not yield an improvement and, in fact, performances slightly drop on average.",
"While the scores mBART mt attains are already unprecedented, it is natural to wonder whether annotation projection (AP) might yield a further beneficial effect.",
"To this end, similarly to Blloshmi et al. (2020), we translate the input sentences of AMR-2.0 into the four languages under consideration 17 and build a training set for each language by pairing the translated sentence with the original AMR graph.",
"We further fine-tune mBART ftmt , including also these new datasets among the training data.",
"This model, which we denote by mBART ftmt + AP, surpasses further mBART mt , clearly underlining the beneficial effect of this technique.",
"Finally, following Sheth et al. (2021), we also report the results of SGL when evaluated on the machine-translated test set; 18 similarly to their findings, we observe that, as the mismatch between the training and test set is reduced, our parser performs better in this setting than on the human-translated one.",
"We report in Table 3 the performance of SGL on UCCA parsing.",
"We compare our approach with the original multi-task baseline (Oepen et al., 2019) and 3 transition-based parsers that participated; in 17 We use the MarianMT models (Tiedemann and Thottin-gal, 2020) available in the HuggingFace Transformers library (Wolf et al., 2020).",
"18 We use the same MT models we utilized for annotation projection.",
"particular, we report the score of Che et al. (2019), the system that ranked first in both all-framework and UCCA parsing.",
"First of all, we note the result of Cross st ; while its performance is far below the score Che et al. (2019) achieve, it still outperforms the original proposed baseline by more than 10 points.",
"Furthermore, to the best of our knowledge, apart from the recent works proposed in the latest shared task of Oepen et al. (2020), this is the first reported result of translation-based approaches on UCCA parsing.",
"Once plugged into our multilingual framework, UCCA benefits from transfer learning to an even greater extent than AMR parsing, likely owing to the smaller amount of training data: Cross mt and especially Cross ftmt significantly reduce the gap between SGL and Che et al. (2019), with Cross ftmt outperforming the multi-task transition-based approach of Hershcovich and Arviv (2019).",
"The usage of mBART pushes up the system's performance further, with mBART st achieving 77 .",
"0 and mBART mt 79 .",
"9 ; differently from AMR , mBART ftmt suffers from overfitting and its performance is actually lower than that of mBART mt .",
"Even though these scores are lower than those of Che et al. (2019), we argue that such results are still incredibly promising as they demonstrate the effectiveness of SGL in tackling cross-framework semantic parsing.",
"Indeed, these results show that multilingual translation-based approaches allow for a single model to jointly accommodate different formalisms, each potentially linearized according to a different linearization scheme.",
"Furthermore, we believe there is a significant margin for improvement on both the linearization used and the model; for instance, we did not consider node ids such as <root_0> as special tokens, but instead had the unigram tokenizer handle them as if they were normal AMR UCCA Model EN DE ES IT ZH EN Cross st 70 .",
"Finally, we wish to point out that direct comparability between our system and those reported is hindered by the fact that our training setting is significantly different from theirs; in particular, we limit ourselves to two frameworks only and leverage resources (the parallel corpora from MT ) whose usage was forbidden to the shared task participants.",
"19 Nevertheless, we believe that their results are needed here to better contextualize the performances SGL obtains.",
"Although the performances of Cross mt are remarkable, mBART st achieves competitive results on cross-lingual parsing and fares even better on English parsing.",
"While mBART st admittedly features a massive amount of pre-training, this pre-training is over monolingual corpora and, as such, the model has never seen any parallel data.",
"We therefore wonder to what extent the parallel nature of the additional MT data we use is crucial for Cross mt .",
"To answer this question, we treat our MT corpora as monolingual data by sampling, for each instance, either the source or target side and converting the translation task into a denoising one: given an instance EN IT , we sample either EN or IT with equal probability, denoting the result by Z , and convert the instance into g ( Z ) Z , where g is a noising function that corrupts the input text.",
"We follow Lewis et al. (2020) and choose a noising function that masks 35% of the words by random sampling a span length from a Poisson distribution ( = 3 . 5 ).",
"Applying this noisification scheme to the MT data, we train a model identical to Cross mt and denote it by Cross Nmt .",
"As shown in Table 4, in this data regime, the parallel nature is crucial both for English and, especially, for cross-lingual parsing.",
"While Cross Nmt does yield a significant boost over Cross st , when 19 Allowed resources are specified at: http://svn.",
"compared instead to Cross mt , it is 4 points behind on UCCA parsing and only half way on AMR parsing.",
"The difference is even more marked in the cross-lingual setting, where Cross Nmt simply does not work.",
"In this work, we presented SGL , a novel framing of semantic parsing towards multiple formalisms as Multilingual Neural Machine Translation.",
"That is to say, given a sentence and the desired output formalism, a many-to-many neural model has to learn to translate from the input sentence to the corresponding linearized graph.",
"Within this framework, we show that we can address the paucity of annotated data that afflicts semantic parsing effectively by performing the learning procedure jointly on large parallel corpora coming from MT , and leveraging the power of pre-trained Transformer language models.",
"Using AMR and UCCA as our cases in point, we report competitive performances on their parsing, especially once pre-trained models enter the picture.",
"Furthermore, we find that the benefit MT data provide goes beyond merely improving English-centric parsing, yielding astonishing performances on cross-lingual AMR parsing as well, and allowing SGL to outperform all existing approaches by a large margin.",
"Most interestingly, differently from all previous literature, this result is attained without ever explicitly seeing at training time the translation paths the model is tested upon.",
"Once we use annotation projection and include these data as well, performances rise even further, attaining unprecedented results.",
"As future work, thanks to the nimbleness with which we can add new languages , we plan to assess the scalability of this framework as more formalisms are taken into account.",
"This work was partially supported by the MIUR under the grant Dipartimenti di eccellenza 2018-2022\" of the Department of Computer Science of Sapienza University."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"result",
"abstain",
"other",
"method",
"method",
"method",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"other",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"other",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"method",
"result",
"abstain",
"result",
"objective",
"other"
] |
[
"Multimodal summarization becomes increasingly significant as it is the basis for question answering, Web search, and many other downstream tasks.",
"However, its learning materials have been lacking a holistic organization by integrating resources from various modalities, thereby lagging behind the research progress of this field.",
"In this study, we present a full-scale multimodal dataset comprehensively gathering documents, summaries, images, captions, videos, audios, transcripts, and titles in English from CNN and Daily Mail.",
"To our best knowledge, this is the first collection that spans all modalities and nearly comprises all types of materials available in this community.",
"In addition, we devise a baseline model based on the novel dataset, which employs a newly proposed Jump-Attention mechanism based on transcripts.",
"The experimental results validate the important assistance role of the external information for multimodal summarization.",
"Multimodal summarization refines salient information from one or more modalities, including text, image, audio, and video ones (Evangelopou-los et al., 2013; Li et al., 2017).",
"Given the rapid dissemination of multimedia data over the Internet, multimodal summarization has been widely explored in recent years.",
"Meanwhile, some multimodal datasets (Li et al., 2017; Zhu et al., 2018; Sanabria et al., 2018; Li et al., 2020a) have been introduced to advance the development of this research field.",
"However, a majority of them are restricted in scale and too oriented, such as being less than one hundred examples or merely containing Chinese texts.",
"Moreover, the materials from different modalities are rarely collected across the board, especially videos and their accompanying materials that possess abundant external information for multimodal comprehension and fusion.",
"In this work, we introduce a full-scale Multimodal Article and Video Summarization (MM-AVS) dataset 1 with documents, summaries, images, captions, videos, audios, transcripts, and titles in English.",
"The significance of MM-AVS for the multimodal summarization community includes but not limited to:",
"1) MM-AVS is a large-scale multimodal collection compared with existing video containing dataset and its generation codes 1 has been released, which can be readily extended for existing and future multimodal summarization approaches;",
"2) MM-AVS is collected from CNN 2 and Daily Mail 3 , which makes it available to more researchers due to English-based and comparable with the popular text-based CNN/Daily Mail corpus; and",
"3) MM-AVS firstly collects nearly all types of materials from all modalities, inclusively with videos, audios, transcripts, images, captions, and titles that are rarely assembled.",
"In addition, we implement a general multimodal summarization baseline based on transcripts for multimodal summarization on MM-AVS.",
"This method employs a Jump-Attention mechanism to align features between text and video.",
"Further, we use the multi-task learning to simultaneously optimize document and video summarizations.",
"Evaluations on MM-AVS illustrate the benefits of external information such as videos and transcripts for multimodal summarization without alignment.",
"Multi-modal summarization generates a condensed multimedia summary from multi-modal materials, such as texts, images, and videos.",
"For instance, UzZaman et al. (2011) introduced an idea of illustrating complex sentences as multimodal summaries by combining pictures, structures, and sim-1 https://github.com/xiyan524/MM-AVS .",
"Libovick`y et al. (2018) and Palaskar et al. (2019) studied abstractive text summarization for open-domain videos.",
"Li et al. (2017) constructed MMS dataset and developed an extractive multi-modal summarization method that automatically generated a textual summary based on a topic-related set of documents, images, audios, and videos.",
"Zhu et al. (2018, 2020) combined image selection and output to alleviate the modality bias based on the MSMO dataset.",
"Chen and Zhuge (2018) extended Daily Mail with images and captions to E-Daily Mail dataset and employed a hierarchical encoder-decoder model to align sentences and images.",
"Recently, an aspect-aware model and a large-scale Chinese e-commerce product summarization dataset EC-product were introduced to incorporate visual information for e-commerce product summaries (Li et al., 2020a).",
"The above mentioned datasets are rarely constructed comprehensively, which ignore the abundant visual information underlying in videos.",
"The only video-containing work is restricted in scale, which hampers its use for deep-learning based methods.",
"In this study, we will build a full-scale multimodal dataset to address these issues.",
"To facilitate a straightforward comparison for the multimodal summarization approaches with the text-based ones, MM-AVS extends CNN/DM collections to multimodalities.",
"Each example of MM-AVS contains a document accompanying with multi-sentence summary, title, images, captions, videos, and their corresponding audios and transcripts.",
"Table 1 compares MM-AVS with the representative multimodal summarization benchmarks.",
"MM-AVS contains documents and abstractive summaries as most of the benchmarks including, while it ex-Daily Mail CNN Avg.",
"tends visual information that most existing benchmarks ignore (such as MSMO(Zhu et al., 2018), MMSS(Li et al., 2018), E-DailyMail(Chen and Zhuge, 2018), and EC-product(Li et al., 2020b)).",
"MMS(Li et al., 2017) and How2(Sanabria et al., 2018) also take videos into account; however, MMS only contains 50 examples that are too limited for deep learning and How2 excludes documents, which are the most critical materials for summarization.",
"MM-AVS also keeps image captions for deep descriptions of images as well as document titles for the topic extraction.",
"Further, MM-AVS contains extractive labels for training convenience.",
"In the manner of providing abundant multimodal information, MM-AVS is applicable for existing and future multimodal research in different learning tasks.",
"The concrete statistics of MM-AVS are shown in Table 2 4 , incorporating textual and visual modules: Textual module.",
"Following (Nallapati et al., 2016), we have crawled all the summary bullets of each story in the original order to obtain a multi-sentence reference, where each bullet is treated as a sentence.",
"Given that the reference is an abstractive summary written by humans, we construct the label of each sentence as (Nallapati et al., 2017) does.",
"4 The data scale is determined by its accompanied videos, considering this modality is more space-consuming.",
"The data acquirability code in the project github mentioned above can be used for extension.",
"Sentences in the document are selected to maximize the ROUGE (Lin, 2004) score with respect to the gold summary by a greedy approach.",
"As for the document and title, we keep their original formats as shown in the websites.",
"Visual module.",
"To enrich visual information for multimodal summarization, we collect images and videos for each example.",
"Image caption is preserved to assist further explorations such as feature extraction and alignment to documents.",
"Given long videos, we separate the audios and extract the transcripts 5 to alleviate the pre-process pressure for large-scale or online learning.",
"We utilize the hierarchical bi-directional long short term memory (BiLSTM) (Nallapati et al., 2017) based on word and sentence levels to read tokens and induce a representation for each sentence denoted as s i .",
"Each sentence in a transcript is denoted as t j .",
"In terms of videos, we employ ResNet (He et al., 2016) for feature extraction and BiLSTM to model the sequential pattern in video frames.",
"Each image is represented as m k .",
"Given that the transcript extracted from a video shares the same modality with a document and accurately aligns with a video, we take it as a bridge to deepen the relationship between two modalities.",
"We apply the jump attention based on transcripts to assist modality alignment, which focuses on transcripts to video images and then on documents to 5 We use IBM Watson Speech for the text service https://www.ibm.com/watson/services/speech-to-text/.",
"transcript attention context.",
"The video-aware context cd 2 v i is denoted as cd 2 v i = NT (cid:88) j =1 NM (cid:88) k =1 b j i d kj m k , (1) where NT and NM are the lengths of transcripts and image frames.",
"b ji and d kj are the attention weights and can be calculated as follows (taking d kj for illustration): d kj = ( VT ( q j (cid:12) r k + q j + r k )) , (2) where V is the training parameter, q j and r k are the feature mappings of each modality that are calculated as q j = tanh ( W m m k + b m ) and r k = tanh ( W t t j + b t ) .",
"The jump attention can be reversed to obtain an article context vector for video summarization.",
"Given that modalities may be not accurately aligned, we employ late+ fusion by fusing unimodal decisions.",
"Inspired by (Liu et al., 2018), we induce noise filters to eliminate noises as F ( W s f ( s i ) , W c g ( cd 2 v i )) , where the filters W s and W c are calculated as follows: W s = [1 g ( cd 2 v i )] , W c = [1 f ( s i )] , (3) where is a smoothed coefficient for penalty intensity, and f ( ) , g ( ) , and F ( ) are feedforward networks.",
"We employ the multi-task training to enhance summarization.",
"The loss function is the weight mix of R-1 R-2 R-L text-only 39.11 16.42 28.56 +video frames 40.86 17.48 30.23 +transcripts 41.26 17.95 30.98 Table 3: Summarizations based on the materials of documents, videos, and transcripts.",
"L = ts L ts + vs ( R div + R rep ) , L ts = 1 NSNS (cid:88) n =1 [ y n log y n + (1 y n ) log",
"where L ts is the training loss for extractive summarization, y n and y n represent the true and predicted labels, and ts and vs are balance parameters.",
"Following (Zhou et al., 2018), we use unsupervised learning by reinforcement learning methods for video summarization whose loss can be separated into the diversity reward R div (measuring frames dissimilarity) and the representativeness reward R rep (measuring similarity between summary and video) as follows: R div = 1 |M| ( |M| 1) (cid:88) j M (cid:88) j (cid:48) M , j (cid:48) (cid:54) = j d (cid:0) m j , m j (cid:48) (cid:1) , R rep = exp 1 NMNM (cid:88) j =1 min j (cid:48) M (cid:13)(cid:13) m j m j (cid:48) (cid:13)(cid:13) 2 , (5) where M is the set of the selected video frames and d ( ) is the dissimilarity function.",
"We conduct experiments on the MM-AVS dataset and evaluate the performance by ROUGE (Lin, 2004).",
"R-1, R-2, and R-L respectively represent ROUGE-1, ROUGE-2, and ROUGE-L F1-scores, which are widely used to calculate the n-grams overlapping between decoded summaries and references.",
"Videos, audios, or transcripts are less concerned than documents and images, as revealed in Table 1.",
"Accordingly, the multimodal corpus assembling all of them has been absent so far, till MM-AVS is built in this study.",
"To verify the importance of these materials for multimodal summarization, we test a text-only baseline and its two extensions.",
"As for the baseline, we construct a hierarchical framework that concentrates on word and sentence levels with a feedforward classification layer.",
"Its two extensions respectively take videos and transcripts for additional considerations.",
"As shown in Table 3, both videos and transcripts can contribute to improving multimodal summa-rizaions by fusing documents.",
"This validates that the external information complementary for texts can facilitate capturing the core ideas of documents and inducing high-quality summaries.",
"To further investigate the nature of transcripts, we compare them with documents and references.",
"As shown in Table 4, the video transcripts in MM-AVS are distinct from documents with low overlaps, indicating that they are not repeating documents but provide useful assistant information.",
"While Table 4 also illustrates that the transcripts are lowly correlated with references, suggesting that transcripts can assist summary generation but are not enough for the final excellent summaries.",
"The document , video , and document with video summarization results on 200 groups of MM-AVS examples are scored by five computer science graduates in terms of their informativeness (Inform) and satisfaction (Satis).",
"Each summary is scored from 1 to 5, where a higher score denotes more informative or satisfied, and we record the average scores in Table 5.",
"It shows that the summaries induced via documents and videos are more close to human comprehensions, which is in accord with the observations in Section 5.1, verifying the importance of external information such as videos for excellent summaries.",
"In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing , pages 10921102.",
"Jindrich Libovick`y, Shruti Palaskar, Spandana Gella, and Florian Metze.",
"2018.",
"Multimodal abstractive summarization of opendomain videos.",
"In Proceedings of the Workshop on Visually Grounded Interaction and Language (ViGIL).",
"NIPS.",
"Chin-Yew Lin.",
"2004.",
"Rouge: A package for automatic evaluation of summaries.",
"Text Summarization Branches Out , 8.",
"Kuan Liu, Yanen Li, Ning Xu, and Prem Natarajan.",
"2018.",
"Learn to combine modalities in multimodal deep learning.",
"arXiv preprint arXiv:1805.11730 .",
"Naushad UzZaman, Jeffrey P. Bigham, and James F. Allen.",
"2011.",
"Multimodal summarization of complex sentences.",
"In ACM International Conference on Intelligent User Interfaces , pages 4352.",
"In this work, we contribute a full-scale dataset for multimodal summarization, which extensively assembles documents, summaries, images, captions, videos, audios, transcripts, and titles.",
"A novel multimodal summarization framework is proposed based on this dataset to be taken as a baseline for the future research in this community.",
"We sincerely thank Wei Liu for his constructive collaboration during development of this paper and Nayu Liu for the helpful discussions."
] | [
"abstain",
"abstain",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other"
] |
[
"The Minecraft Collaborative Building Task is a two-player game in which an Architect A instructs a Builder B to construct a target structure out of 3D blocks.",
"We consider the task of predicting B 's action sequences (block placements and removals) in a given game context, and show that capturing B 's past actions as well as B 's perspective leads to a significant improvement in performance on this challenging language understanding problem.",
"There is a long-standing interest in building interactive agents that can communicate with humans about and operate within the physical world (e.g. Winograd (1971)).",
"The goal for agents in this scenario is to not only be able to engage in rich natural language discourse with their human conversation partners, but also to ground that discourse to physical objects, and execute instructions in the real world.",
"Traditional dialogue scenarios are either completely ungrounded (Ritter et al., 2010; Schrading et al., 2015), focus on slot-value filling tasks (Kim et al., 2016b,a; Budzianowski et al., 2018) which instead require grounding to entities in a knowledge base, or operate within static environments, such as images (Das et al., 2017) or videos (Pasunuru and Bansal, 2018).",
"Relevant efforts in robotics have largely focused on single-shot instruction following, and are mostly constrained to simple language (Roy and Reiter, 2005; Tellex et al., 2011) with limited resources (Thomason et al., 2015; Misra et al., 2016; Chai et al., 2018).",
"The recently introduced Minecraft Collaborative Building Task and the corresponding Minecraft Dialogue Corpus (Narayan-Chen et al., 2019) is one attempt to bridge this gap within the simulated game world of Minecraft.",
"In this task, two players, an Architect ( A ) instructs a Builder ( B ) to construct a target structure out of multi-colored building blocks.",
"The corpus consists of 509 game logs between humans that perform this task.",
"Narayan-Chen et al. (2019) focus on generating Architect utterances.",
"In this paper, we explore models for building an automated Builder agent.",
"1 We focus on the subtask of predicting the Builder's block placements, and leave the back-and-forth dialogue aspect of the overall task required of a fully interactive Builder agent to future work.",
"We define the Builder Action Prediction (BAP) task in Section 2, describe our models in Section 3, an approach to augment the training data in Section 4, and our experiments in Section 5. We analyze results and highlight challenges of the BAP task in Section 6. 2 Dataset and Task 2.1 The Minecraft Dialogue Corpus The Minecraft Dialogue Corpus (Narayan-Chen et al., 2019) consists of 509 human-human dialogues and game logs for the Minecraft Collaborative Building Task, a two-player game in a simulated Blocks World environment between an Architect ( A ) and a Builder ( B ).",
"A is given a target structure ( Target ) and has to instruct B via a text chat interface to build a copy of Target on a given build region.",
"A and B communicate back and forth via chat throughout the game (e.g. to resolve confusions or to correct B 's mistakes), but only B can move blocks, while A observes B operating in the world.",
"B is given access to an inventory of 120 blocks of six given colors that it can place and remove.",
"The resulting dialogues consist mainly of A providing instructions, often involving multiple actions to be taken, and grounded in the Builder's perspective, while B executes those instructions and resolves 1 For models and code see http://juliahmr.cs.",
"any confusion through further dialogue.",
"The task is complete when the structure built by B ( Built ) matches Target (allowing for translations within the horizontal plane and rotations about the vertical axis) and lies completely within the boundaries of the predefined build region.",
"Games in this corpus are based on 150 distinct target structures, split into disjoint test, training, and development sets such that training targets do not appear during test or development.",
"Game logs record all utterances and B 's actions (placements and removals), as well as the state of the world (i.e. the (x,y,z)-coordinates and colors of all blocks in the build region), and B 's (x,y,z) position, vertical rotation (pitch) and horizontal orientation (yaw) at the points in time when an utterance was recorded or an action performed.",
"Since there are six block colors to be placed, we distinguish seven possible types of actions A { BLUE , GREEN , ..., YELLOW , REMOVE } .",
"B actions are 4-tuples (cid:104) A, x, y, z (cid:105) consisting of an action type and cell coordinates.",
"A block placement is feasible as long as an adjacent grid location is occupied, while REMOVE is feasible as long as that location is currently occupied by a block.",
"These actions do not include B 's movement.",
"B can assume any (continuous) 3D position and orientation, and the dataset records B 's position and orientation for each individual action.",
"But since there are many positions and orientations from which blocks in a cell can be placed, B 's movement is secondary to the main task of constructing the target configuration.",
"Narayan-Chen et al. (2019) focused on creating models that can generate A utterances, whereas we aim to develop models that can perform B 's role.",
"Although back-and-forth dialogue between the two players is a clear hallmark of this task, we leave the question of how to develop B agents that can decide when to speak and what to contribute to the conversation (either by way of chit-chat, verifica-tions or clarification questions to A ) to future work, and focus here on the subtask of predicting correct sequences of block placements and removals.",
"Executing A instructions is B 's primary role, and a crucial component to overall task completion.",
"of performing this task.",
"A can move around freely, but remains invisible to B and views the structure from behind B when giving instructions.",
"As a result, A instructions frequently include spatial relations, both between pairs of blocks or substructures ( put ... on top of.., ), and relative to B 's current position and perspective ( left , right ).",
"A also often uses higher-level descriptions involving complex shapes (e.g. staircase , v ).",
"Due to the asynchronous nature of the dialogue, A often interrupts during B action sequences.",
"A may also provide corrections and clarifications to fix B mistakes.",
"Producing feasible sequences of B actions requires a certain amount of planning, since blocks can only be placed in grid cells that are adjacent to other blocks or the ground, and floating structures (a common occurrence among the target structures in this corpus) can only be built if supporting blocks that are not part of the target structure are present when the floating blocks are being placed.",
"Despite these challenges, we show below that training models that use a rich representation of the world (Section 3) on sufficient amounts of diversified data (Section 4) produces promising initial results.",
"To generate items for this task, we follow a similar strategy to Narayan-Chen et al. (2019), who, as a first step towards designing a fully interactive Architect, define an Architect Utterance Generation Task, where models are presented with a particular human-human game context in which a human Architect produced an utterance and are evaluated based on how well they can generate an appropriate utterance.",
"Conversely, we define the Builder Action Prediction (BAP) Task as the task of predicting the sequence of actions (block placements and/or removals) that a human Builder performed at a particular point in a human-human game.",
"To evaluate models for the BAP task, we compare each model's predicted action sequence A m against the corresponding action sequence A h that the human builder performed at that point in the game.",
"Specifically, for each pair of model and human action sequences ( A m , A h ) , where A h = (cid:104) a (1) h , ...a ( k ) h (cid:105) led from a world state W before to a world state W h and A m = (cid:104) a (1) m , ...a ( l ) m (cid:105) led from the same W before to W m , we compute an F1 score over the net actions in A h and A m , and report a micro-average over all sequences in the test (or development) data.",
"Net actions ignore actions that were undone within the same sequence, e.g. if a block was placed and then removed.",
"We consider any a m action correct if the same action (involving the same grid cell and block color) occurs among the net actions in A h .",
"There are two reasons why we evaluate net rather than all actions: first, many structures contain floating blocks which require the placement of temporary placeholder blocks that are later removed.",
"Placeholders' colors are arbitrary, and there are often multiple possible locations where placeholders can be put; placeholder predictions should not be penalized, as long as they enable the correct target to be built.",
"Human Builders are also prone to making small mistakes that are immediately resolved (e.g. by removing blocks that were accidentally placed).",
"Evaluation should be robust to this noise in the ground truth sequences.",
"The F1 metric ignores sequence information because it is either implicit in cases where it matters (e.g. building a vertical stack of blocks from the ground up), or irrelevant (e.g. building a line of blocks on the ground).",
"Other metrics may also be suited for this task, but obvious choices such as an edit distance between W m and W h suffer from the problem that they favor models that place fewer blocks, since incorrect placements would incur twice the cost of no placements.",
"However, our current definition of when an action is correct is relatively harsh, and could be relaxed in a number of ways.",
"First, since it only considers an action correct if it matches a human action at the same grid cell, it penalizes cases where there are rotational equivalences between the built and the target structures (as may arise when the target has rotational symmetry).",
"It also ignores any translational equivalences (which are very common at the beginning of a dialogue when the initial structure is empty, and may also need to be taken into account when the action sequence passes through an intermediate state in which all blocks have been removed).",
"Second, looser F1 scores that evaluate actions only with regard to block locations (ignoring color) or colors (ignoring locations) might yield insight into how well models understand spatial relations, colors, or the number of blocks to be placed or removed.",
"We leave exploring such variants to future work.",
"While our evaluation allows us compare models directly and automatically against a common gold standard, it is important to keep in mind that such direct comparisons to human action sequences provide only a lower bound on performance because they are based on the assumption that",
"a) the human executed the instructions completely and correctly, and that",
"b) there is only one way to execute the instructions correctly.",
"But instructions are often vague or ambiguous: Place a red block on the ground next to the blue block may be resolved to any of four equally correct cells adjoining that block, and ideally, the evaluation metric should score them the same.",
"And human action sequences do not always correspond to a complete execution of the previous instruction, e.g. when B is interrupted by A or stops to ask a question: A : now it will be a diagonal staircase with 4 steps angling towards the middle A : if that makes sense B puts down a red block B : diagonal staircase with this orientation?",
"B puts down a red block A : towards where the yellow blocks are pointing B picks up 2 red blocks, puts down a red block 2.4 Related Work There is growing interest in situated collaborative scenarios involving instruction givers/followers with one-way (Hu et al., 2019; Suhr et al., 2019) and two-way (Kim et al., 2019; Ilinykh et al., 2019) communication.",
"Here, we compare our task to related work on instruction following, both generally and within Blocks World and Minecraft.",
"Instruction following: Prior approaches to instruction comprehension typically take a semantic parsing approach (Chen and Mooney, 2011; Artzi and Zettlemoyer, 2013; Andreas and Klein, 2015).",
"Semantic parsing components enable human-robot understanding (Tellex et al., 2011; Matuszek et al., 2013); some approaches to interactive robot design combine these architectures with physical robot exploration to enable online learning (Thoma-son et al., 2015, 2016, 2017).",
"The SCONE corpus (Long et al., 2016) features tasks in three domains requiring context-dependent sequential instruction understanding, in which a system is given a world containing several predefined objects and properties and has to predict the final world state by parsing instructions to intermediate logical forms.",
"Some papers have also applied neural action prediction models (Suhr and Artzi, 2018; Huang et al., 2019) to SCONE.",
"More recently, Vision-and-Language Navigation (VLN), (Anderson et al., 2018), and its dialog counterpart, Cooperative Vision-and-Dialog Navigation (CVDN) (Thomason et al., 2019), focus on instruction following and cooperative interactions in pho-torealistic navigation settings.",
"Since our dataset does not contain any logical forms, we also cannot use semantic parsing approaches, and have to resort to neural action prediction models.",
"However, Minecraft instructions are more challenging than the SCONE tasks because our action space is significantly larger and our utterances are more complex.",
"Minecraft dialogues are also more complex than the sequences of instructions in SCONE because we cannot assume that actions to be executed are described in the last utterance.",
"Minecraft dialogues are also more complex than those in CVDN, because they contain more turns, and because communication is asynchronous.",
"Moreover, construction differs fundamentally from navigation in that construction dynamically changes the environment.",
"While referring expressions in navigation can be safely assumed to refer to objects that exist in the world, construction instructions frequently refer to objects that need to be built by the agent.",
"And although more recent navigation tasks require real vision, their underlying world state space (as defined by fixed viewpoints and the underlying navigation graph) is just as highly discretized.",
"Our task does not require vision, but poses an arguably more challenging planning problem, since its action space is much larger (7623 possible actions vs. six actions in the vision-language navigation work).",
"Blocks World: There is a renewed interest in instruction comprehension in Blocks World scenarios.",
"Voxelurn (Wang et al., 2017) interfaces with human users and learns to understand descriptions of voxel structures of increasing complexity, but does so by mapping them down to a core programmatic language.",
"Bisk et al. (2016a,b, 2018) build models for understanding single-shot instructions that transform one world state to another using simulated 3D blocks.",
"Blocks are viewed from a fixed bird's-eye perspective, initialized randomly in the initial world state, and uniquely identifiable.",
"The varying Builder perspective and lack of easily identifiable referents, along with the need to understand utterances in a dialogue context, make our task a much more challenging problem.",
"Unlike traditional Blocks World, Minecraft allows blocks to float (requiring nonmonotonic action sequences where placement is followed by removal), or attach to any side of an existing block.",
"Minecraft: Combining semantic parsing with simulated human-robot interaction, Facebook CraftAssist is a dialogue-enabled framework with an associated dataset for semantic parsing of instructions in Minecraft (Gray et al., 2019; Jernite et al., 2019; Szlam et al., 2019).",
"Their setup enables two-way human-bot interactions in which a human architect can direct an automated builder using natural language to build complex structures.",
"To bootstrap a semantic parser, they synthetically generate (using a hand-defined grammar) and crowdsource natural language instructions paired with logical tree structures consisting of action primitives.",
"In addition to lacking such annotations, our work differs fundamentally in that our data is sourced from human-human dialogues; instructions are more ambiguous, dialogues have larger variety and Builder action sequences are noisier.",
"Similar to e.g. the models of Suhr and Artzi (2018) for the SCONE tasks, models for the Builder Action Prediction task need to predict an appropriate, variable-length, sequence of actions (block placements and removals) in a given discourse and game context and world state.",
"All our models (Figure 2) are based on a recurrent encoder-decoder architecture (Sutskever et al., 2014; Cho et al., 2014) in which a GRU-based encoder (bottom left box) captures the game context (dialogue and action his-tory), and a CNN-based encoder (top left box) captures the world state at each time step.",
"The decoder (right box) predicts one action per time step, based on the game history, the world state at that time, and the last action taken.",
"It consists of another GRU backbone over action sequences (bottom right), and a multi-class classifier that reads in the output of the GRU backbone as well as the world state encoding produced by the CNN to predict either the next action (block placement or removal) to be taken, or a special STOP token that terminates the action sequence.",
"The world state representation gets updated and re-encoded after each predicted action.",
"We now describe these components in more detail.",
"Since B only knows what blocks to place after receiving an instruction from A , we can view the game history as a non-empty sequence of previous utterances (by both players), possibly interleaved with sequences of actions that were taken by B in earlier turns of the game.",
"Our experiments examine the question of how much of this history should be given to our model, but all models examined in this paper treat the game history as a single sequence of tokens.",
"Similar to Narayan-Chen et al. (2019), we encode the dialogue history as a sequence of tokens in which each player's utterances are contained within speaker-specific start and end tokens ( (cid:104) A (cid:105) . . . (cid:104)\\ A (cid:105) or ( (cid:104) B (cid:105) . . . (cid:104)\\ B (cid:105)",
".).",
"We also represent B 's prior actions naively as tokens that capture the action type (placement or removal) and block color (e.g. as builder putdown red).",
"The 2 6 = 12 action tokens as well as the speaker tokens are encoded using 300-dimensional random vectors, while all other tokens are encoded as 300-dimensional pre-trained GloVe word em-beddings (Pennington et al., 2014).",
"The token em-beddings are passed through a GRU to produce a H -dim embedding ( H { 200 , 300 } ) of the dialogue history in the GRU's final hidden state.",
"The world state is the current grid configuration that is fed into the action prediction model at each time step.",
"We first describe how we represent the raw world state, before we explain how this representation is then encoded via a CNN-based architecture.",
"Input: the raw world state Minecraft blocks are unit cubes that can be placed at integer-valued (cid:104) x, y, z (cid:105) locations in a 3D grid; the Collaborative Building Task restricts these to a build region of size 11 9 11 .",
"Since we found it beneficial to explicitly capture empty grid cells, our baseline model represents each cell state as a 7-dim one-hot vector, yielding a 11 9 11 7 minimal world state representation encoding the presence (or absence) of blocks at any grid cell.",
"We also found it useful to capture the relative position of each cell with respect to B 's current position and orientation, as well as which cells were affected by B 's most recent actions, and augment this model in two ways: Action history weights: Each action affects a single grid cell.",
"Actions that follow each other often affect adjacent grid cells.",
"We encode information about the most recent actions in our world state representation as follows: Given the chronological sequence of all actions A = a (1) , a (2) ...a ( t 1) that took place before the t -th action to be predicted, we assign a real-valued weight ( i ) to each action a ( i ) (where ( i ) ( i +1) ), and include these action weights in the world state representation of the corresponding cells.",
"We truncate the action history to the last five elements, assign integer weights",
"1...5 to a ( t 5) , ..., a ( t 1) (and 0 to all a ( i<t 5) ), and then include these weights as a separate input feature in each cell.",
"If a cell was affected more than once by the last five actions, we only use the weight of the most recent action.",
"Our action weights do not distinguish between actions taken in the preceding action sequence and those in the current sequence.",
"Perspective coordinates: B needs to understand the spatial relations in A 's instructions.",
"Many of these relations (e.g. left in Figure 1) depend on B 's current position (cid:104) x B , y B , z B (cid:105) and orientation (pitch B [ 90 , ..., +90] , or vertical rotation, and yaw B [ 180 , ..., +180] , horizontal orientation).",
"Our models assume that spatial relations in an instruction are relative to B 's position at that time, and use that information to compute perspective coordinates.",
"We calculate the relative perspective coordinates (cid:104) x (cid:48) c , y (cid:48) c , z (cid:48) c (cid:105) of a cell c with absolute coordinates (cid:104) x c , y c , z c (cid:105) by moving the frame of reference from (cid:104) 0 , 0 , 0 (cid:105) to (cid:104) x B , y B , z B (cid:105) , and rotating it to account for B 's yaw and pitch: 2 (cid:104) x (cid:48) c , y (cid:48) c , z (cid:48) c (cid:105) = P Y (cid:104) x c x B , y c y B , z c z B (cid:105) We scale these perspective coordinates by a factor of .1 to keep their range closer to that of the cell 2 P = (cid:18) 1 0 0 0 cos B sin B 0 sin B cos B (cid:19) and Y = (cid:18) cos B 0 sin B 0 1 0 sin B 0 cos B (cid:19) state and action history weights.",
"Our full model represents each cell as an 11-dim vector (consisting of the 7-dim cell state, 1-dim action history weight and 3-dim perspective coordinates), and the entire grid (which serves as input to a CNN-based encoder) as a 11 11 9 11 tensor.",
"We refer to the grid at time step t as W ( t ) raw .",
"Output: a CNN-based encoding To obtain a representation of each grid cell, we feed the raw world state tensor W ( t ) raw of Section 3.3 through a multi-layer CNN that embeds each grid cell conditioned on its neighborhood and recent actions (if using action history weights).",
"The model consists of m 3d-conv layers with kernel size 3 (CNN 3 ), stride 1 and padding 1, followed by a ReLU activation function.",
"Between every successive pair of these layers is a 1 1 1 3d-conv layer (CNN 1 ) with stride 1 and no padding, for dimensionality reduction purposes, again followed by ReLU.",
"With W ( t ) 0 = W ( t ) raw , the first m 1 blocks of this model can be expressed as W ( t ) i = relu ( CNN i 1 ( relu ( CNN i 3 ( W ( t ) i 1 )))) .",
"The m 'th 3 3 3 3d-conv layer CNN m 3 computes the final world state representation W ( t ) m = relu ( CNN m 3 ( W ( t ) m 1 )) that is used to predict the next action.",
"The GRU backbone The GRU backbone of the decoder captures information about the current action sequence and the game history.",
"We initialize its hidden state with the final hidden state of the game history encoder RNN of Section 3.2.",
"Since the tensor representation of the grid is too unwieldy to be used as input to a recurrent net, we instead compute an explicit 11-dim representation a ( t 1) of the action taken at the last time step, consisting of three components: a 2-dim one-hot vector for the action type (placement or removal), a 6-dim one-hot vector for the block color (all zero for removals), and a 3-dim block location vector containing the absolute (cid:104) x, y, z (cid:105) coordinates of the cell where the action took place.",
"At the start of decoding, we use a zero vector as a start token.",
"These action vectors get passed through j dense linear layers with ReLU before being fed to the GRU.",
"Output: Next action prediction With seven possible actions per cell, there are 7623 possible actions (although only a small subset of these will be feasible at any point in time, a point that we will return to below).",
"Since our models need to predict a variable length sequence of actions, we also need a special STOP action that is not associated with a single cell, but terminates the sequence.",
"Our action prediction classifier has therefore two sub-components: a block action prediction model, and a stop prediction model.",
"The stop prediction model returns a single element, which we append to the vector returned by the block action prediction model before feeding it through a softmax layer to return the most likely next action.",
"Block actions scores: We use a CNN-based architecture with parameter sharing across cells to score each of the seven possible actions for every grid cell.",
"The input to this model consists of the CNN-based world state representation W ( t ) m (Sec-tion 3.3), as well as the decoder GRU's hidden state h ( t ) , concatenated to each cell's representation in W ( t ) m as additional channels.",
"This model consists of n 1 1 1 1 3d-conv layers followed by ReLU ( W (cid:48) ( t ) i = relu ( CNN i 1 ( W (cid:48) ( t ) i 1 ) ) and with the n th such 3d-conv layer with 7 output channels (and no ReLU): W (cid:48) ( t ) n = relu ( CNN n 1 ( W (cid:48) ( t ) n 1 )) , which is flattened into a 7623-dim vector of action scores.",
"STOP score: We also need to predict when an action sequence is complete.",
"While this decision needs access to the same information as the block action scorer, it also needs access to a (compact) global representation of the grid, since the STOP action is not cell-specific.",
"It also needs to know the uncertainty in the block action scorer, since STOP is more likely when it is less clear which block action should be performed, and vice versa.",
"We take the output of the penultimate layer in the block action scorer and apply max-pooling to every cell's vector representation, thus obtaining a single number for each of the 1089 cells.",
"We concatenate these numbers into a single vector and use that as input to the STOP prediction model, which consists of l dense linear layers (with ReLU after each layer except the last), where the l th layer has a single output W (cid:48)(cid:48) ( t ) l , the score for STOP.",
"a t = arg max(softmax( vec ( W (cid:48) ( t ) n ) W (cid:48)(cid:48) ( t ) l ))",
"The small size of the training set (3,709 examples) is a major limiting factor for training complex models.",
"Here, we explore ways of generating synthetic data to augment the size and variety of our data.",
"For each game log in the original training data, we generate twenty new game logs by combining the following data augmentation techniques: Utterance paraphrases: We generate paraphrases of the utterances in the dialogue by randomly substituting tokens with any of their synonyms in the hand-engineered synonym lexicon of Narayan-Chen et al. (2019).",
"Color substitutions: We permute block colors by applying one of the 6! possible permutations, chosen at random, to the entire game log.",
"These substitutions also change the language in the synthetic dialogues to reflect the updated colors.",
"Spatial transformations: Since the world contains no landmarks besides the built region, absolute coordinates are somewhat arbitrary.",
"We sample one (0, 90, -90, 180) rotation in the ground plane (affecting all (cid:104) x, z (cid:105) coordinates, plus B 's yaw and position) per synthetic log (subject to the constraint that the target still fit in the built region).",
"Experimental Setup Our training, test and development splits contain 3709, 1616, and 1331 Builder action sequences respectively.",
"We increase the training data to 7418 (2x), 14836 (4x) and 22254 (6x) items by sampling items from the synthetic data of Section 4. The average sequence length (in the development set) is 4.3 (with a std. deviation of 4.5).",
"Target structures in the test data do not appear in the training or development data.",
"We train models with AdamW (Loshchilov and Hutter, 2019) and weight decay regularization with a weight decay factor of 0.1.",
"We use a learning rate of 0.001 for the original data and a slightly lower learning rate of 0.0001 in the case of augmented data.",
"We use a batch size of 1.",
"During training, we use teacher forcing and minimize the sum of the cross entropy losses between each predicted and ground truth action sequence (the action sequence performed by the human).",
"We stop training early when loss on the held-out development set has increased monotonically for ten epochs.",
"We use greedy decoding (max. sequence length of 10) to generate action sequences, which seems to work better than beam search decoding (for fixed beam sizes between 5 and 20).",
"We report net action F1 (Section 2.3) on the test set.",
"Model Variants The world state representation of the baseline model (BAP-base) consists of block colors at absolute (cid:104) x, y, z (cid:105) coordinates.",
"We examine the effect of augmenting BAP-base first with action history weights, and then also with relative perspective coordinates (both described in Section 3.3).",
"For model hyperparameters, see Appendix A. Game History We experiment with three schemes for how much game history to provide to the models: H 1 includes A 's last utterance and any following B utterances.",
"H 2 includes all utterances after B 's penultimate action sequence.",
"H 3 includes all utterances after B 's penultimate action sequence interleaved with a token representation of B 's last action sequence.",
"If A 's last utterance was a standalone instruction, H 1 should be sufficient.",
"But prior discourse is often required: A instructions may span multiple utterances and can be interrupted by back-and-forth clarification dialogues.",
"At the same time, B 's next action sequence is often directly related to (or a continuation of) their previous actions.",
"This motivates H 2 and H 3 : by including utterances that sandwich B 's previous action sequence, we include additional A history and B context.",
"Finally, to investigate the degree to which previous B actions should be represented, H 3 augments H 2 with explicit representations of B 's actions (as described in Section 3.2).",
"For each cell in Tables 1 and 2, we first perform a grid search over model hyperparameters and select the best performing model on the development set, then report its performance on the test set.",
"Table 1 shows how the different game history and world state representations affect model performance.",
"We see that performance increases as action weights are added and as the amount of history is increased.",
"H 3 consistently performs well across all model variants.",
"Table 2 shows how different amounts of data augmentation affect performance.",
"We train each model variant with H 3 history on 2x, 4x and 6x augmented training data.",
"This increases BAP-base H 3 's performance from 14.6 to 17.0 (with 6x data).",
"With action history, performance increases from 19.7 to 20.0.",
"With perspective coordinates, performance increases from 18.8 to 21.2 (both with 4x data).",
"Perspective coordinates, thus, help with more training data (although it is unclear why performance drops again for the more complex models at 6x).",
"Our best model is the full BAP model with action weights, perspective coordinates, history H 3 and 4x augmented data (BAPH 3 , 4x ) with an F1 of 21.2.",
"This is significantly better than the 11.8 F1 of our baseline BAP model with history H 1 and without action history weights, perspective coordinates, or data augmentation (BAP-base H 1 ).",
"We also see an improvement in mean sequence length from 2.23 to 2.66, even if the latter is still much smaller than the mean gold sequence length of 4.3.",
"Infeasible Actions and Constrained Decoding In any given world state, only a small fraction of the 7623 actions are feasible: blocks can only be placed in locations that are currently empty and adjacent to existing blocks or on the ground, and blocks can only be removed from locations that are currently occupied.",
"Surprisingly, less than 1% of action sequences generated by any of our models contain one or more infeasible actions.",
"We can force our models to predict only feasible actions by multiplying the output of the block action prediction model (post softmax) with a bit mask over block actions that identifies which of the possible actions are feasible in the current world state, but this does not affect the F1 scores of either the baseline model or our best model.",
"We return to the development set to illustrate different aspects of BAPH 3 , 4x 's generated action sequences.",
"Figures 3 and 4 provide a few examples; more examples can be found in Appendix B. Colors: Our model is generally able to correctly identify colors of blocks to be placed.",
"While in many cases continuing the color from the previous Initial Generated Ground Truth A: same on the other side B: (places purple at (-2, 3, 1)) A: add one red block on top of that Figure 3: Example 1: After B places the rightmost purple block, A directs B to place another red block on top of it.",
"action sequence is sufficient, the model is also able to switch colors as needed based on A instructions.",
"Numbers: Our model can sometimes identify the number of blocks to be placed when instructions mention them.",
"But with vague instructions, the model struggles, stopping early or erroneously continuing long sequences of the same color.",
"Spatial relations: Our model usually predicts a reasonable ballpark of locations for the next action sequence.",
"While predicting correct locations exactly is still difficult, the model is usually able to distinguish below from on top of , and places blocks in the neighborhood of the true sequence.",
"Placements vs. removals: Finally, our model is able to both place and remove blocks somewhat appropriately based on dialogue context.",
"For instance, corrective utterances in the history ( sorry, my mis-take ) usually trigger the model to undo previous actions.",
"However, the model sometimes goes overboard: not knowing how much of the penultimate action sequence to remove, an entire sequence of correct blocks can be erroneously erased.",
"In the Minecraft Collaborative Building Task, Builders must be able to comprehend complex instructions in order to achieve their primary goal of building 3D structures.",
"To this end, we define the challenging subtask of Builder Action Prediction, tasking models with generating appropriate action sequences learned from the actions of human Builders.",
"Our models process the game history along with a 3D representation of the evolving world to predict actions in a sequence-to-sequence fashion.",
"We show that these models, especially when conditioned on a suitable amount of game history and trained on larger amounts of synthetically generated data, improve over naive baselines.",
"In the future, richer representations of the dialogue history (e.g. by using BERT (Devlin et al., 2019) or of past Builder actions) combined with de-noising of the human data and perhaps more exhaustive data augmentation should produce better output sequences.",
"For true interactivity, the Builder must be augmented with the capability to determine when and how to respond when it is too uncertain to act.",
"And, finally, an approach like the Speaker-Follower Models of Fried et al. (2018) could be used to train our Builder model and the Architect model of Narayan-Chen et al. (2019) jointly.",
"We would like to thank the reviewers for their valuable comments.",
"This work was supported by Contract W911NF-15-1-0461 with the US Defense Advanced Research Projects Agency (DARPA) Communicating with Computers Program and the Army Research Office (ARO).",
"Approved for Public Release, Distribution Unlimited.",
"The views expressed are those of the authors and do not reflect the of-ficial policy or position of the Department of Defense or the U.S. Government."
] | [
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other"
] |
[
"Multi-intent SLU can handle multiple intents in an utterance, which has attracted increasing attention.",
"However, the state-of-the-art joint models heavily rely on autoregressive approaches, resulting in two issues: slow inference speed and information leakage .",
"In this paper, we explore a non-autoregressive model for joint multiple intent detection and slot filling, achieving more fast and accurate.",
"Specifically, we propose a G lobalL ocally G raph I nteraction N etwork (GL-GIN) where a local slot-aware graph interaction layer is proposed to model slot dependency for alleviating uncoordinated slots problem while a global intent-slot graph interaction layer is introduced to model the interaction between multiple intents and all slots in the utterance.",
"Experimental results on two public datasets show that our framework achieves state-of-the-art performance while being 11.5 times faster.",
"Spoken Language Understanding (SLU) (Young et al., 2013) is a critical component in spoken dialog systems, which aims to understand user's queries.",
"It typically includes two sub-tasks: intent detection and slot filling (Tur and De Mori, 2011).",
"Since intents and slots are closely tied, dominant single-intent SLU systems in the literature (Goo et al., 2018; Li et al., 2018; Liu et al., 2019b; E et al., 2019; Qin et al., 2019; Teng et al., 2021; Qin et al., 2021b,c) adopt joint models to consider the correlation between the two tasks, which have obtained remarkable success.",
"Multi-intent SLU means that the system can handle an utterance containing multiple intents, which is shown to be more practical in the real-world scenario, attracting increasing attention.",
"To this end, Corresponding author.",
"Xu and Sarikaya (2013) and Kim et al. (2017) begin to explore the multi-intent SLU.",
"However, their models only consider the multiple intent detection while ignoring slot filling task.",
"Recently, Gangadharaiah and Narayanaswamy (2019) make the first attempt to propose a multi-task framework to joint model the multiple intent detection and slot filling.",
"Qin et al. (2020b) further propose an adaptive interaction framework (AGIF) to achieve fine-grained multi-intent information integration for slot filling, obtaining state-of-the-art performance.",
"Though achieving the promising performance, the existing multi-intent SLU joint models heavily rely on an autoregressive fashion, as shown in Figure",
"1(a), leading to two issues: Slow inference speed .",
"The autoregressive models make the generation of slot outputs must be done through the left-to-right pass, which cannot achieve parallelizable, leading to slow inference speed.",
"Information leakage .",
"Autoregressive models predict each word slot conditioned on the previously generated slot information (from left-to-right), resulting in leaking the bidirectional context information.",
"In this paper, we explore a non-autoregressive framework for joint multiple intent detection and slot filling, with the goal of accelerating inference speed while achieving high accuracy, which is shown in Figure",
"1(b).",
"To this end, we propose a G lobalL ocally G raphI nteraction N etwork (GL-GIN) where the core module is a proposed local slot-aware graph layer and global intent-slot interaction layer, which achieves to generate intents and slots sequence simultaneously and non-autoregressively.",
"In GL-GIN, a local slot-aware graph interaction layer where each slot hidden states connect with each other is proposed to explicitly model slot dependency, in order to alleviate uncoordinated slot problem (e.g., B-singer followed by I-song ) (Wu et al., 2020) due to the non-autoregressive fashion.",
"A global intent-slot graph interaction layer is further introduced to perform sentence-level intent-slot interaction.",
"Unlike the prior works that only consider the token-level intent-slot interaction, the global graph is constructed of all tokens with multiple intents, achieving to generate slots sequence in parallel and speed up the decoding process.",
"Experimental results on two public datasets MixSNIPS (Coucke et al., 2018) and MixATIS (Hemphill et al., 1990) show that our framework not only obtains state-of-the-art performance but also enables decoding in parallel.",
"In addition, we explore the pre-trained model (i.e., Roberta (Liu et al., 2019c)) in our framework.",
"In summary, the contributions of this work can be concluded as follows: (1) To the best of our knowledge, we make the first attempt to explore a non-autoregressive approach for joint multiple intent detection and slot filling; (2) We propose a global-locally graph-interaction network, where the local graph is used to handle uncoordinated slots problem while a global graph is introduced to model sequence-level intent-slot interaction; (3) Experiment results on two benchmarks show that our framework not only achieves the state-of-the-art performance but also considerably speeds up the slot decoding (up to 11 . 5 ); (4) Finally, we explore the pre-trained model in our framework.",
"With the pre-trained model, our model reaches a new state-of-the-art level.",
"For reproducibility, our code for this paper is publicly available at https://github.com/ yizhen20133868/GL-GIN.",
"Multiple Intent Detection Given input sequence x = ( x 1 , . . . , x n ), multiple intent detection can be defined as a multi-label classification task that outputs a sequence intent label o I = ( o I 1 , . . . , o Im ), where m is the number of intents in given utterance and n is the length of utterance.",
"Slot Filling Slot filling can be seen as a sequence labeling task that maps the input utterance x into a slot output sequence o S = ( o S 1 , . . . , o Sn ).",
"As shown in Figure",
"2(a), we describe the proposed framework, which consists of a shared self-attentive encoder ( 3 . 1 ), a token-level intent detection decoder ( 3 . 2 ) and a global-local graph-interaction graph decoder for slot filling ( 3 . 3 ).",
"Both intent detection and slot filling are optimized simultaneously via a joint learning scheme.",
"Following Qin et al. (2019), we utilize a self-attentive encoder with BiLSTM and self-attention mechanism to obtain the shared utterance representation, which can incorporate temporal features within word orders and contextual information.",
"BiLSTM The bidirectional LSTM (BiL-STM) (Hochreiter and Schmidhuber, 1997) have been successfully applied to sequence labeling tasks (Li et al., 2020, 2021).",
"We adopt BiLSTM to read the input sequence { x 1 , x 2 , . . . , x n } forwardly and backwardly to produce context-sensitive hidden states H = { h 1 , h 2 , . . . , h n } , by repeatedly applying the h i = BiLSTM ( emb ( x i ) , h i 1 , h i +1 ), where emb is embedding function.",
"Self-Attention Following Vaswani et al. (2017), we map the matrix of input vectors X R n d ( d represents the mapped dimension) to queries Q , keys K and values V matrices by using different linear projections.",
"Then, the self-attention output C R n d is a weighted sum of values: C = softmax (cid:18) QK (cid:62) d k (cid:19) V .",
"Inspired by Qin et al. (2019), we perform a token-level multi-label multi-intent detection, where we predict multiple intents on each token and the sentence results are obtained by voting for all tokens.",
"Specifically, we first feed the contextual encoding E into an intent-aware BiLSTM to enhance its task-specific representations: h It = BiLSTM (cid:0) e t , h It 1 , h It +1 (cid:1) .",
"where I t denotes the intent results at the t -th word; denotes the sigmoid activation function; W h and WI are the trainable matrix parameters.",
"Finally, the sentence intent results o Ik can be obtained by: o I = { o Ik | ( n (cid:88) i =1 1 [ I ( i,k ) > 0 . 5]) > n/ 2 } , (5) where I ( i,k ) represents the classification result of token i for o Ik .",
"We predict the label as the utterance intent when it gets more than half positive predictions in all n tokens.",
"For example, if I 1 = { 0 .",
"9 , 0 .",
"8 , 0 .",
"7 , 0 .",
"1 } , I 2 = { 0 .",
"8 , 0 .",
"2 , 0 .",
"7 , 0 .",
"4 } , I 3 = { 0 .",
"9 , 0 .",
"3 , 0 .",
"2 , 0 .",
"3 } , from three tokens, we get { 3 , 2 , 1 , 0 } positive votes ( > 0 . 5 ) for four intents respectively.",
"Thus the index where more than half of the votes ( > 3 / 2 ) were obtained was o I 1 and o I 3 , we predict intents o I = { o I 1 , o I 3 } .",
"One main advantage of our framework is the proposed global-locally graph interaction network for slot filling, which is a non-autoregressive paradigm, achieving the slot filling decoding in parallel.",
"In the following, we first describe the slot-aware LSTM ( 3 . 3 . 1 ) to obtain the slot-aware representations, and then show how to apply the global-locally graph interaction layer ( 3 . 3 . 2 ) for decoding.",
"We utilize a BiLSTM to produce the slot-aware hidden representation S = ( s 1 , . . . , s n ).",
"At each decoding step t , the decoder state s t calculating by: s t = BiLSTM (cid:0) I t || e t , s t 1 , s t +1 (cid:1) , (6) where e t denotes the aligned encoder hidden state and I t denotes the predicted intent information.",
"The proposed global-locally graph interaction layer consists of two main components: one is a local slot-aware graph interaction network to model dependency across slots and another is the proposed global intent-slot graph interaction network to consider the interaction between intents and slots.",
"In this section, we first describe the vanilla graph attention network.",
"Then, we illustrate the local slot-aware and global intent-slot graph interaction network, respectively.",
"Vanilla Graph Attention Network A graph attention network (GAT) (Velickovic et al., 2018) is a variant of graph neural network, which fuses the graph-structured information and node features within the model.",
"Its masked self-attention layers allow a node to attend to neighborhood features and learn different attention weights, which can automatically determine the importance and relevance between the current node with its neighborhood.",
"In particular, for a given graph with N nodes, one-layer GAT take the initial node features H = { h 1 , . . . , h N } , h n RF as input, aiming at producing more abstract representation, H (cid:48) = { h (cid:48) 1 , . . . , h (cid:48) N } , h (cid:48) n RF (cid:48) , as its output.",
"The attention mechanism of a typical GAT can be summarized as below: h (cid:48) i = || Kk =1 (cid:0) (cid:80) j N i kij W kh h j (cid:1) , (7) ij = exp(LeakyReLU ( a (cid:62) [ W h h i (cid:107) W h h j ] ) ) (cid:80) j (cid:48)N i exp(LeakyReLU (cid:16) a (cid:62) [ W h h i (cid:107) W h h (cid:48) j ] (cid:17) ) , (8) where W h RF (cid:48) F and a R 2 F (cid:48) are the trainable weight matrix; N i denotes the neighbors of node i (including i ); ij is the normalized attention coefficients and represents the nonlinearity activation function; K is the number of heads.",
"Local Slot-aware Graph Interaction Layer Given slot decode hidden representations S = ( s 1 , . . . , s n ), we construct a local slot-aware graph where each slot hidden node connects to other slots.",
"This allows the model to achieve to model the dependency across slots, alleviating the uncoordinated slots problem.",
"Specifically, we construct the graph G = ( V, E ) in the following way, Vertices We define the V as the vertices set.",
"Each word slot is represented as a vertex.",
"Each vertex is initialized with the corresponding slot hidden representation.",
"Thus, the first layer states vector for all nodes is S 1 = S = ( s 1 , . . . , s n ).",
"Edges Since we aim to model dependency across slots, we construct a slot-aware graph interaction layer so that the dependency relationship can be propagated from neighbor nodes to the current node.",
"Each slot can connect other slots with a window size.",
"For node S i , only { S i m , . . . , S i + m } will be connected where m is a hyper-parameter denotes the size of sliding window that controls the length of utilizing utterance context.",
"process at l -th layer can be defined as:",
"where N i is a set of vertices that denotes the connected slots.",
"After stacking L layer, we obtain the contextual slot-aware local hidden features SL +1 = { s L +1 1 , . . . , s L +1 n } Global Slot-Intent Graph Interaction Layer To achieve sentence-level intent-slot interaction, we construct a global slot-intent interaction graph where all predicted multiple intents and sequence slots are connected, achieving to output slot sequences in parallel.",
"Specifically, we construct the graph G = ( V, E ) in the following way, Vertices As we model the interaction between intent and slot token, we have n + m number of nodes in the graph where n is the sequence length and m is the number of intent labels predicted by the intent decoder.",
"The input of slot token feature is G [ S, 1] = SL +1 = { s L +1 1 , . . . , s L +1 n } which is produced by slot-aware local interaction graph network while the input intent feature is an embedding G [ I, 1] = { emb ( o I 1 ) , . . . , emb ( o Im ) } where emb is a trainable embedding matrix.",
"The first layer states vector for slot and intent nodes is G 1 = { G [ I, 1] , G [ S, 1] } = { emb ( o I 1 ) , . . . , emb ( o Im ) , s L +1 1 , . . . , s L +1 n } Edges There are three types of connections in this graph network.",
"intent-slot connection : Since slots and intents are highly tied, we construct the intent-slot connection to model the interaction between the two tasks.",
"Specifically, each slot connects all predicted multiple intents to automatically capture relevant intent information.",
"slot-slot connection : We construct the slot-slot connection where each slot node connects other slots with the window size to further model the slot dependency and incorporate the bidirectional contextual information.",
"intent-intent connection : Following Qin et al. (2020b), we connect all the intent nodes to each other to model the relationship between each intent, since all of them express the same utterance's intent.",
"After L layers' propagation, we obtain the final slot representation G [ S,L +1] for slot prediction.",
"Following Goo et al. (2018), we adopt a joint training model to consider the two tasks and update parameters by joint optimizing.",
"The intent detection objective is: CE( y, y ) = y log ( y ) + (1 y ) log (1 y ) , (13) L 1 (cid:44) n (cid:88) i =1 NI (cid:88) j =1 CE( y ( j,I ) i , y ( j,I ) i ) .",
"(14)",
"Similarly, the slot filling task objective is: L 2 (cid:44) n (cid:88) i =1 NS (cid:88) j =1 y ( j,S ) i log (cid:16) y ( j,S ) i (cid:17) , (15) where NI is the number of single intent labels and NS is the number of slot labels.",
"The final joint objective is formulated as: L = L 1 + L 2 , (16) where and are hyper-parameters.",
"We conduct experiments on two publicly available multi-intent datasets.",
"1 One is the MixATIS (Hemphill et al., 1990; Qin et al., 2020b), which includes 13,162 utterances for training, 756 utterances for validation and 828 utterances for testing.",
"Another is MixSNIPS (Coucke et al., 2018; Qin et al., 2020b), with 39,776, 2,198, 2,199 utterances for training, validation and testing.",
"The dimensionality of the embedding is 128 and 64 on ATIS and SNIPS, respectively.",
"The dimensionality of the LSTM hidden units is 256.",
"The batch size is 16.",
"The number of the multi head is 4 and 8 on MixATIS and MixSNIPS dataset, respectively.",
"All layer number of graph attention network is set to 2.",
"We use Adam (Kingma and Ba, 2015) to optimize the parameters in our model.",
"For all the experiments, we select the model which works the best on the dev set and then evaluate it on the test set.",
"All experiments are conducted at GeForce RTX 2080Ti and TITAN Xp.",
"We compare our model with the following best baselines: (1) Attention BiRNN.",
"Liu and Lane (2016) propose an alignment-based RNN for joint slot filling and intent detection; (2) Slot-Gated Atten.",
"Goo et al. (2018) propose a slot-gated joint model, explicitly considering the correlation between slot filling and intent detection; (3) Bi-Model.",
"Wang et al. (2018) propose the Bi-model to model the bi-directional between the intent detection and slot filling; (4) SF-ID Network.",
"E et al. (2019) proposes the SF-ID network to establish a direct connection between the two tasks; (5) Stack-Propagation.",
"Qin et al. (2019) adopt a stack-propagation framework to explicitly incorporate intent detection for guiding slot filling; (6) Joint Multiple ID-SF .",
"Gangadharaiah and Narayanaswamy (2019) propose a multi-task framework with slot-gated mechanism for multiple intent detection and slot filling; (7) AGIF Qin et al. (2020b) proposes an adaptive interaction network to achieve the fine-grained multi-1 We adopt the cleaned verison that removes the repeated sentences in original dataset, which is available at https:// github.com/LooperXX/AGIF.",
"intent information integration, achieving state-of-the-art performance.",
"Following Goo et al. (2018) and Qin et al. (2020b), we evaluate the performance of slot filling using F1 score, intent prediction using accuracy, the sentence-level semantic frame parsing using overall accuracy.",
"Overall accuracy measures the ratio of sentences for which both intent and slot are predicted correctly in a sentence.",
"Table 1 shows the results, we have the following observations: (1) On slot filling task, our framework outperforms the best baseline AGIF in F1 scores on two datasets, which indicates the proposed local slot-aware graph successfully models the dependency across slots, so that the slot filling performance can be improved.",
"(2) More importantly, compared with the AGIF , our framework achieves +2.7% and 1.2% improvements for MixATIS and MixSNIPS on overall accuracy, respectively.",
"We attribute it to the fact that our proposed global intent-slot interaction graph can better capture the correlation between intents and slots, improving the SLU performance.",
"One of the core contributions of our framework is that the decoding process of slot filling can be significantly accelerated with the proposed",
"non-autoregressive mechanism.",
"We evaluate the speed by running the model on the MixATIS test data in an epoch, fixing the batch size to 32.",
"The comparison results are shown in Table 2.",
"We observe that our model achieves the 8.2, 10.8 and 11.5 speedup compared with SOTA models stack-propagation , Joint Multiple ID-SF and AGIF .",
"This is because that their model utilizes an autoregressive architecture that only performs slot filling word by word, while our non-autoregressive framework can conduct slot filling decoding in parallel.",
"In addition, it's worth noting that as the batch size gets larger, GL-GIN can achieve better acceleration where our model could achieve 17.2 speedup compared with AGIF when batch size is 64.",
"We study the effectiveness of the local slot-aware interaction graph layer with the following ablation.",
"We remove the local graph interaction layer and directly feed the output of the slot LSTM to the global intent-slot graph interaction layer.",
"We refer it to w/o local GAL in Tabel 3.",
"We can clearly observe that the slot F1 drops by 1.5% and 1.2% on MixATIS and MixSNIPS datasets.",
"We attribute this to the fact that local slot-aware GAL can capture the slot dependency for each token, which helps to alleviate the slot uncoordinated problems.",
"A qualitative analysis can be founded at Section 4.5.6.",
"In order to verify the effectiveness of slot-intent global interaction graph layer, we remove the global interaction layer and utilizes the output of local slot-aware GAL module for slot filling.",
"It is named as w/o Global Intent-slot GAL in Table 3.",
"We can observe that the slot f1 drops by 0.9%, 1.3%, which demonstrates that intent-slot graph in-Model MixATIS MixSNIPS Overall(Acc) Slot(F1) Intent(Acc) Overall(Acc) Slot(F1) Intent(Acc) w/o Local Slot-Aware GAL 41.1 86.8 74.0 71.4 93.7 95.2 w/o Global Intent-Slot GAL 40.9 87.4 75.5 71.7 93.6 95.5 + More Parameters 41.9 87.7 75.0 73.0 93.8 95.5 w/o Global-locally GAL 40.5 86.3 75.2 70.2 92.9 95.0 GL-GIN 43.5 88.3 76.3 75.4 94.9 95.6 Table 3: Ablation Experiment.",
"teraction layer can capture the correlation between multiple intents, which is beneficial for the semantic performance of SLU system.",
"Following Qin et al. (2020b), we replace multiple LSTM layers (2-layers) as the proposed global-locally graph layer to verify that the proposed global-locally graph interaction layer rather than the added parameters works.",
"Table 3 ( more parameters ) shows the results.",
"We observe that our model outperforms more parameters by 1.6% and 2.4% overall accuracy in two datasets, which shows that the improvements come from the proposed Global-locally graph interaction layer rather than the involved parameters.",
"Instead of using the whole global-locally graph interaction layer for slot filling, we directly leverage the output of slot-aware LSTM to predict each token slot to verify the effect of the global-locally graph interaction layer.",
"We name the experiment as w/o Global-locally GAL in Tabel 3.",
"From the results, We can observe that the absence of global GAT module leads to 3.0% and 5.2% overall accuracy drops on two datasets.",
"This indicates that the MixATIS MixSNIPS 0 20 40 60 80 40.8 74.2 43.5 75.4 50.0 80.7 53.6 82.6 AGIF GL-GIN AGIF + Roberta GL-GIN + Roberta Figure 4: Overall accuracy Performances with Roberta .",
"global-locally graph interaction layer encourages our model to leverage slot dependency and intent information, which can improve SLU performance.",
"To better understand how global-local graph interaction layer affects and contributes to the final result, we visualize the attention value of the Global intent-slot GAL.",
"As is shown in Figure 3, we visualize the dependence of the word 6 on context and intent information.",
"We can clearly observe that token 6 obtains information from all contextual tokens.",
"The information from and 10 helps to predict the slot, where the prior autoregressive models cannot be achieved due to the generation word by word from left to right.",
"We conduct qualitative analysis by providing a case study that consists of two sequence slots which are generated from AGIF and our model.",
"From Table 4, for the word 6 , AGIF predicts its slot label as O incorrectly.",
"This is because that AGIF only models its left information, which makes it hard to predict 6 is a time slot.",
"In contrast, our model predicts the slot label correctly.",
"We attribute this to the fact that our proposed global intent-slot interaction layer can model bidirectional contextual information.",
"In addition, our framework predicts the word slot am correctly while AGIF predicts it incorrectly (I-airport name follows B-depart time), indicating that the proposed local slot-texts What airlines off from LOVE field between 6 and 10 am on June sixth AGIFO O O O B-fromlocairportname I-fromlocairportname O O O B-depart time end time I-tolocairport name O B-depart date month name B-depart date day number GL-GINO O O O B-fromlocairportname I-fromlocairportname O B-depart time start time O B-depart time end time I-depart time end time O B-depart date month name B-depart date day number Table 4: Case study.",
"Following Qin et al. (2019), we explore the pre-trained model in our framework.",
"We replace the self-attentive encoder by Roberta (Liu et al., 2019c) with the fine-tuning approach.",
"We keep other components identical to our framework and follow Qin et al. (2019) to consider the first subword label if a word is broken into multiple subwords.",
"Figure 4 gives the result comparison of AGIF , GL-GIN and two models with Roberta on two datasets.",
"We have two interesting observations.",
"First, the Roberta-based model remarkably well on two datasets.",
"We attribute this to the fact that pre-trained models can provide rich semantic features, which can help SLU.",
"Second, GL-GIN + Roberta outperforms AGIF+Roberta on both datasets and reaches a new state-of-the-art performance, which further verifies the effectiveness of our proposed framework.",
"Slot Filling and Intent Detection Recently, joint models (Zhang and Wang, 2016; Hakkani-Tur et al., 2016; Goo et al., 2018; Li et al., 2018; Xia et al., 2018; E et al., 2019; Liu et al., 2019b; Qin et al., 2019; Zhang et al., 2019; Wu et al., 2020; Qin et al., 2021b; Ni et al., 2021) are proposed to consider the strong correlation between intent detection and slot filling have obtained remarkable success.",
"Compared with their work, we focus on jointly modeling multiple intent detection and slot filling while they only consider the single-intent scenario.",
"More recently, multiple intent detection can handle utterances with multiple intents, which has attracted increasing attention.",
"To the end, Xu and Sarikaya (2013) and Kim et al. (2017) begin to explore the multiple intent detection.",
"Gangadharaiah and Narayanaswamy (2019) first apply a multi-task framework with a slot-gate mechanism to jointly model the multiple intent detection and slot filling.",
"Qin et al. (2020b) propose an adaptive interaction network to achieve the fine-grained multiple intent information integration for token-level slot filling, achieving the state-of-the-art performance.",
"Their models adopt the autoregressive architecture for joint multiple intent detection and slot filling.",
"In contrast, we propose a non-autoregressive approach, achieving parallel decoding.",
"To the best of our knowledge, we are the first to explore a non-autoregressive architecture for multiple intent detection and slot filling.",
"Graph Neural Network for NLP Graph neural networks that operate directly on graph structures to model the structural information, which has been applied successfully in various NLP tasks.",
"Linmei et al. (2019) and Huang and Carley (2019) explore graph attention network (GAT) (Velickovic et al., 2018) for classification task to incorporate the dependency parser information.",
"Cetoli et al. (2017) and Liu et al. (2019a) apply graph neural network to model the non-local contextual information for sequence labeling tasks.",
"Yasunaga et al. (2017) and Feng et al. (2020a) successfully apply a graph network to model the discourse information for the summarization generation task, which achieved promising performance.",
"Graph structure are successfully applied for dialogue direction (Feng et al., 2020b; Fu et al., 2020; Qin et al., 2020a, 2021a).",
"In our work, we apply a global-locally graph interaction network to model the slot dependency and interaction between the multiple intents and slots.",
"In this paper, we investigated a non-autoregressive model for joint multiple intent detection and slot filling.",
"To this end, we proposed a global-locally graph interaction network where the uncoordinated-slots problem can be addressed with the proposed local slot-aware graph while the interaction between intents and slots can be modeled by the proposed global intent-slot graph.",
"Experimental results on two datasets show that our framework achieves state-of-the-art performance with 11 .",
"5 times faster than the prior work.",
"This work was supported by the National Key R&D Program of China via grant 2020AAA0106501 and the National Natural Science Foundation of China (NSFC) via grant 61976072 and 61772153.",
"This work was also supported by the Zhejiang Lab's International Talent Fund for Young Professionals."
] | [
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"other",
"other"
] |
[
"In this paper, we present a substantial step in better understanding the SOTA sequence-to-sequence (Seq2Seq) pretraining for neural machine translation (NMT).",
"We focus on studying the impact of the jointly pretrained decoder, which is the main difference between Seq2Seq pretraining and previous encoder-based pretraining approaches for NMT.",
"By carefully designing experiments on three language pairs, we find that Seq2Seq pretraining is a double-edged sword: On one hand, it helps NMT models to produce more diverse translations and reduce adequacy-related translation errors.",
"On the other hand, the discrepancies between Seq2Seq pretraining and NMT finetuning limit the translation quality (i.e., domain discrepancy) and induce the over-estimation issue (i.e., objective discrepancy).",
"Based on these observations, we further propose simple and effective strategies, named in-domain pretraining and input adaptation to remedy the domain and objective discrepancies, respectively.",
"Experimental results on several language pairs show that our approach can consistently improve both translation performance and model robustness upon Seq2Seq pretraining.",
"There has been a wealth of research over the past several years on self-supervised pre-training for natural language processing tasks (Devlin et al., 2019; Liu et al., 2019; Conneau et al., 2020; Jiao et al., 2020a), which aims at transferring the knowledge of large-scale unlabeled data to downstream tasks with labeled data.",
"Despite its success in other understanding and generation tasks, self-supervised pretraining is not a common practice in machine translation (MT).",
"One possible reason is the architecture discrepancy between pretraining model Work was mainly done when Wenxuan Wang and Yongchang Hao were interning at Tencent AI Lab.",
"Transformer encoder-decoder ).",
"To remedy the architecture gap, several researchers propose sequence-to-sequence (Seq2Seq) pretraining models for machine translation, e.g., MASS (Song et al., 2019) and BART (Zhu et al., 2019; Lewis et al., 2020).",
"Recently, Liu et al. (2020) extend BART by training on large-scale multilingual language data (i.e., mBART), leading to significant improvement on translation performance across various language pairs.",
"While previous pretraining approaches for NMT generally focus only on Transformer encoder (Lample and Conneau, 2019), mBART pretrains a complete autoregressive Seq2Seq model by recovering the input sentences that are noised by masking phrases.",
"One research question naturally arises: how much does the jointly pretrained decoder matter?",
"In this work, we present a substantial step in better understanding the SOTA Seq2Seq pretraining model.",
"We take a fine-grained look at the impact of the jointly pretrained decoder by carefully designing experiments, which are conducted on several WMT and IWSLT benchmarks across language pairs and data scales using the released mBART-25 model (Liu et al., 2020).",
"By carefully examining the translation outputs, we find that ( 2 . 2 ): Jointly pretraining decoder produces more diverse translations with different word orders, which calls for multiple references to accurately evaluate its effectiveness on large-scale data.",
"Jointly pretraining decoder consistently reduces adequacy-related translation errors over pretraining encoder only.",
"Although jointly pretraining decoder consistently improves translation performance, we also identify several side effects due to the discrepancies between pretraining and finetuning (2.3): domain discrepancy : Seq2Seq pretraining model is generally trained on general domain 2591 data while the downstream translation models are trained on specific domains (e.g., news).",
"The domain discrepancy requires more efforts for the finetuned model to adapt the knowledge in pretrained models to the target in-domain.",
"objective discrepancy : NMT training learns to translate a sentence from one language to an-other, while Seq2Seq pretraining learns to reconstruct the input sentence.",
"The objective discrepancy induces the over-estimation issue and tends to generate more hallucinations with noisy input.",
"The over-estimation problem along with more copying translations induced by Seq2Seq pretraining (Liu et al., 2021) make it suffer from more serious beam search degradation problem.",
"To remedy the above discrepancies, we propose simple and effective strategies, named in-domain pretraining and input adaptation in finetuning (3).",
"In in-domain pretraining, we propose to reduce the domain shift by continuing the pretraining of mBART on in-domain monolingual data, which is more similar in data distribution with the downstream translation tasks.",
"For input adaptation, we add noises to the source sentence of bilingual data, and combine the noisy data with the clean bilingual data for finetuning.",
"We expect the perturbed inputs to better transfer the knowledge from pretrained model to the finetuned model.",
"Experimental results on the benchmark datasets show that in-domain pretraining improves the translation performance significantly and input adaptation enhances the robustness of NMT models.",
"Combining the two approaches gives us the final solution to a well-performing NMT system.",
"Extensive analyses show that our approach can narrow the domain discrepancy, particularly improving the translation of low-frequency words.",
"Besides, our approach can alleviate the over-estimation issue and mitigate the beam search degradation problem of NMT models.",
"In this section, we conduct experiments and analyses to gain a better understanding of current Seq2Seq pretraining for NMT.",
"We first present the translation performance of the pretrained components (2.2), and then show the discrepancy between pretraining and finetuning (2.3).",
"high-resource WMT19 English-German (W19 En-De, 36.8M instances), and low-resource WMT16 English-Romanian (W16 En-Ro, 610K instances) and IWSLT17 English-French (I17 En-Fr, 250K instances).",
"To eliminate the effect of different languages, we also sample a subset from WMT19 EnDe (i.e., W19 En-De (S), 610K instances) to construct a low-resource setting for ablation studies.",
"For the proposed in-domain pretraining , we collect the NewsCrawl monolingual data as the in-domain data for WMT tasks (i.e., 200M English, 200M German, and 60M Romanian), and the TED monolingual data for IWSLT tasks (i.e., 1M English and 0.9M French).",
"Since the monolingual data from TED is rare, we expand it with pseudo in-domain data, OpenSubtitle (Tiedemann, 2016), which also provides spoken languages as TED.",
"Specifically, we use the latest 200M English subtitles and all the available French subtitles (i.e., 100M).",
"We follow Liu et al. (2020) to use their released sentence-piece model (Kudo and Richardson, 2018) with 250K subwords to tokenize both bilingual and monolingual data.",
"We evaluate the translation performance using the Sacre-BLEU (Post, 2018).",
"Models.",
"As for the pretrained models, we adopt the officially released mBART25 model (Liu et al., 2020) 1 , which is trained on the large-scale Com-monCrawl (CC) monolingual data in 25 languages.",
"As a result, the vocabulary is very large in mBART25, including 250K words.",
"mBART uses a larger Transformer model which extends both the encoder and decoder of Transformer-Big to 12 layers.",
"We use the parameters of either encoder or encoder-decoder from the pretrained mBART25 for finetuning.",
"Then, in the following section, we use pretrained encoder, and pretrained encoder-decoder for short.",
"We follow the officially recommended finetuning setting with dropout of 0 .",
"3 , label smoothing of 0 .",
"2 , and warm-up of 2500 steps.",
"We finetune on the high-resource task for 100K steps and the low-resource tasks for 40K steps, respectively.",
"We also list the results of vanilla Transformer without pretraining as baseline.",
"The vocabulary is built on the bilingual data, hence is much smaller (e.g., En-De 44K) than mBART25.",
"Specifically, for high-resource tasks we train 6L-6L Transformer-Big with 460K tokens per batch for 30K steps, and 1 https://github.com/pytorch/fairseq/ tree/main/examples/mbart 2592 Pretraining W19 En-De W19 En-De (S) W16 En-Ro I17 En-Fr Model Enc Dec ) ( ) ( ) ( ) ( no pretrain 39.6 41.0 29.7 30.1 34.5 34.3 37.3 38.0 mBART 39.4 40.1 26.7 27.1 30.0 29.6 35.3 35.1 X 40.8 41.1 31.7 33.5 35.0 35.6 38.4 38.4 X X 40.8 41.4 35.3 35.7 37.1 37.4 39.2 40.2 Table 1: BLEU scores on MT benchmarks.",
"The main difference of Seq2Seq pretraining models (e.g., mBART) from previous pretraining models (e.g., BERT and XLM-R) lies in whether to train the decoder together.",
"In this section, we investigate the impact of the jointly pretrained decoder in terms of BLEU scores, and provide some insights on where the jointly pretrained decoder improves performance.",
"Translation Performance.",
"Table 1 lists the BLEU scores of pretraining different components of NMT models, where we also include the results of NMT models trained on the datasets from scratch (no pretrain).",
"For fair comparisons, we use the same vocabulary size for all variants of pretraining NMT components.",
"We use the pretrained word embedding for the model variant with randomly initialized encoder-decoder (Enc: , Dec: ), which makes it possible to train 12L-12L NMT models on the small-scale datasets.",
"Accordingly, the results of (Enc: , Dec: ) is worse than the no pretrain model due to the larger vocabulary (e.g., 250K vs. 44K) that makes the model training more difficult.",
"Pretraining encoder only (Enc: X , Dec: ) significantly improves translation performance, which is consistent with the findings in previous studies (Zhu et al., 2019; Weng et al., 2020).",
"We also conduct experiments with the pretrained encoder XLM-R (Conneau et al., 2020), which achieves comparable performance as the mBART encoder (see Appendix A.1).",
"For fair comparisons, we only use the mBART encoder in the following sections.",
"Encouragingly, jointly pretraining decoder can further improve translation performance, although the improvement is not significant on the large-scale Src Sie bezichtigt die Erwachsenen Kinderhandel zu betreiben.",
"WMT19 En-De data.",
"These results seem to provide empirical support for the common cognition pretraining is less effective on large-scale data.",
"However, we have some interesting findings of the generated outputs, which may draw different conclusions.",
"To eliminate the effect of language and data bias, we use the full set and sampled subset of WMT19 De ) En data as representative large-scale and small-scale data scenarios.",
"Table 2 shows some translation examples.",
"Firstly, jointly pretraining decoder can produce good translations that are different in the word order from the ground-truth reference (e.g., traf-ficking in children vs. child trafficking\"), thus are assigned low BLEU scores. This may explain why jointly pretraining decoder only marginally improves performance on large-scale data. Secondly, jointly pretraining decoder can reduce translation errors, especially on small-scale data (e.g., correct the mistaken translation of It to She ).",
"We empirically validate the above two findings in the following experiments.",
"Impact on Translation Diversity.",
"We follow Du et al. (2021) to better evaluate the translation quality for different word orders using multiple references.",
"We use the test set released by Ott et al. (2018), which consists of 10 human translations for 500 sentences taken from the WMT14 En ) De test set.",
"As shown in Table 3, the pretrained decoder achieves more significant improvement in all cases when measured by multiple references.",
"These results provide empirical support for our claim that jointly pretraining decoder produces more diverse translations with different word orders, which can be better measured by multiple references.",
"These results may renew our cognition of pretraining, that is, they are also effective on large-scale data when evaluated more accurately .",
"Impact on Adequacy.",
"We conduct a human evaluation to provide a more intuitive understanding of how jointly pre-training decoder improves translation quality.",
"Specifically, we ask two annotators to annotate under-translation, mis-translation and over-translation on 100 sentences randomly sampled from WMT19 De ) En test set.",
"As listed in Table 4, inheriting the pretrained decoder re-Unigram Distribution Unigram Distribution L og F r e qu e n c y -6.5 -5.5 -4.5 -3.5 -2.5 Word Index 0 10000 20000 30000 WMT19: En CC: En NC: En Unigram Distribution L og F r e qu e n c y -6.5 -5.5 -4.5 -3.5 Word Index 0 10000 20000 30000 WMT19: En CC: En NC: En L og F r e qu e n c y -6.5 -5.5 -4.5 -3.5 -2.5 Word Index 0 10000 20000 30000 CC: En WMT19: En Figure 1: Word distributions of English corpora from general domain (i.e., CC data) and in-domain (i.e., WMT19 En-De news domain), respectively.",
"duces more translation errors on small data than on large data, which is consistent with the results of BLEU score in Table 1.",
"Interestingly, inheriting only the pretrained encoder introduces more over-translation errors on small data, which can be solved by combining the pretrained decoder.",
"One possible reason is that inheriting only the pretrained encoder excessively enlarges the impact of source context.",
"2 This problem does not happen on large data, since the large amount of in-domain data can balance the relation between encoder and decoder to accomplish the translation task well.",
"Although Seq2Seq pretraining consistently improves translation performance across data scales, we find several side effects of Seq2Seq pretraining due to the discrepancy between pretraining and finetuning.",
"In this section, we present two important discrepancies: domain discrepancy and objective discrepancy .",
"Unless otherwise stated, we report results on WMT19 En-De test set using small data.",
"Seq2Seq pretraining model is generally trained on general domain data while the downstream translation models are trained on specific domains (e.g., news).",
"Such a domain discrepancy requires more efforts for the finetuned models to adapt the knowledge in pretrained models to the target in-domain.",
"We empirically show the domain discrepancy in terms of lexical distribution and domain classifier.",
"2 Tu et al. (2017a) showed that more impact of source context leads to over-translation errors.",
"2021), we first plot the word distributions of English corpora from general domain (i.e., CC data) and in-domain (i.e., WMT19 En-De news domain) to study their difference at the lexicon level.",
"The words are ranked according to their frequencies in the WMT19 En-De training data.",
"As shown in Figure 1, we observe a clear difference between WMT news data and CC data in the long tail region, which is supposed to carry more domain-specific information.",
"Accordingly, there will be a domain shift from pretraining to finetuning.",
"Domain Classifier for Test Data.",
"We further demonstrate that the test data also follows a consistent domain as the training data.",
"To distinguish general domain and in-domain, we build a domain classifier based on the WMT19 En-De training data and the CC data.",
"We select a subset from the WMT training data with some trusted data (Wang et al., 2018; Jiao et al., 2020b, 2022), which includes 22404 sample from WMT newstest2010-2017 (see Appendix A.2 for details).",
"Specifically, we select 1.0M samples from the WMT training data and the CC data, respectively, to train the domain classifier.",
"The newstest2018 is combined with an equally sized subset of CC data for validation.",
"We adopt the domain classifier to classify each sample in the test sets of WMT19 En-De.",
"As shown in Table 5, most of the sentences (e.g., 70% 80%) are recognized as WMT news domain, which demonstrates the domain consistency between the training data and test data in the downstream tasks.",
"The learning objective discrepancy between Seq2Seq pretraining and NMT training is that NMT learns to translate a sentence from one language to another, while Seq2Seq pretraining learns to reconstruct the input sentence (Liu et al., 2021).",
"In this section, we study the side effects of the objective discrepancy by evaluating the predicting behaviors that are highly affected by the learning objective.",
"the average probability at each time step across a set of sentence pairs.",
"To evaluate the capability of LM modeling on the target language, we also follow Wang and Sennrich (2020) to consider a set of distractor translations, which are random sentences from the CC data that match the corresponding reference translation in length.",
"Figure 2 plots model uncertainties for both references ( Y ) and distractors ( Y ).",
"We find that jointly pretraining decoder significantly improves model certainty after the first few time steps (Figure 2a).",
"As for the distractors, pretraining encoder only results in certainties even lower than training from scratch (Figure 2b), which suggests that the corresponding NMT model is more dominated by the source context.",
"It reconfirms the finding in our human evaluation (Table 4).",
"In contrast, jointly pretraining decoder leads to a significant improvement of certainties, suggesting that the pretrained decoder tends to induce the over-estimation issue of NMT models.",
"A possible reason is that Seq2Seq pretraining does not establish the connection between languages, such that its strong capability of LM modeling still recognizes the distractor as a valid target sentence even though it is mismatched with the source sentence in semantics.",
"Hallucination under Perturbation.",
"One translation problem associated with over-estimation is hallucination (Wang and Sennrich, 2020), where NMT models generate fluent translation but is unrelated to the input.",
"In this section, we follow Lee et al. (2018) to evaluate the model's tendency of generating hallucination under noisy input, to which NMT models are highly sensitive (Be-linkov and Bisk, 2018).",
"Specifically, we employ 2595 Pretrain FPI (%) RSM (%) Enc Dec 4 BLEUHUP 4 BLEUHUP -1.3 0.5 -8.8 2.4 X -0.3 0.5 -8.3 0.5 X X -3.2 7.8 -17.8 15.5 Table 6: BLEU change of model performance under perturbed inputs over the standard inputs, and hallucinations under perturbation (HUP) score.",
"two different perturbation strategies: (1) First position insertion (FPI) that inserts a single additional input token into the source sequence, which can completely divorce the translation from the input sentence (Lee et al., 2018).",
"(2) Random span masking (RSM) that simulates the noisy input in the Seq2Seq pretraining of mBART (Liu et al., 2020).",
"We follow Lee et al. (2018) to count a translation as hallucination under perturbation (HUP) when: (1) BLEU between reference sentence and translation of unperturbed sentence is bigger than 5 and (2) BLEU between the translation of perturbed sentence and the translation of unperturbed sentence is lower than 3.",
"We calculate the percentage of hallucination as the HUP score.",
"Table 6 lists the BLEU change and HUP score for the perturbed inputs.",
"As expected, jointly pretraining decoder is less robust to perturbed inputs (more decline of BLEU scores), and produces more hallucinations than the other two model variants.",
"Beam Search Problem.",
"One commonly-cited weakness of NMT model is the beam search problem, where the model performance declines as beam size increases (Tu et al., 2017b).",
"Previous studies demonstrate that over-estimation is an important reason for the beam search problem (Ott et al., 2018; Cohen and Beck, 2019).",
"We revisit this problem for NMT models with Seq2Seq pretraining, as shown in Table",
"7. We also list the ratio of copying tokens in translation outputs (i.e., directly copy source words to target side without translation) for different beam sizes, which has been shown as a side effect of Seq2Seq pretraining models (Liu et al., 2021).",
"As seen, jointly pretraining decoder suffers from more serious beam search degradation problem, which reconfirms the connection between beam search problem and overestimation.",
"In addition, larger beam size introduces more copying tokens than the other model variants (i.e., 19.4 vs. 13.9, 12.9), which also links copying behaviors associated with Seq2Seq pretraining to the beam search problem.",
"To bridge the above gaps between Seq2Seq pretraining and finetuning, we introduce in-domain pretraining and input adaptation to improve the translation quality and model robustness.",
"In-Domain Pretraining.",
"To bridge the domain gap, we propose to continue the training of mBART (Liu et al., 2020) on the in-domain monolingual data.",
"Specifically, we first remove spans of text and replace them with a mask token.",
"We mask 35% of the words in each sentence by random sampling a span length according to a Poisson distribution ( \u0000 = 3 . 5 ).",
"We also permute the order of sentences within each instance.",
"The training objective is to reconstruct the original sentence at the target side.",
"We expect the in-domain pretraining to reduce the domain shift by re-pretraining on the in-domain data, which is more similar in data distribution with the downstream translation tasks.",
"Input Adaptation in Finetuning.",
"To bridge the objective gap and improve the robustness of models, we propose to add noises (e.g., mask, delete, permute) to the source sentences during finetuning, and keep target sentences as original ones.",
"Empirically, we add noises to 10% of the words in each source sentence, and combine the noisy data with the clean data by the ratio of 1:9, which are used to finetune the pretraining model.",
"We expect the introduction of perturbed inputs in finetuning can help to better transfer the knowledge from pretrained model to the finetuned model, thus alleviate over-estimation and improve the model robustness.",
"domain pretraining, and the combination of these two approaches, respectively.",
"For input adaptation, it achieves comparable translation quality as the general domain pretrained model and significantly reduces the ratio of HUP, indicating the enhancement of model robustness.",
"In-domain pretraining generally improves the translation quality but does not make the model more robust.",
"On the contrary, it may increase the ratio of HUP in some cases (e.g., En ) Ro 5.6 vs. 8.2).",
"Conducting input adaptation right after in-domain pretraining will combine the advantages of these two approaches, and improve both the translation quality and model robustness.",
"The effectiveness of our approaches, especially input adaptation, is more significant when evaluated with multiple references, as shown in Table",
"9. In-Domain Only.",
"Given the promising performance of in-domain pretraining, we investigate whether pretraining on in-domain data only can also obtain significant improvement.",
"We report the results in Table",
"10. We can observe that pretrain-Approach W19 En-De (S) W16 En-Ro ) ( ) ( Baseline 26.7 27.1 30.0 29.6 In-Domain 35.2 35.7 36.1 36.3 Table 10: BLEU scores of in-domain pretraining only.",
"ing solely on the in-domain data can improve the translation performance noticeably over the models without pretraining.",
"However, the improvement is less competitive than the pretrained mBART25 (e.g., En ) Ro: 36.1 v.s. 37.1 in Table 8), which may result from the much larger scale of multilingual data used in general pretraining.",
"We provide some insights into how our approach improves model performance over general pretraining.",
"We report results on WMT19 En ) De test set using small-scale data.",
"Narrowing Domain Gap.",
"Since the difference of lexical distribution between general domain and in-domain data mainly lies in the long tail region (see Figure 1), we study how our approach performs on low-frequency words.",
"Specifically, we calculate the word accuracy of the translation outputs for WMT19 En-De (S) by the compare-mt 3 tool.",
"We follow previous studies (Wang et al., 2021; Jiao et al., 2021) to divide words into three 3 https://github.com/neulab/compare-mt 2597 Approach Frequency Low Med High Baseline 36.8 45.3 57.5 General 44.5 54.3 64.2 + In-Domain 46.2 54.3 64.9 Table 11: F-measures of word prediction for different frequencies that are calculated in the bilingual data.",
"categories based on their frequency in the bilingual data, including High: the most 3,000 frequent words; Medium: the most 3,001-12,000 frequent words; Low: the other words.",
"Table 11 lists the results.",
"The improvements on low-frequency words are the major reason for the performance gains of in-domain pretraining, where it outperforms general pretraining on the translation accuracy of low/medium/highfrequency words by 1.7, 0.0, and 0.7 BLEU scores, respectively.",
"These findings confirm our hypothesis that in-domain pretraining can narrow the domain gap with in-domain data, which is more similar in the lexical distribution as the test sets.",
"Alleviating Over-Estimation.",
"Figure 3 shows the impact of our approach on model uncertainty.",
"Clearly, our approach successfully alleviates the over-estimation issue of general pretraining in both the groundtruth and distractor scenarios.",
"Mitigating Beam Search Degradation.",
"We recap the beam search degradation problem with the application of our approaches in Table 12.",
"The input adaptation approach can noticeably reduce the performance decline when using a larger beam size (e.g., from -1.8 to -0.9), partially due to a reduction of copying tokens in generated translations (e.g., from 19.4% to 15.3%).",
"Although in-domain pretraining does not alleviate the beam search degradation problem, it can be combined with input adaptation to build a well-performing NMT system.",
"Pretraining for NMT.",
"Previous pretraining approaches for NMT generally focus on how to effectively integrate pretrained BERT (Devlin et al., 2019) or GPT (Radford et al., 2019) to NMT models.",
"For example, Yang et al. (2020) propose a concerted training framework, and Weng et al. (2020) propose a dynamic fusion mechanism and a distillation paradigm to acquire knowledge from BERT and GPT.",
"understanding of how Seq2Seq pretraining model works for NMT, and propose a simple and effective approach to improve model performance based on these observations.",
"Intermediate Pretraining.",
"Our in-domain pretraining approach is related to recent successes on intermediate pretraining and intermediate task selection in NLU tasks.",
"For example, Ye et al. (2021) investigate the influence of masking policies in intermediate pretraining.",
"Poth et al. (2021) explore to select tasks for intermediate pretraining.",
"Closely related to our work, Gururangan et al. (2020) propose to continue the pretraining of ROBERTA (Liu et al., 2019) on task-specific data.",
"Inspired by these findings, we employ in-domain pretraining to narrow the domain gap between general Seq2Seq pretraining and NMT training.",
"We also show the necessity of target-side monolingual data on in-domain pretraining (see Appendix A.3), which has not been studied in previous works of in-domain pretraining.",
"In this paper we provide a better understanding of Seq2Seq pretraining for NMT by showing both the benefits and side effects.",
"We propose simple and effective approaches to remedy the side effects by 2598 bridging the gaps between Seq2Seq pretraining and NMT finetuning, which further improves translation performance and model robustness.",
"Future directions include validating our findings on more Seq2Seq pretraining models and language pairs.",
"The work described in this paper was supported by the key program of fundamental research from Shenzhen Science and Technology Innovation Commission (No. JCYJ20200109113403826) and the Research Grants Council of the Hong Kong Special Administrative Region, China (CUHK 14210920 of the General Research Fund)."
] | [
"method",
"method",
"result",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"objective",
"result",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"method",
"method",
"abstain",
"method",
"result",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"method",
"abstain",
"result",
"objective",
"abstain",
"other"
] |
[
"Building NLP systems that serve everyone requires accounting for dialect differences.",
"But dialects are not monolithic entities: rather, distinctions between and within dialects are captured by the presence, absence, and frequency of dozens of dialect features in speech and text, such as the deletion of the copula in He run-ning.",
"In this paper, we introduce the task of dialect feature detection, and present two multitask learning approaches, both based on pretrained transformers.",
"For most dialects, large-scale annotated corpora for these features are unavailable, making it difficult to train recognizers.",
"We train our models on a small number of minimal pairs, building on how linguists typically define dialect features.",
"Evaluation on a test set of 22 dialect features of Indian English demonstrates that these models learn to recognize many features with high accuracy, and that a few minimal pairs can be as effective for training as thousands of labeled examples.",
"We also demonstrate the downstream applicability of dialect feature detection both as a measure of dialect density and as a dialect classifier.",
"Dialect variation is a pervasive property of language, which must be accounted for if we are to build robust natural language processing (NLP) systems that serve everyone.",
"Linguists do not characterize dialects as simple categories, but rather as collections of correlated features (Nerbonne, 2009), such as the one shown in Figure 1; speakers of any given dialect vary regarding which features they employ, how frequently, and in which contexts.",
"In comparison to approaches that classify speakers or documents across dialects (typically using meta-data such as geolocation), the feature-based perspective has several advantages: (1) allowing for fine-grained comparisons of speakers or documents Work done while at Google Research.",
"The main challenge for recognizing dialect features computationally is the lack of labeled data.",
"Annotating dialect features requires linguistic expertise and is prohibitively time-consuming given the large number of features and their sparsity.",
"In dialectology, large-scale studies of text are limited to features that can be detected using regular expressions of surface forms and parts-of-speech, e.g., PRP DT for the copula deletion feature in Figure 1; many features cannot be detected with such patterns (e.g. OBJECT FRONTING , EXTRANEOUS ARTICLE ).",
"Furthermore, part-of-speech tagging is unreliable in many language varieties, such as re-1 https://ewave-atlas.org .",
"gional and minority dialects (Jrgensen et al., 2015; Blodgett et al., 2016).",
"As dialect density correlates with social class and economic status (Sahgal and Agnihotri, 1988; Rickford et al., 2015; Grogger et al., 2020), the failure of language technology to cope with dialect differences may create alloca-tional harms that reinforce social hierarchies (Blod-gett et al., 2020).",
"In this paper, we propose and evaluate learning-based approaches to recognize dialect features.",
"We focus on Indian English, given the availability of domain expertise and labeled corpora for evaluation.",
"First, we consider a standard multitask classification approach, in which a pretrained transformer (Vaswani et al., 2017) is fine-tuned to recognize a set of dialect features.",
"The architecture can be trained from two possible sources of supervision: (1) thousands of labeled corpus examples, (2) a small set of minimal pairs , which are hand-crafted examples designed to highlight the key aspects of each dialect feature (as in the typical example field of Figure 1).",
"Because most dialects have little or no labeled data, the latter scenario is more realistic for most dialects.",
"We also consider a multitask architecture that learns across multiple features by encoding the feature names, similar to recent work on few-shot or zero-shot multitask learning (Lo-geswaran et al., 2019; Brown et al., 2020).",
"It is possible to detect individual dialect features: several features can be recognized with reasonably high accuracy.",
"Our best models achieve a macro-AUC of .",
"848 across ten grammatical features for which a large test set is available.",
"This performance can be obtained by training on roughly five minimal pairs per feature.",
"Minimal pairs are significantly more effective for training than a comparable number of corpus examples.",
"Dialect feature recognizers can be used to rank documents by their density of dialect features, enabling within-dialect density computation for Indian English and accurate classification between Indian and U.S. English.",
"We develop methods for detecting 22 dialect features associated with Indian English.",
"Although India has over 125 million English speakers making it the world's second largest English-speaking population there is relatively little NLP research focused on Indian English.",
"Our methods are not designed exclusively for specific properties of Indian English; many of the features that are associated with Indian English are also present in other dialects of English.",
"We use two sources of data in our study: an annotated corpus ( 2.1) and a dataset of minimal pairs ( 2.2).",
"For evaluation, we use corpus annotations exclusively.",
"The features are described in Table 1, and our data is summarized in Table",
"2. 2.1 Corpus Annotations The International Corpus of English (ICE; Greenbaum and Nelson, 1996) is a collection of corpora of world varieties of English, organized primarily by the national origin of the speakers/writers.",
"We focus on annotations of spoken dialogs (S1A-001 S1A-090) from the Indian English subcorpus (ICE-India).",
"The ICE-India subcorpus was chosen in part because it is one of the only corpora with large-scale annotations of dialect features.",
"To contrast Indian English with U.S. English ( 4), we use the Santa Barbara Corpus of Spoken American English (Du Bois et al., 2000) that constitutes the ICE-USA subcorpus of spoken dialogs.",
"Lange features.",
"The first set of annotations come from Claudia Lange (2012), who annotated 10 features in 100 transcripts for an analysis of discourse-driven syntax in Indian English, such as topic marking and fronting.",
"We use half of this data for training (50 transcripts, 9392 utterances), and half for testing (50 transcripts, 9667 utterances).",
"Extended features.",
"To test a more diverse set of features, we additionally annotated 18 features on a set of 300 turns randomly selected from the conversational subcorpus of ICE-India, 2 as well as 50 examples randomly selected from a secondary dataset of sociolinguistic interviews (Sharma, 2009) to ensure diverse feature instantiation.",
"We selected our 18 features based on multiple criteria: 1) prevalence in Indian English based on the dialectology literature, 2) coverage in the data (we started out with a larger set of features and removed those with fewer than two occurrences), 3) diversity of linguistic phenomena.",
"The extended 2 We manually split turns that were longer than two clauses, resulting in 317 examples.",
"features overlap with those annotated by Lange, yielding a total set of 22 features.",
"Annotations were produced by consensus from the first two authors.",
"To measure interrater agreement, a third author (JE) independently re-annotated 10% of the examples, with Cohen's = 0 .",
"79 (Cohen, 1960).",
"3 2.2 Minimal Pairs For each of the 22 features in Table 1, we created a small set of minimal pairs.",
"The pairs were created by first designing a short example that demonstrated the feature, and then manipulating the example so that the feature is absent.",
"This negative example captures the envelope of variation for the feature, demonstrating a site at which the feature could be applied (Labov, 1972).",
"Consequently, 3 Our annotations will be made available at https:// dialectfeatures.page.link/annotations .",
"negative examples in minimal pairs carry more information than in the typical annotation scenario, where absence of evidence does not usually imply evidence of absence.",
"In our minimal pairs, the negative examples were chosen to be acceptable in standard U.S. and U.K. English, and can thus be viewed as situating dialects against standard varieties.",
"Here are some example minimal pairs: ARTICLE OMISSION : chair is black the chair is black FOCUS only : I was there yesterday only I was there just yesterday .",
"NON-INITIAL EXISTENTIAL : every year inflation is there every year there is inflation .",
"For most features, each minimal pair contains exactly one positive and one negative example.",
"However, in some cases where more than two variants are available for an example (e.g., for the feature INVARIANT TAG (isn't it, no, na) ), we provide multiple positive examples to illustrate different variants.",
"For Lange's set of 10 features, we provide a total of 113 unique examples; for the 18 extended features, we provide a set of 208 unique examples, roughly split equally between positives and negatives.",
"The complete list of minimal pairs is included in Appendix D. y x 1 [ CLS ] article omission [ SEP ] Chair is black.",
"[ SEP ] 0 [ CLS ] article omission [ SEP ] The chair is black.",
"[ SEP ] 0 [ CLS ] article omission [ SEP ] I was there yesterday only.",
"[ SEP ] . . . .",
". .",
"1 [ CLS ] focus only [ SEP ] I was there yesterday only.",
"[ SEP ] 0 [ CLS ] focus only [ SEP ] I was there just yesterday.",
"[ SEP ] 0 [ CLS ] focus only [ SEP ] Chair is black.",
"[ SEP ] . . . .",
". .",
"We train models to recognize dialect features by fine-tuning the BERT-base uncased transformer architecture (Devlin et al., 2019).",
"We consider two strategies for constructing training data, and two architectures for learning across multiple features.",
"Minimal pairs.",
"We apply a simple procedure to convert minimal pairs into training data for classification.",
"The positive part of each pair is treated as a positive instance for the associated feature, and the negative part is treated as a negative instance.",
"Then, to generate more data, we also include elements of other minimal pairs as examples for each feature: for instance, a positive example of the RESUMPTIVE OBJECT PRONOUN feature would be a negative example for FOCUS only , unless the example happened to contain both features (this was checked manually).",
"In this way, we convert the minimal pairs into roughly 113 examples per feature for Lange's features and roughly 208 examples per feature for the extended features.",
"The total number of unique surface forms is still 113 and 208 respectively.",
"Given the lack of labeled data for most dialects of the world, having existing minimal pairs or collecting a small number of minimal pairs is the most realistic data scenario.",
"based on these labeled instances.",
"We use 50 of the ICE-India transcripts annotated by Lange, which consists of 9392 labeled examples (utterances) per feature.",
"While we are lucky to have such a large resource for the Indian English dialect, this high-resource data scenario is rare.",
"Multihead.",
"In this architecture, which is standard for multitask classification, we estimate a linear prediction head for each feature, which is simply a vector of weights.",
"This is a multitask architecture, because the vast majority of model parameters from the input through the deep BERT stack remain shared among dialect features.",
"The prediction head is then multiplied by the BERT embedding for the [ CLS ] token to obtain a score for a feature's applicability to a given instance.",
"DAMTL.",
"Due to the few-shot nature of our prediction task, we also consider an architecture that attempts to exploit the natural language descriptions of each feature.",
"This is done by concatenating the feature description to each element of the minimal pair.",
"The instance is then labeled for whether the feature is present.",
"This construction is shown in Figure",
"2. Prediction is performed by learning a single linear prediction head on the [ CLS ] token.",
"We call this model description-aware multitask learning , or DAMTL.",
"Model details.",
"Both architectures are built on top of the BERT-base uncased model, which we fine-tune by cross-entropy for 500 epochs (due to the small size of the training data) using the Adam optimizer (Kingma and Ba, 2014), batch size of 32 and a learning rate of 10 5 , warmed up over the first 150 epochs.",
"Annotations of dialect features were not used for hyperparameter selection.",
"Instead, the hyperparameters were selected to maximize the discriminability between corpora of Indian and U.S. English, as described in 5.2.",
"All models trained in less than two hours on a pod of four v2 TPU chips, with the exception of DAMTL on corpus examples, which required up to 18 hours.",
"In dialectology, regular expression pattern matching is the standard tool for recognizing dialect features (e.g., Nerbonne et al., 2011).",
"For the features Supervision: Corpus examples Minimal pairs Dialect feature DAMTL Multihead DAMTL Multihead FOCUS itself * 0 .",
"described in Table 1, we were able to design regular expressions for only five.",
"4 Prior work sometimes relies on patterns that include both surface forms and part-of-speech (e.g., Bohmann, 2019), but part-of-speech cannot necessarily be labeled automatically for non-standard dialects (Jrgensen et al., 2015; Blodgett et al., 2016), so we consider only regular expressions over surface forms.",
"In this section, we present results on the detection of individual dialect features.",
"Using the features shown in Table 1, we compare supervision sources (corpus examples versus minimal pairs) and classification architectures (multihead versus DAMTL) as described in",
"3. To avoid tuning a threshold for detection, we report area under the ROC curve (ROC-AUC), which has a value of .",
"5 for random guessing and 1 for perfect prediction.",
"5 4.1 Results on Lange Data and Features We first consider the 10 syntactic features from Lange (2012), for which we have large-scale annotated data: the 100 annotated transcripts from the ICE-India corpus are split 50/50 into training and test sets.",
"As shown in Table 3, it is possible to achieve a Macro-AUC approaching .85 overall with multihead predictions on minimal pair examples.",
"This is promising, because it suggests the possibility of recognizing dialect features for which we lack labeled corpus examples and such low-data 4 Features: FOCUS itself , FOCUS only , NON-INITIAL EXISTENTIAL , INVARIANT TAG (isn't it, no, na) , and GENERAL EXTENDER and all .",
"Table 7 lists all regular expressions.",
"5 Results for area under the precision-recall (AUPR) curve are shown in Appendix C. According to this metric, minimal pairs are less effective than the full training set of corpus examples, on average.",
"The multihead architecture outperforms DAMTL on both corpus examples and minimal pairs.",
"In an ablation, we replaced the feature descriptions with non-descriptive identifiers such as Feature 3.",
"This reduced the Macro-AUC from to .",
"80 with corpus examples, and to .",
"76 with minimal pairs (averaged over five random seeds).",
"We also tried longer feature descriptions, but this did not improve performance.",
"Unsurprisingly, the lexical features (e.g., FOCUS itself ) are easiest to recognize.",
"The more syntactical features (e.g., COPULA OMISSION , RESUMPTIVE OBJECT PRONOUN ) are more difficult, although some movement-based features (e.g., LEFT DISLOCATION , RESUMPTIVE SUBJECT PRONOUN ) can be recognized accurately.",
"Qualitative model comparison.",
"We conducted a qualitative comparison of three models: regular expressions and two versions of the multihead model, one trained on corpus examples and another trained on minimal pairs.",
"Table 4 includes illustrative examples for the Lange data and features where models make different predictions.",
"We find that the minimal pair model is better able to account for rare cases (e.g. use of non-focus only in Example 1), likely as it was trained on a few carefully selected set of examples illustrating positives and negatives.",
"Both multihead models are able to account for disfluencies and restarts, in contrast to regular expressions (Example 2).",
"Our analysis shows that several model errors are accounted for by difficult examples (Example 3: is there followed by isn't; Example 6: restart mistaken for left dislocation) or the lack of contextual information available to the model (Example 4 & 7: truncated examples).",
"Please see Appendix B for more details and random samples of model predictions.",
"Learning from fewer corpus examples.",
"The minimal pair annotations consist of 113 examples; in contrast, there are 9392 labeled corpus examples, requiring far more effort to create.",
"We now consider the situation when the amount of labeled data is reduced, focusing on the Lange features (for which labeled training data is available).",
"As shown in Figure 3, even 5000 labeled corpus examples do not match the performance of training on roughly 5 minimal pairs per feature.",
"Corpus examples stratified by feature.",
"One reason that subsampled datasets yield weaker results is that they lack examples for many features.",
"To enable a more direct comparison of corpus examples and minimal pairs, we created a set of strat-ified datasets of corpus examples, such that the number of positive and negative examples for each feature exactly matches the minimal pair data.",
"Averaged over ten such random stratified samples, the multihead model achieves a Macro-AUC of .",
"790 ( = 0 . 029 ), and DAMTL achieves a Macro-AUC of .",
"722 ( = . 020 ).",
"These results are considerably worse than training on an equivalent number of minimal pairs, where the multihead model achieves a Macro-AUC of .",
"848 and DAMTL achieves a Macro-AUC of .",
"783 .",
"This demonstrates the utility of minimal pairs over corpus examples for learning to recognize dialect features.",
"Next, we consider the extended features, for which we have sufficient annotations for testing but not training (Table 1).",
"Here we compare the DAMTL and multihead models, using minimal pair data in both cases.",
"As shown in Table 5, performance on these features is somewhat lower than on the Lange features, and for several features, at least one of the recognizers does worse than chance: DIRECT OBJECT PRO-DROP , EXTRANEOUS ARTICLE , MASS NOUNS AS COUNT NOUNS .",
"These features seem to require deeper syntactic and semantic analysis, which may be difficult to learn from a small number of minimal pairs.",
"On the other extreme, features with a strong lexical signature are recognized with high accuracy: GENERAL EXTENDER and all , FOCUS itself , FOCUS only .",
"These three features can also be recognized by regular expressions, as can NON-INITIAL EXISTENTIAL .",
"6 However, for a number of other features, it is possible to learn a fairly accurate recognizer from just five minimal pairs: ARTICLE OMISSION , INVERSION IN EMBEDDED CLAUSE , LEFT DISLOCATION , LACK OF INVERSION IN WH-QUESTIONS .",
"Many dialect features can be automatically recognized with reasonably high discriminative power, as measured by area under the ROC curve.",
"However, there are also features that are difficult to recognize: particularly, features of omission (such as DIRECT OBJECT PRO-DROP and PREPOSITION OMISSION ), and the more semantic features such as MASS NOUNS AS COUNT NOUNS .",
"While some features can also be identified through regular expressions (e.g., FOCUS only ), there are many features that can be learned but cannot be recognized by regular expressions.",
"We now move from individual features to aggregate measures of dialect density.",
"A dialect density measure (DDM) is an aggregate over multiple dialect features that tracks the vernacularity of a passage of speech or text.",
"Such measures are frequently used in dialectology (Van Hofwegen and Wolfram, 2010), and are also useful in research on education (e.g., Craig and Washington, 2002).",
"Recently, a DDM was used to evaluate the performance of speech recognition systems by the density of AAVE features (Koenecke et al., 2020).",
"The use of DDMs reflects the reality that speakers construct individual styles drawing on linguistic repertoires such as dialects to varying degrees (Benor, 2010).",
"This necessitates a more nuanced description for speakers and texts than a discrete dialect category.",
"Following prior work (e.g., Van Hofwegen and Wolfram, 2010) we construct dialect density measures from feature detectors by counting the predicted number of features in each utterance, and dividing by the number of tokens.",
"For the learning-based feature detectors (minimal pairs and corpus examples), we include partial counts from the detection probability; for the regular expression detectors, we simply count the number of matches and dividing by the number of tokens.",
"In addition, we construct a DDM based on a document classifier: we train a classifier to distinguish Indian English from U.S. English, and then use its predictive probability as the DDM.",
"These DDMs are then compared on two tasks: distinguishing Indian and U.S. English, and correlation with the density of expert-annotated features.",
"The classifier is trained by fine-tuning BERT, using a prediction head on the [ CLS ] token.",
"One application of dialect feature recognizers is to rank documents based on their dialect density, e.g. to identify challenging cases for evaluating downstream NLP systems, or for dialectology research.",
"We correlate the dialect density against the density of expert-annotated features from Lange (2012), both measured at the transcript-level, and report the Spearman rank-correlation .",
"As shown in Table 6, the document classifier performs poorly: learning to distinguish Indian and U.S. English offers no information on the density of Indian dialect features, suggesting that the model is attending to other information, such as topics or entities.",
"The feature-based model trained on labeled examples performs best, which is unsurprising because it is trained on the same type of features that it is now asked to predict.",
"Performance is weaker when the model is trained from minimal pairs.",
"Minimal pair training is particularly helpful on rare features, but offers far fewer examples on the high-frequency features, which in turn dominate the DDM scores on test data.",
"Regular expressions perform well on this task, because we happen to have regular expressions for the high-frequency features, and because the precision issues are less problematic in aggregate when the DDM is not applied to non-dialectal transcripts.",
"Another application of dialect feature recognizers is to classify documents or passages by dialect (Dunn, 2018).",
"This can help to test the performance of downstream models across dialects, assessing dialect transfer loss (e.g., Blodgett et al., 2016), as well as identifying data of interest for manual dialectological research.",
"We formulate a classification problem using the ICE-India and the Santa Barbara Corpus (ICE-USA).",
"Each corpus is divided into equal-size training and test sets.",
"The training corpus was also used for hyperparameter selection for the dialect feature recognition models, as described in 3.2.",
"The dialect classifier was constructed by building on the components from 5.1.",
"For the test set, we measure the D (cid:48) (D-prime) statistic (Macmil-lan and Creelman, 1991), D (cid:48) = IN US (cid:113) 12 ( 2 IN + 2 US ) .",
"This statistic, which can be interpreted similarly to a Z -score, quantifies the extent to which a metric distinguishes between the two populations.",
"We also report classification accuracy; lacking a clear way to set a threshold, for each classifier we balance the number of false positives and false negatives.",
"As shown in Table 6, both the document classifier and the corpus-based feature detection model (trained on labeled examples) achieve high accuracy at discriminating U.S. and Indian English.",
"The D (cid:48) discriminability score is higher for the document classifier, which is trained on a cross-entropy objective that encourages making confident predictions.",
"Regular expressions suffer from low precision because they respond to surface cues that may be present in U.S. English, even when the dialect feature is not present (e.g., the word only, the phrase is there).",
"Dialect classification.",
"Prior work on dialect in natural language processing has focused on distinguishing between dialects (and closely-related lan-guages).",
"For example, the VarDial 2014 shared task required systems to distinguish between nation-level language varieties, such as British versus U.S. English, as well as closely-related language pairs such as Indonesian versus Malay (Zampieri et al., 2014); later evaluation campaigns expanded this Ranking Classification Dialect density measure D (cid:48) acc.",
"set to other varieties (Zampieri et al., 2017).",
"In general, participants in these shared tasks have taken a text classification approach; neural architectures have appeared in the more recent editions of these shared tasks, but with a few exceptions (e.g., Bernier-Colborne et al., 2019), they have not outperformed classical techniques such as support vector machines.",
"Our work differs by focusing on a specific set of known dialect features, rather than document-level classification between dialects, which aligns with the linguistic view of dialects as bundles of correlated features (Nerbonne, 2009) and tracks variable realization of features within dialect usage.",
"Discovering and detecting dialect features.",
"Machine learning feature selection techniques have been employed to discover dialect features from corpora.",
"For example, Dunn (2018, 2019) induces a set of constructions (short sequences of words, parts-of-speech, or constituents) from a neutral corpus, and then identifies constructions with distinctive distributions over the geographical subcor-pora of the International Corpus of English (ICE).",
"In social media, features of African American Vernacular English (AAVE) can be identified by correlating linguistic frequencies with the aggregate demographic statistics of the geographical areas from which geotagged social media was posted (Eisen-stein et al., 2011; Stewart, 2014; Blodgett et al., 2016).",
"In contrast, we are interested in detecting predefined dialect features from well-validated resources such as dialect atlases.",
"Along these lines, Jrgensen et al. (2015) and Jones (2015) designed lexical patterns to identify non-standard spellings that match known phonological variables from AAVE (e.g., sholl sure'), demonstrating the presence of these variables in social media posts from regions with high proportions of African Americans.",
"Blodgett et al. (2016) use the same geography-based approach to test for phonological spellings and constructions corresponding to syntactic variables such as habitual be ; Hovy et al. (2015) show that a syntactic feature of Jutland Danish can be linked to the geographical origin of product reviews.",
"These approaches have focused mainly on features that could be recognized directly from surface forms, or in some cases, from part-of-speech (POS) sequences.",
"In contrast, we show that it is possible to learn to recognize features from examples, enabling the recognition of features for which it is difficult or impossible to craft surface or POS patterns.",
"Minimal pairs in NLP.",
"A distinguishing aspect of our approach is the use of minimal pairs rather than conventional labeled data.",
"Minimal pairs are well known in natural language processing from the Winograd Schema (Levesque et al., 2012), which is traditionally used for evaluation, but Kocijan et al. (2019) show that fine-tuning on a related dataset of minimal pairs can improve performance on the Winograd Schema itself.",
"A similar idea arises in counterfactually-augmented data (Kaushik et al., 2019) and contrast sets (Gardner et al., 2020), in which annotators are asked to identify the minimal change to an example that is sufficient to alter its label.",
"However, those approaches use counterfactual examples to augment an existing training set, while we propose minimal pairs as a replacement for large-scale labeled data.",
"Minimal pairs have also been used to design controlled experiments and probe neural models' ability to capture various linguistic phenomena (Gulordava et al., 2018; Ettinger et al., 2018; Futrell et al., 2019; Gardner et al., 2020; Schuster et al., 2020).",
"Finally, Liang et al. (2020) use contrastive explanations as part of an active learning framework to improve data effi-ciency.",
"Our work shares the objective of Liang et al. (2020) to improve data efficiency, but is methodologically closer to probing work that uses minimal pairs to represent specific linguistic features.",
"We introduce the task of dialect feature detection and demonstrate that it is possible to construct dialect feature recognizers using only a small number of minimal pairs in most cases, just five positive and negative examples per feature.",
"This makes it possible to apply computational analysis to the many dialects for which labeled data does not exist.",
"Future work will extend this approach to multiple dialects, focusing on cases in which features are shared across two or more dialects.",
"This lays the groundwork for the creation of dialect-based checklists (Ribeiro et al., 2020) to assess the performance of NLP systems across the diverse range of linguistic phenomena that may occur in any given language.",
"Our objective in building dialect feature recognizers is to aid developers and researchers to effectively benchmark NLP model performance across and within different dialects, and to assist social scientists and dialectologists studying dialect use.",
"The capability to detect dialectal features may enable developers to test for and mitigate any unintentional and undesirable biases in their models towards or against individuals speaking particular dialects.",
"This is especially important because dialect density has been documented to correlate with lower socioeconomic status (Sahgal and Agnihotri, 1988).",
"However, this technology is not without its risks.",
"As some dialects correlate with ethnicities or countries of origin, there is a potential dual use risk of the technology being used to profile individuals.",
"Dialect features could also be used as predictors in downstream tasks; as with other proxies of demographic information, this could give the appearance of improving accuracy while introducing spurious correlations and imposing disparate impacts on disadvantaged groups.",
"Hence we recommend that developers of this technology consider downstream use cases, including malicious use and misuse, when assessing the social impact of deploying and sharing this technology.",
"The focus on predefined dialect features can introduce a potential source of bias if the feature set is oriented towards the speech of specific subcommunities within a dialect.",
"However, analogous issues can arise in fully data-driven approaches, in which training corpora may also be biased towards subcommunities of speakers or writers.",
"The feature-based approach has the advantage of making any such bias easier to identify and correct.",
"Acknowledgments.",
"Thanks to Claudia Lange for sharing her annotations, and for discussion of this research.",
"Thanks to Axel Bohmann for sharing information about his work on recognizing dialect features with regular expressions.",
"Valuable feedback on this research was provided by Jason Baldridge, Dan Jurafsky, Slav Petrov, Jason Riesa, Kristina Toutanova, and especially Vera Axelrod.",
"Thanks also to the anonymous reviewers.",
"Devyani Sharma is supported in part by a Google Faculty Research Award."
] | [
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"objective",
"other",
"abstain",
"other",
"other",
"objective",
"other",
"other",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Generative dialogue systems tend to produce generic responses, which often leads to boring conversations.",
"For alleviating this issue, Recent studies proposed to retrieve and introduce knowledge facts from knowledge graphs.",
"While this paradigm works to a certain extent, it usually retrieves knowledge facts only based on the entity word itself, without considering the specific dialogue context.",
"Thus, the introduction of the context-irrelevant knowledge facts can impact the quality of generations.",
"To this end, this paper proposes a novel commonsense knowledge-aware dialogue generation model, ConKADI.",
"We design a Felicitous Fact mechanism to help the model focus on the knowledge facts that are highly relevant to the context; furthermore, two techniques, Context-Knowledge Fusion and Flexible Mode Fusion are proposed to facilitate the integration of the knowledge in the ConKADI.",
"We collect and build a large-scale Chinese dataset aligned with the commonsense knowledge for dialogue generation.",
"Extensive evaluations over both an open-released English dataset and our Chinese dataset demonstrate that our approach ConKADI outperforms the state-of-the-art approach CCM, in most experiments.",
"Nowadays, open-domain dialogue response generation systems have shown impressive potential, to endow a machine with the ability to con-verse with a human, using natural language (Chen et al., 2017).",
"Although such models have achieved promising performance, they still suffer from generating generic and boring responses, such as I don't",
"know. Such low-quality responses always reduce the attractiveness of generative dialogue systems to end-users.",
"Researchers have tried to Corresponding author: Ying Li, li.ying@pku.edu.cn tackle it from multiple aspects; for example, using the enhanced objective function (Li et al., 2016a); introducing additional contents (Xu et al., 2019).",
"However, these methods haven't solved the issue thoroughly.",
"Different from a human being, who is capable of associating the dialogue with the background knowledge in his/her mind, a machine can merely capture limited information from the surface text of the query message (Ghazvininejad et al., 2018).",
"Consequently, it is difficult for a machine to understand the query fully, and then to generate diverse and informative responses (Zhou et al., 2018).",
"To bridge the gap of the knowledge between the human and the machine, researchers have begun to introduce large-scale knowledge graphs for enhancing the dialogue generation (Zhu et al., 2017; Zhou et al., 2018; Liu et al., 2018), and they have obtained lots of impressive results.",
"Generally, the retrieval of knowledge facts is based on the entity name; in detail, the first step is to recognize entity words in the given query message, and then facts that contain the mentioned entities can be retrieved as candidates 1 .",
"Subsequently, a knowledge-aware response can be generated based on the query message and previously retrieved facts.",
"Although such a straightforward paradigm works to a certain extent, some challenges in knowledge-aware dialogue generation still keep unsolved.",
"1) An entity word usually can refer to different concepts, i.e., an entity has multiple meanings, but only one specific concept is involved in a particular context.",
"Without considering this, some pre-fetched knowledge fact candidates can be irrelevant to the context.",
"2) Even if we only consider a particular entity meaning, the related knowledge facts may cover various target topics.",
"However, some of those topics do not con-1 For example, for a mentioned entity apple in a query, the fact (apple, is a type of, fruit) or (fruit, related to, apple) can be retrieved.",
"tribute to the dialogue generation.",
"Figure 1 presents an illustrative example to demonstrate such two issues.",
"Here, a subgraph is retrieved based on the entity word Apple in the query.",
"In general, Apple can be interpreted as either a type of fruit or a brand name.",
"In this context, it is evident that Apple refers to a brand name.",
"However, some knowledge facts concerning a type of fruit are retrieved too.",
"If a model makes an inappropriate choice of irrelevant facts, the generated response will make no sense to the query message.",
"In our example, even for the entities in blue circle related to the brand name Apple, only some of them have a positive effect in the dialogue generation, e.g., Jobs should not make any contribution to the #1.",
"3) The integration of the knowledge and the dialogue generation in previous approaches is insufficient, including the way of integration, as well as the types of knowledge.",
"To tackle such challenges, this paper proposes a Context Knowledge-Aware Diverse and Informative conversation generation model, ConKADI.",
"First , we design a Felicitous Fact mechanism to help the model highlight the knowledge facts that are highly relevant to the context, that is, Felici-tous Facts.",
"Felicitous Fact mechanism generates a felicitous fact probability distribution over the retrieved facts.",
"For improving the selection of felicitous facts, human-generated answers (i.e., the ground-truth responses) are used as the posterior context knowledge to supervise the training of the prior felicitous fact probability distribution.",
"Next , Context-Knowledge Fusion is proposed to lift the role of knowledge facts in the dialogue generation, by fusing the context and the felicitous knowledge before the decoding.",
"Last , ConKADI can generate three types of words owing to the Flexible Mode Fusion module, which aims at simultaneously fusing multiple types of knowledge.",
"To summarize, Felicitous Fact mechanism can alleviate the first two issues, and the next two techniques solve the last issue.",
"Consequently, our approach can improve the utilization rate of knowledge graphs, as well as can promote the diversity and informativeness of the generated responses.",
"In the experiments, a large-scale Chinese Weibo dataset is collected and aligned with the commonsense knowledge for dialogue generation.",
"We perform extensive evaluations on two large-scale datasets: an open-released English Reddit dataset and our proposed Chinese Weibo dataset.",
"The experimental results demonstrate that our proposed ConKADI model significantly outperforms representative methods in knowledge utilization, diversity, and informativeness.",
"Especially, ConKADI exceeds the latest knowledge-aware dialogue generation model, CCM (Zhou et al., 2018), in most experiments.",
"Seq2Seq (Sutskever et al., 2014; Vinyals and Le, 2015) has been widely used in the open-domain dialogue generation.",
"However, models tend to generate generic responses (Serban et al., 2016).",
"To tackle this issue, researchers have proposed new objectives (Li et al., 2016a), enhanced decoding algorithms (Li et al., 2016b), latent-variable based methods (Zhao et al., 2017, 2018; Gao et al., 2019).",
"Introducing additional contents into the dialogue generation is also helpful.",
"(Xu et al., 2019) uses meta-words; (Zhu et al., 2019) uses the retrieved existing dialogues.",
"However, the leading cause of generating generic responses is that the model can not obtain enough background knowledge from the query message (Ghazvininejad et al., 2018; Liu et al., 2019).",
"Recently, to alleviate the lack of background knowledge, researchers have begun to introduce the knowledge into the generation.",
"The knowledge can be the unstructured knowledge texts (Ghazvinine-jad et al., 2018), the structured knowledge graphs (Zhou et al., 2018), or the hybrid of them (Liu et al., 2019).",
"The structured knowledge has the best quality, because it is generally extracted and summarized by the human.",
"The structured knowledge graph can be either domain-specific (Zhu et al., 2017; Liu et al., 2018) or open-domain (Young et al., 2018; Zhou et al., 2018).",
"ConceptNet (Speer et al., 2017) is a multilingual open-domain commonsense knowledge graph, which is designed to represent the general knowledge and to improve understanding of the meanings behind the words people use.",
"Two previous studies (Young et al., 2018; Zhou et al., 2018) have proved the feasibility of introducing commonsense knowledge into dialogue systems.",
"The first work (Young et al., 2018) is designed for retrieval-based systems; therefore, only the current state-of-the-art CCM (Zhou et al., 2018) is our direct competitor.",
"In comparison with CCM, 1) ConKADI is aware of the context when using the knowledge.",
"2) ConKADI uses human's responses as posterior knowledge in training.",
"In addition, our Felicitous Fact mechanism is different from the word/knowledge selection mechanisms previously proposed in related tasks; for example, selecting a cue word (Mou et al., 2016; Yao et al., 2017) or selecting a knowledge (Liu et al., 2019).",
"First, ConKADI can access more contextual information because our model is fully end-to-end, while previous works use independent and external modules.",
"Second, our Felicitous Fact outputs a probabilistic distribution instead of a hard singleton value, as did the previous works.",
"Formally, given a training data D of triplets, where each triplet includes a query message X = ( x 1 , . . . , x n ) , a response Y = ( y 1 , . . . , y m ) , and a set of commonsense knowledge facts F = { f 1 , . . . , f l } .",
"The training goal of knowledge-aware dialogue generation is to maximize the probability (cid:80) ( X,Y,F ) D 1 |D| p ( Y | X, F ) ; the inference goal is to find Y = arg max Y p ( Y | X, F ) .",
"Knowledge facts F are retrieved from the knowledge graph G ; each fact is organized as a triplet ( h, r, t ) .",
"The overview of ConKADI has been shown in Figure",
"2. Knowledge fact set F is retrieved by the Knowledge Retriever given the query message X .",
"The Context Encoder summarizes an utterance into contextual representations.",
"The Felicitous Fact Recognizer calculates the felicitous fact probability distribution z over the F , which is used to initialize the Decoder and guide the generation.",
"The Triple Knowledge Decoder can generate three types of words: vocabulary words, entity words, and copied words, with the Flexible Mode Fusion .",
"Knowledge Retriever: Given a query message X , if a word x i X is recognized as an entity word and can be matched to a vertex e src in the knowledge graph G , then, each neighbour e tgt Neighbour ( e src ) and the corresponding directional relation r is retrieved as a candidate fact f .",
"e src / e tgt is called as source / target entity.",
"If a word can't match any vertex, a special fact f NAF will be used.",
"Context Encoder: The Context Encoder is a bidirectional GRU network (Cho et al., 2014), which reads X or Y and outputs a contextual state sequence.",
"For simplicity, we take X as an example.",
"At the time step t , the Encoder outputs a forward state and a backward state, the concatenation of such two states h xt = [ h fwt ; h bwt ] R 2 d h 1 is regarded as the contextual state : h fwt = GRU fw ( h fwt 1 , x t , e x t ) h bwt = GRU bw ( h bwt 1 , x n t + 1 , e x n t + 1 ) (1) where x t is the word embedding of x t .",
"To enhance the semantic information, the matched entity embedding e x t of x t is also involved.",
"Finally, the contextual state sequence of X/Y is denoted as H x/y = ( h x/y 1 , . . . , h x/yn/m ) .",
"Specifically, H x is the prior context; H y is the posterior context that is only available in the training stage.",
"Felicitous Fact Recognizer: Recall the example illustrated in Figure 1 , some preliminary retrieved knowledge facts may be inappropriate in the dialogue context.",
"The Felicitous Fact Recognizer is designed to detect the facts that highly coincide with the dialogue context, i.e., Felicitous Facts.",
"The Felicitous Fact Recognizer reads the contextual information, then outputs a probability distribution z R l 1 over the F ; therefore, the i -th dimension value z [ i ] indicates the weight of f i .",
"In the training stage, the high-quality human-generated response Y is served as the posterior knowledge; hence, the posterior z post is adopted in training, the prior z prior is adopted in inference: z post = ( ( F W ft ) ([ h xn (cid:62) ; h ym (cid:62) ] W post )) (cid:62) z prior = ( ( F W ft ) ( h xn (cid:62) W prior )) (cid:62) (2) where F R l ( d e + d r + d e ) is the embedding matrix of the retrieved facts F , W ft , W post and W prior are trainable parameters, is softmax activation , Figure 2: An overview of the proposed approach ConKADI.",
"is tanh activation.",
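The following sketch mirrors the structure of Eq. 2 under our reconstruction: both context branches are projected into a shared space, matched against each fact embedding by an inner product, and normalized by softmax over the l facts; all module and parameter names are illustrative, and shapes assume a single example.

```python
# Sketch of the Felicitous Fact Recognizer (Eq. 2).
import torch
import torch.nn as nn
import torch.nn.functional as F_

class FelicitousFactRecognizer(nn.Module):
    def __init__(self, d_fact, d_ctx, d_shared=512):
        super().__init__()
        self.W_ft = nn.Linear(d_fact, d_shared, bias=False)
        self.W_prior = nn.Linear(d_ctx, d_shared, bias=False)
        self.W_post = nn.Linear(2 * d_ctx, d_shared, bias=False)

    def forward(self, facts, h_x, h_y=None):
        # facts: (l, d_fact); h_x, h_y: (d_ctx,)
        keys = torch.tanh(self.W_ft(facts))                   # (l, d_shared)
        if h_y is None:                                       # inference: prior
            query = torch.tanh(self.W_prior(h_x))
        else:                                                 # training: posterior
            query = torch.tanh(self.W_post(torch.cat([h_x, h_y])))
        return F_.softmax(keys @ query, dim=0)                # z over l facts
```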
"KullbackLeibler Divergence (Kullback and Leibler, 1951) ( KLD ) is used to force two distributions to become as close as possible.",
"L k = KLD ( z post , z prior ) (3) Context-Knowledge Fusion: To enhance the Decoder's understanding of the background knowledge, the Decoder is initialized based on the fused knowledge f (cid:62) z = z (cid:62) F and the query context: h y0 (cid:62) = tanh ([ h xn (cid:62) ; f z (cid:62) ] W init ) (4) where W init is a trainable parameter.",
"Following the previous work (Zhao et al., 2017), we adopt the Bag-of-Words Loss to ensure the accuracy of the input of the Context-Knowledge Fusion, namely, h xn and f z .",
"Meanwhile, we construct a 0-1 indicator vector I f R l 1 to supervise the training of z post , where I f [ i ] is set to 1 if the target entity of the i -th fact f i appears in the Y , otherwise",
"0. Thus, the objective is to minimize the L f given by: (cid:80) y b B log p b ( y b | h xn , f z )) | B | I f (cid:62) log( z post ) | I f | (5) where B is the word bag of Y , p b is a 2-layer MLP bow activated by softmax , which outputs the probability distribution over the vocabulary V .",
"types of words: vocabulary words, knowledgeable entity words, and copied words.",
"ConKADI first updates the internal state: h yt = g ( h yt 1 , u t 1 , c t 1 ) (6) where u t 1 (cid:62) = [ y (cid:62) t 1 ; e (cid:62) y t 1 ; h xy t 1 (cid:62) ] , and y t 1 , e y t 1 , h xy t 1 are the word embedding, the entity embedding and the pointed-then-copied source state of the last predicted token y t 1 , respectively; and c t 1 is the Attention 2 .",
"p w,t = ( elu ([ h yt ; u t 1 ; c t ] W v1 ) W v2 ) (7) where W v1 / 2 are trainable parameters, and the non-linear activation elu is proposed by (Clevert et al., 2016).",
"Knowledgeable Entity Words: An entity word can be generated by extracting the target entity of the best-matched fact f at each time step.",
"The corresponding probability distribution p k,t R l 1 over the F is calculated as: z d , t = ( ( F W fd ) ([ h yt (cid:62) ; u t 1 (cid:62) ] W d ) (cid:62) ) t = sigmoid ([ h yt (cid:62) ; u t (cid:62) ; c t (cid:62) ] W gate ) R 1 p k,t = t z + (1 . 0 t ) z d (8) 2 We have omitted the description of Attention.",
"where the previous z here serves as a static global distribution (denoted as GlFact ), z d , t is the dynamic distribution, and t is a gate to control the contribution of each distribution.",
"Copied Words: The Decoder can further point out a word x from X , and then copies the x .",
"The corresponding probability distribution p c,t R n 1 over the query message X is calculated as: p c,t = ( ( H x W cs ) ( u ct (cid:62) W ct ) (cid:62) ) u ct (cid:62) = [ h yt (cid:62) ; u t 1 (cid:62) ; c t (cid:62) ] (9) Flexible Mode Fusion: Previous three distributions can be fused by the MF ( h yt , u t 1 , c t ) , a 2-layer MLP activated by softmax .",
"MF can outputs a probability distribution ( w,t , k,t , c,t ) over three modes at each time step: p out,t = w,t p w,t + k,t p k,t + c,t p c,t (10) The proposed MF can be regarded as a multi-class classifier; therefore, the advantage of MF is the flexibility, we can additionally integrate more modes or remove existing modes by simply changing the number of classes.",
"For a more reasonable fusion, the Cross-Entropy between the ground-truth mode and the predicted distribution by MF is used to supervise the training; the corresponding Cross-Entropy loss is denoted as L m .",
"Next, we optimize the fused output distribution p out ( Y | X, F ) by minimizing the L n , which is given by: (cid:88) t t log p out,t ( y t | y t 1:1 , X, F ) + L m 2 (11) where t is a normalization term to penalize the out-of-vocabulary words, t = 1 #( unk Y ) 3 if y t is an unk , otherwise t = 1 .",
"To verify the generalization among different languages, we evaluate models not only on a public English Reddit dataset (Zhou et al., 2018), but we also collect and construct a Chinese Weibo dataset.",
"Both datasets are aligned with the commonsense knowledge graph ConcetNet (conceptnet.io), the statistics have been reported in Table",
"1. 3 #( ) is the count of Reddit Weibo #Train 1,352,961 1,019,908 #Dev/#Test 40,000 56,661 #Vocab 30,000 50,000 Batch Size 100 50 #Entity/#Relation 21,471/44 27,189/26 #Fact 149,803 696,466 Table 1: The statistics of two datasets.",
"The English Reddit: We did some filtering on the raw data: Utterances that are too short ( < 4 words) or too long ( > 30 words) were removed, and each message can be associated with at most 300 related fact triplets.",
"The Chinese Weibo: We first collected three open-sourced Weibo (weibo.com) datasets, which originally contained 4.44M (Shang et al., 2015), 1.96M (Ke et al., 2018) and 10.48M (Li and Yan, 2018) pairs of dialogue, respectively.",
"Jieba 4 was used to segment; utterances that are too short/long were removed as well.",
"Next, we crawled 4.48M entities and 13.98M facts from the ConceptNet.",
"Stop entities, and low-frequent entities are excluded.",
"For a dialogue pair, if one entity in the message and another entity in the response can be connected by a 1-hop edge in the knowledge graph, this dialogue was kept.",
"In comparison with the English Reddit, our dataset has more facts, but the relation types are quite limited; hence, we set the limit that a message can be associated with at most 150 fact triplets.",
"For two datasets, the embedding of entities and relations are learned by using TransE (Bordes et al., 2013); then, they are kept fixed in training.",
"Our experimental resources are available at the web 5 .",
"Baselines: The widely used S2S (Sutskever et al., 2014), and its Attentive version ATS2S (Luong et al., 2015).",
"We further add the bidi-MMI (Li et al., 2016a) or the diverse decoding (Li et al., 2016b) to improve the diversity of ATS2S, which are denoted as ATS2S MMI and ATS2S DD 6 .",
"Copy mechanism (Gu et al., 2016; Vinyals et al., 2015) allows Decoder to point then copy a source word.",
"GenDS is a knowledge-aware model, which can generate responses with the utilizing of entity words.",
"(Zhu et al., 2017).",
"CCM is the current state-of-the-art approach in the task of response generation with 4 https://pypi.python.org/pypi/jieba/ 5 https://github.com/pku-orangecat/ ACL2020-ConKADI 6 The best k was searched form [0 . 1 , 3 . 0] .",
"Implementation: We implemented all models except CCM, CCM was tested based on its offi-cial code 7 .",
"Most hyper-parameters are kept the same as CCM, and hyper-parameters among models are kept the same as possible.",
"In detail, the word embedding dimension is 300, Encoder is a 2-layer bidirectional GRU with 512 units, and Decoder is a 2-layer GRU with 512 units.",
"Adam is used to optimizing model with an initial learning rate lr = 0 .",
"0001 ; if perplexity begins to increase, the lr will be halved, if perplexity increases in two continuous epochs, the training will be stopped.",
"Following the CCM, the maximum epoch number is 20.",
"Objective Metrics: We evaluate the generated responses from four aspects: Knowledge Utilization ( A 1 ) : E match is the averaged number of the matched target entities per generation.",
"(Zhou et al., 2018).",
"E use further counts the source entities.",
"E recall is the ratio of recalled entities.",
"Embedding-based Relevance ( A 2a ) : Following (Liu et al., 2016), we use the Emb avg that considers the averaged word embedding, and the Emb ex that considers each dimension's extreme value.",
"Overlapping-based Relevance ( A 2b ) : BLEU-2/3 (Tian et al., 2017; Wu et al., 2017).",
"Diversity ( A 3 ): We report 7 CCM doesn't support beam-search, so we use the greedy search except ATS2S MMI and ATS2S DD use beam=10.",
"the ratio of distinct uni/bi-grams, i.e., Distinct-1/2, in all generated texts (Li et al., 2016a; Wu et al., 2018).",
"Informativeness ( A 4 ) : We report the word-level Entropy (Mou et al., 2016).",
"Relative Score: To illustrate the comprehensive performance of models, we first compute the average score of 7 baselines metric by metric ( AVG ), then, we report the arithmetic mean score: R a = 1 5 (cid:88) A i ( 1 | A i | (cid:88) m A i m j m j,AV G ) (13) and the geometric mean score: R g = ( (cid:89) A i ( (cid:89) m j A i m j m j,AV G ) 1 | Ai | ) 15 (14) 4.3 Experimental Results The objective evaluation results on the two datasets have been reported in Table",
"2. By reviewing the Relative Score , it can be seen that the overall performance of ConKADI outperforms baseline models.",
"More specifically, our ConKADI outperforms baseline models in terms of all metrics except BLEU-3 on the Chinese Weibo, and our ConKADI outperforms baseline models in terms of almost all metrics on the English Reddit.",
"In comparison with the state-of-the-art method CCM, our ConKADI increases the overall performance by 153%/95% (arithmetic/geometric mean) on the Chinese dataset, as well as increases the overall performance by 48%/25% on the English dataset.",
"Knowledge Utilization: By accessing the knowledge, three knowledge-aware models, i.e., GenDS, CCM, and ConKADI, can significantly outperform other models.",
"In comparison with GenDS and CCM, the advantages of ConKADI can be summarized as 1) ConKADI has a higher utilization of the knowledge, which can be proved by E match .",
"2) By using the point-then-copy mechanism (ConKADI vs. ConKADI cp ), ConKADI further expands the total generated entity number (E use ).",
"After adding the point-then-copy mechanism, while the E match drops by 7.5%, the overall E use increases by 10%.",
"It means ConKADI can reasonably decide whether to use a knowledge fact or copy a source word.",
"3) ConKADI is more potential to find out the accurate knowledge; hence, our E recall is much higher than the E recall of GenDS and CCM.",
"Such results can demonstrate that the proposed Felicitous Fact mechanism can help the model better focus on the facts that are relevant to the dialogue context, and increase the utilization rate of the knowledge graph and the accuracy of the knowledge selection.",
"Diversity and Informativeness: Generative models have been suffering from generating responses without enough diversity and informativeness.",
"Although previous GenDS and CCM can utilize the knowledge, they fail to solve this challenge; they even can be beaten by other baselines.",
"By contrast, our ConKADI has significantly alleviated this issue.",
"According to our ablation experiments, such notable promotion can be attributed to the proposed Context-Knowledge Fusion.",
"The more detail will be discussed in the ablation study.",
"Relevance: On the Chinese dataset, ConKADI has the best overall performance, but ConKADI's performance is not ideal on the English dataset.",
"First, we think the reason is the inherent difference of datasets; two datasets are collected from different sources and have varying densities of entity-relations (see Table 1).",
"Next, we must emphasize these metrics can only evaluate the relevance to the given reference.",
"Instead of the 1-to-1 mapping, the dialogue is undoubtedly a 1-to-n mapping; therefore, these results cannot show the generation is not consistent with the query.",
"ConKADI is a very diverse model; only use one reference to judge is unfair.",
"Similarly, this limitation has been found and explained in a recent work (Gao et al., 2019).",
"Following (Liu et al., 2019), we randomly sample 200 query messages from the test set, and then we conduct the pair-wise comparison.",
"For the variations of S2S, We remain two most representative models, ATS2S and ATS2S MMI .",
"Thus, we have 1,000 pairs in total.",
"For each pair, we invite three well-educated volunteers to judge which response is better, in terms of the following two metrics: 1) Appropriateness, which mainly considers the flu-ency and the logical relevance.",
"2) Informativeness, which considers whether the model provides new information/knowledge or not.",
"The tie is allowed, but volunteers are required to avoid it as possible.",
"The model names are masked, and the A-B order is random.",
"For the appropriateness, 2/3 agreement (i.e., the percentage of cases that at least 2 volunteers give the same label) is 95%, and the 3/3 agreement is 67.1%.",
"For the informativeness, 2/3 agreement is 97%, and the 3/3 agreement is 79.1%.",
"The results have been reported in Table",
"3. ATS2S MMI is the strongest baseline owing to the beam search and the MMI re-ranking, especially in terms of appropriateness.",
"While the generation of ATS2S MMI is more generic, it's friendly for human reading; hence, it tends to receive higher scores.",
"GenDS and CCM are far behind our model.",
"We find their generation is usually not fluent, while a lot of entities are generated.",
"Comparing two metrics, ConKADI has more notable advantages in terms of informativeness.",
"We focus on the ablation of the Felicitous Fact mechanism.",
"There are 3 factors, GlFact (using the distribution z to guide the entity word generation), CKF (Context-Knowledge Fusion), and CKF's loss L f .",
"Copy has fully removed the Felicitous Fact mechanism (i.e., above 3 factors); Base further Query #1:My cat likes bananas and bread.",
"The results have been reported in Table 5.",
"1) The performance drops significantly without using the context-knowledge fused result to initialize the Decoder (#5 #7), indicating that CKF is very important for the Decoder.",
"2) If GlFact is adopted solely, it can affect performance in turn.",
"3) L f is essential to the Copy in comparison with Base.",
"Analysis of KL Divergence: The training stage introduces posterior knowledge, which is absent during the inference.",
"Therefore, reducing the difference between such two distribution is very necessary.",
"We here check the curve of the KLD between the z prior and z post , i.e., L k .",
"A lower L k means the two distribution are closer.",
"As shown in Figure 3: 1) KLD is strongly related to the overall performance.",
"2) The importance that using the fused knowledge to initialize the Decoder (CKF) has been proved once again (#5 vs. #6).",
"Three cases are sampled in Table 4.",
"In case 1, except ATS2S MMI and our ConKADI, the remaining models have generated weird responses.",
"ATS2S MMI generated a fluent response, but this reFigure 3: The KullbackLeibler Divergence between the between the z prior and z post on Chinese Weibo against the training iteration number.",
"sponse is not very logically relevant to the query.",
"In case 2, although GenDS and CCM have generated entity words, they also generate some redundant generic patterns, namely, I'm not sure ....",
"It is perhaps because their understanding of background knowledge is still not enough.",
"Our ConKADI generates a fluent and informative response.",
"The last challenging case is sampled from the Chinese dataset.",
"Taylor Swift is a female singer, but it is an unknown word for models.",
"All generated responses are not absolutely perfect.",
"Only the generations of ATS2S MMI and ConKADI are fluent.",
"In comparison with ATS2S MMI , the generation of ConKADI provides more information; the only small flaw is ConKADI wrongly thinks Taylor Swift is a male singer.",
"To bridge the gap of the knowledge between machines and human beings in the dialogue generation,",
"generation, this paper proposes a novel knowledge-aware model ConKADI.",
"The proposed Felicitous Fact mechanism can help the ConKADI focus on the facts that are highly relevant to the dialogue context, by generating a felicitous fact probability distribution over the retrieved facts.",
"Besides, the proposed Context-Knowledge Fusion and Flexible Mode Fusion can facilitate the integration of the knowledge in the ConKADI.",
"Extensive evaluations over both an open-released English dataset and our constructed Chinese dataset demonstrate our ConKADI can significantly outperform the state-of-the-art model CCM and other baselines in most experiments.",
"Although ConKADI has achieved a notable performance, there is still much room to improve.",
"1) While ATS2S MMI is behind our ConKADI, we find MMI can effectively enhance the ATS2S; hence, in the future, we plan to verify the feasibility of the re-ranking technique for knowledge-aware models.",
"2) We will continue to promote the integration of high-quality knowledge, including more types of knowledge and a more natural integration method.",
"This work is supported by the National Key R&D Program of China (Grant No. 2017YFB1002000), and PKU-Tencent Joint Innovation Research Program.",
"Our deepest gratitude goes to the reviewers for their thoughtful suggestions, and we need to thank all our team members in FineLab for their help."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"method",
"method",
"other",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"other",
"other"
] |